CN112929558B - Image processing method and electronic device - Google Patents


Info

Publication number
CN112929558B
CN112929558B (application CN201911244547.5A)
Authority
CN
China
Prior art keywords
image
electronic device
images
processing
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911244547.5A
Other languages
Chinese (zh)
Other versions
CN112929558A (en)
Inventor
郑耀国
张古强
吴天航
杨坤
赵乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN201911244547.5A
Publication of CN112929558A
Application granted
Publication of CN112929558B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of this application provide an image processing method and an electronic device, relating to the field of electronic technology. In a dim-light shooting scene, the method can capture an image with a good visual effect in a short time, improving the user experience. The scheme is as follows: after detecting a user operation instructing it to take a photograph, the electronic device acquires N images, where N is a positive integer greater than 1. The electronic device then preprocesses the N images and performs temporal noise reduction on the preprocessed images to obtain composite images. Next, the electronic device performs image enhancement on the composite images to obtain an enhanced image, and applies spatial noise reduction and color-noise processing to the enhanced image to obtain the final captured target image. Embodiments of this application are used in dim-light image capture.

Description

Image processing method and electronic device
Technical Field
The embodiment of the application relates to the technical field of electronics, in particular to an image processing method and electronic equipment.
Background
As technology has developed, photographing capability has become an important performance index for mobile phones. Night photography is a common scenario in which a user takes photographs with a mobile phone. Because the environment is dark, a photograph taken at night in the ordinary shooting mode is dim, and the user's visual experience is poor.
At present, some manufacturers offer mobile phones with a dedicated night-scene shooting mode, in which images with a good visual effect can be captured at night. However, when a user photographs in the night-scene mode, it takes a long time to obtain the photograph, so the user experience is still poor.
Disclosure of Invention
Embodiments of this application provide an image processing method and an electronic device that can capture an image with a good visual effect in a short time in a dim-light shooting scene, thereby improving the user experience.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In one aspect, this application provides an image processing method applicable to an electronic device. The method includes: first, the electronic device detects a user operation instructing it to take a photograph, and acquires N images, where N is a positive integer greater than 1. The electronic device then preprocesses the N images; performs temporal noise reduction on the 1st and 2nd of the preprocessed N images to obtain the 1st composite image; and performs temporal noise reduction on the jth composite image and the preprocessed (j+2)th image to obtain the (j+1)th composite image, where j is a positive integer and 1 ≤ j ≤ N-2. Next, the electronic device performs image enhancement on the 1st through (N-1)th composite images to obtain an enhanced image, applies spatial noise reduction to the enhanced image, and performs color-noise processing on the spatially denoised enhanced image to obtain the captured target image.
N may be less than 16; for example, N may be 4, 5, 6, 7, 8, 9, or 10.
This scheme provides an image processing method with more noise-reduction processing. Compared with the prior-art night-scene shooting mode, it can obtain a captured image with a better visual effect from fewer frames, that is, N can be smaller. This saves multi-frame acquisition and processing time, so the user can obtain a high-quality photograph in a dim-light scene without a long wait, which improves the user experience.
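The chained temporal noise reduction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented implementation: the claims do not specify the merge operator, so a fixed-weight average stands in for it.

```python
import numpy as np

def temporal_merge(frame_a, frame_b, weight=0.5):
    """Merge two aligned frames. A fixed-weight average stands in for
    the patent's (unspecified) temporal noise-reduction operator."""
    return weight * frame_a + (1.0 - weight) * frame_b

def temporal_denoise_chain(frames):
    """Chain from the claims: composite 1 = merge(frame 1, frame 2);
    composite j+1 = merge(composite j, frame j+2), yielding N-1 composites."""
    assert len(frames) > 1
    composites = [temporal_merge(frames[0], frames[1])]
    for j in range(len(frames) - 2):           # j = 0 .. N-3
        composites.append(temporal_merge(composites[j], frames[j + 2]))
    return composites                          # N-1 composite images

# Averaging noisy copies of a constant frame reduces the noise.
rng = np.random.default_rng(0)
truth = np.full((8, 8), 100.0)
frames = [truth + rng.normal(0, 10, truth.shape) for _ in range(6)]
composites = temporal_denoise_chain(frames)
print(len(composites))  # 5 composites from 6 frames
```

Because each composite folds in one more frame, the last composite carries the most averaging and hence the least noise.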
In one possible implementation, before the electronic device detects the user operation instructing it to take a photograph, the method may further include: the electronic device detects a user operation opening the camera; the electronic device displays a shooting preview interface that includes a preview image; and the electronic device determines target image acquisition parameters from one or more preview images. Acquiring the N images may then include: the electronic device acquires the N images according to the target image acquisition parameters.
The target image acquisition parameters may include an exposure time and an exposure gain.
In other words, the electronic device need not enter a separate night-scene interface to photograph a night scene. In the ordinary shooting mode, it can automatically determine from the preview image that the current scene is a dim-light scene, automatically determine the target image acquisition parameters, and then automatically perform the subsequent image processing to obtain a higher-quality photograph, further improving the user experience.
In another possible implementation, after determining the target image acquisition parameters from the one or more preview images, the electronic device may continue acquiring images according to those parameters. After detecting the user operation instructing it to take a photograph, the electronic device may select the N images from the images already acquired.
In other words, once the target image acquisition parameters have been determined from the preview image, the electronic device can keep acquiring images with them and, when the user instructs it to take a photograph, take the N images directly from those already acquired. This saves image acquisition time and improves the user experience.
In another possible implementation, the electronic device may determine the target image acquisition parameters after detecting the user operation instructing it to take a photograph.
In other words, the electronic device may determine the target image acquisition parameters in the capture state, which can match the current scene more accurately than parameters determined in the preview state.
In another possible implementation, the electronic device determines the target image acquisition parameters from the one or more preview images as follows: if the sensitivity (ISO) of the one or more preview images is greater than a first threshold T1, and the luminance mean Ymean of the one or more preview images is less than a second threshold T2 and greater than or equal to a third threshold T3, the electronic device determines the target image acquisition parameters.
That is, the electronic device determines the target image acquisition parameters when the ISO corresponding to the preview image is greater than T1 and the Ymean of the preview image satisfies T3 ≤ Ymean < T2.
In another possible implementation, determining the target image acquisition parameters includes: if Ymean is greater than or equal to the third threshold T3 and less than a fourth threshold T4, the electronic device determines that the target image acquisition parameter is a first image acquisition parameter; if Ymean is greater than or equal to T4 and less than a fifth threshold T5, the electronic device determines that it is a second image acquisition parameter; and if Ymean is greater than or equal to T5 and less than the second threshold T2, the electronic device determines that it is a third image acquisition parameter; where T3 < T4 < T5 < T2.
In other words, the electronic device can divide the range of Ymean into tiers, with each tier corresponding to a different target image acquisition parameter, so that different preview-image Ymean values lead to different target image acquisition parameters.
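The tiering logic can be sketched as follows. All threshold values and the returned (exposure_time_ms, exposure_gain) pairs are illustrative placeholders; the patent fixes only the ordering T3 < T4 < T5 < T2 and the ISO/Ymean qualification test.

```python
def choose_acquisition_params(iso, ymean,
                              T1=3200, T2=60, T3=10, T4=25, T5=40):
    """Map preview statistics to a target-acquisition-parameter tier.
    Thresholds and the (exposure_time_ms, exposure_gain) pairs are
    placeholders, not values from the patent. Returns None when the
    scene does not qualify as a dim-light scene."""
    if not (iso > T1 and T3 <= ymean < T2):
        return None                      # not a dim-light scene
    if T3 <= ymean < T4:                 # darkest tier
        return (100, 8.0)                # first image acquisition parameter
    if T4 <= ymean < T5:
        return (60, 4.0)                 # second image acquisition parameter
    return (30, 2.0)                     # third image acquisition parameter

print(choose_acquisition_params(iso=6400, ymean=15))  # darkest tier
print(choose_acquisition_params(iso=6400, ymean=50))  # brightest dim tier
print(choose_acquisition_params(iso=100, ymean=50))   # None: not dim-light
```

Darker tiers plausibly trade longer exposures and higher gains for more light, but the concrete pairings are implementation choices.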
In another possible implementation, preprocessing the N images includes: the electronic device extracts features from each of the N images; using those features, it registers each image other than a reference image to the reference image, where the reference image is one of the N images; and it performs ghost detection on each registered image.
That is, the electronic device may preprocess each of the N images by feature extraction, image registration, and ghost detection, so that it can subsequently perform noise reduction and image enhancement on the preprocessed images.
In another possible implementation, performing image enhancement on the 1st through (N-1)th composite images to obtain an enhanced image includes: the electronic device nonlinearly accumulates the brightness of the 1st through (N-1)th composite images to obtain an accumulated image, and linearly compresses the accumulated image to obtain the enhanced image.
That is, the electronic device brightens the image through the enhancement processing and then, through linear compression, keeps its brightness within a reasonable range to facilitate further processing.
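As a rough NumPy sketch of this step: the gamma-style accumulation curve below is an assumption (the patent states only that brightness accumulation is nonlinear, per the gain curve of Fig. 5), followed by linear compression back into the displayable range.

```python
import numpy as np

def enhance(composites, gamma=0.6):
    """Nonlinear brightness accumulation followed by linear compression.
    A normalized gamma < 1 is assumed as the nonlinearity: it lifts dark
    regions more than bright ones, mimicking a shadow-boosting gain curve."""
    acc = np.zeros_like(composites[0], dtype=float)
    for img in composites:
        acc += 255.0 * (img / 255.0) ** gamma   # nonlinear accumulation
    # Linear compression back to [0, 255].
    return acc * (255.0 / acc.max())

composites = [np.array([[10.0, 128.0, 250.0]])] * 3
out = enhance(composites)
print(out.round(1))
```

After compression the brightest pixel sits at 255, while the ratio of shadow to highlight values has increased relative to the input, which is the brightening effect described above.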
In another possible implementation, performing color-noise processing on the spatially denoised enhanced image to obtain the target image includes: the electronic device performs the color-noise processing according to noise-reduction parameters, where different target image acquisition parameters correspond to different noise-reduction parameters.
That is, the electronic device may select noise-reduction parameters adapted to the target image acquisition parameters and use them for the color-noise processing of the spatially denoised enhanced image.
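Color (chroma) noise is commonly suppressed by smoothing only the chroma planes while keeping luma sharp. The sketch below illustrates that idea with an assumed BT.601-style conversion and a box filter; the patent specifies neither, only that the noise-reduction parameters track the acquisition parameters.

```python
import numpy as np

def suppress_color_noise(rgb, radius=2):
    """Sketch of chroma noise suppression: split into luma/chroma,
    smooth only the chroma planes, then recombine. The BT.601 weights
    and the box filter are assumptions for illustration."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luma (kept sharp)
    u, v = b - y, r - y                        # chroma (to be smoothed)

    def box_blur(ch):
        pad = np.pad(ch, radius, mode="edge")
        out = np.zeros_like(ch)
        k = 2 * radius + 1
        for i in range(ch.shape[0]):
            for j in range(ch.shape[1]):
                out[i, j] = pad[i:i + k, j:j + k].mean()
        return out

    u, v = box_blur(u), box_blur(v)
    # Invert: r = v + y, b = u + y, g from the luma equation.
    out = np.stack([v + y, y - 0.194 * u - 0.509 * v, u + y], axis=-1)
    return np.clip(out, 0, 255)

base = np.full((12, 12, 3), 128.0)
print(np.allclose(suppress_color_noise(base), base, atol=1e-6))  # True
```

A larger blur radius corresponds to a stronger noise-reduction parameter, which is one plausible way a parameter could be tied to the acquisition tier.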
In another aspect, this technical solution provides an electronic device, including: one or more processors; a memory; a plurality of application programs; and one or more computer programs stored in the memory. The one or more computer programs include instructions which, when executed by the electronic device, cause it to perform the following steps: detecting a user operation instructing it to take a photograph; acquiring N images, where N is a positive integer greater than 1; preprocessing the N images; performing temporal noise reduction on the 1st and 2nd of the preprocessed N images to obtain the 1st composite image; performing temporal noise reduction on the jth composite image and the preprocessed (j+2)th image to obtain the (j+1)th composite image, where j is a positive integer and 1 ≤ j ≤ N-2; performing image enhancement on the 1st through (N-1)th composite images to obtain an enhanced image; applying spatial noise reduction to the enhanced image; and performing color-noise processing on the spatially denoised enhanced image to obtain the captured target image.
N may be less than 16; for example, N may be 4, 5, 6, 7, 8, 9, or 10.
In one possible implementation, before detecting the user operation instructing it to take a photograph, the electronic device further performs the following steps: detecting a user operation opening the camera; displaying a shooting preview interface that includes a preview image; and determining target image acquisition parameters from one or more preview images. Acquiring the N images then includes: acquiring the N images according to the target image acquisition parameters.
The target image acquisition parameters may include an exposure time and an exposure gain.
In another possible implementation, determining the target image acquisition parameters from the one or more preview images includes: if the sensitivity (ISO) corresponding to the one or more preview images is greater than a first threshold T1, and the luminance mean Ymean of the one or more preview images is less than a second threshold T2 and greater than or equal to a third threshold T3, determining the target image acquisition parameters.
In another possible implementation, determining the target image acquisition parameters includes: if Ymean is greater than or equal to the third threshold T3 and less than a fourth threshold T4, determining that the target image acquisition parameter is a first image acquisition parameter; if Ymean is greater than or equal to T4 and less than a fifth threshold T5, determining that it is a second image acquisition parameter; and if Ymean is greater than or equal to T5 and less than the second threshold T2, determining that it is a third image acquisition parameter; where T3 < T4 < T5 < T2.
In another possible implementation, preprocessing the N images includes: extracting features from each of the N images; using those features, registering each image other than a reference image to the reference image, where the reference image is one of the N images; and performing ghost detection on each registered image.
In another possible implementation, performing image enhancement on the 1st through (N-1)th composite images to obtain an enhanced image includes: nonlinearly accumulating the brightness of the 1st through (N-1)th composite images to obtain an accumulated image, and linearly compressing the accumulated image to obtain the enhanced image.
In another possible implementation, performing color-noise processing on the spatially denoised enhanced image to obtain the target image includes: performing the color-noise processing according to noise-reduction parameters, where different target image acquisition parameters correspond to different noise-reduction parameters.
In another aspect, this disclosure provides an image processing apparatus included in an electronic device, with the function of implementing the behavior of the electronic device in the above aspect and its possible implementations. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above functions, such as a detection module or unit, a display module or unit, and a processing module or unit.
In another aspect, this technical solution provides an electronic device, including: one or more processors; a memory; a plurality of application programs; and one or more computer programs stored in the memory. The one or more computer programs include instructions which, when executed by the electronic device, cause it to perform the image processing method in any possible implementation of the above aspect.
In another aspect, this disclosure provides a computer-readable storage medium including computer instructions which, when run on an electronic device, cause the electronic device to perform the image processing method in any possible implementation of any of the above aspects.
In another aspect, this disclosure provides a computer program product which, when run on an electronic device, causes the electronic device to perform the image processing method in any possible design of the above aspects.
Drawings
Fig. 1 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of this application;
Fig. 2 is a flowchart of an image processing method according to an embodiment of this application;
Fig. 3A is a schematic diagram of a display interface according to an embodiment of this application;
Fig. 3B is a schematic diagram of another display interface according to an embodiment of this application;
Fig. 3C is an image captured in a dim-light shooting scene according to an embodiment of this application;
Fig. 4 is a flowchart of another image processing method according to an embodiment of this application;
Fig. 5 is a schematic diagram of a nonlinear accumulation gain curve according to an embodiment of this application;
Fig. 6A is an image captured in a dim-light shooting scene according to an embodiment of this application;
Fig. 6B is an image captured in a dim-light shooting scene according to the prior art;
Fig. 7 is a flowchart of another image processing method according to an embodiment of this application;
Fig. 8 is a schematic structural diagram of another electronic device according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the description of these embodiments, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "a plurality of" means two or more.
Night photography is a common scenario in which a user takes photographs with a mobile phone. A mobile phone with a night-scene shooting mode can synthesize multiple frames (for example, 16 frames) using a night-scene algorithm to achieve image enhancement and noise suppression, and thereby obtain a photograph with a better visual effect. The night-scene algorithm mainly applies simple noise reduction and brightness accumulation to the multi-frame input. Because many frames are involved, acquiring them and processing them with the night-scene algorithm takes a long time. As a result, after tapping the shutter control in night-scene mode, the user must wait a long time for the captured target image, and the user experience is poor.
Embodiments of this application provide an image processing method applicable to an electronic device. After detecting a user operation instructing it to take a photograph, the electronic device acquires N images, where N is a positive integer greater than 1. The electronic device then preprocesses the N images and performs temporal noise reduction on the preprocessed images to obtain composite images. Next, it performs image enhancement on the composite images to obtain an enhanced image, and applies spatial noise reduction and color-noise processing to the enhanced image to obtain the final captured target image, which has good brightening and noise-suppression effects. Compared with a night-scene mode that obtains its result through simple noise reduction and brightness accumulation over many frames (for example, 16), this method includes more noise-reduction operations and can obtain a visually better photograph from fewer frames, that is, N can be smaller. This saves multi-frame acquisition and processing time, so the user can obtain a high-quality photograph in a dim-light scene without a long wait, which improves the user experience.
In addition, because the user must hold the phone still while photographing, reducing the waiting time means the user only needs to keep the phone still for a short period rather than the long period required in the prior art, further improving the user experience.
Moreover, fewer frames means a lower probability of image-quality degradation caused by hand shake, and a lower proportion of registration failures caused by ghosting from moving objects or large-area motion, which helps produce a higher-quality night image. Processing fewer frames also has a lower peak memory footprint, so the method can be adapted to mid-range and low-end devices with weaker computing capability.
By contrast, in the prior art the night-scene mode acquires many frames, which increases the probability of quality degradation from hand shake and the proportion of registration failures caused by ghosting from moving objects or large-area motion, affecting the quality of the night image. Furthermore, processing many frames with the night-scene algorithm has a high peak memory footprint and is difficult to adapt to mid-range or low-end phones with weaker computing capability.
The image processing method provided in the embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, and personal digital assistants (PDAs); the embodiments of this application do not limit the specific type of the electronic device.
Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, a memory 120, an antenna 1, an antenna 2, a mobile communication module 130, a wireless communication module 140, a sensor module 150, a camera 160, a display 170, keys 180, an indicator 190, and the like. The sensor module 150 may include a gyroscope sensor 150A, a fingerprint sensor 150B, a touch sensor 150C, a pressure sensor 150D, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated in one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The memory 120 is used for storing application program codes for executing the scheme of the present application, and is controlled by the processor 110 to execute. The memory 120 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that may store static information and instructions, a RAM or other type of dynamic storage device that may store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disc, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor via a bus. The memory may also be integral to the processor.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 130, the wireless communication module 140, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 130 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 130 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 130 can receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 130 can also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 130 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 130 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 140 may provide a solution for wireless communication applied to the electronic device 100, including wireless local area network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 140 may be one or more devices integrating at least one communication processing module. The wireless communication module 140 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 140 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation through the antenna 2.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 130 and antenna 2 is coupled to the wireless communication module 140, so that the electronic device 100 can communicate with networks and other devices via wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The gyro sensor 150A may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 150A. The gyro sensor 150A may be used for anti-shake photographing. For example, when the shutter is pressed, the gyro sensor 150A detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate for according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement, thereby achieving anti-shake. The gyro sensor 150A may also be used for navigation and motion-sensing gaming scenarios.
The fingerprint sensor 150B is used to collect a fingerprint. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like.
The touch sensor 150C is also referred to as a "touch panel". The touch sensor 150C may be disposed on the display screen 170, and the touch sensor 150C and the display screen 170 together form a touchscreen, also called a "touch-controlled screen". The touch sensor 150C is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 170. In other embodiments, the touch sensor 150C may be disposed on a surface of the electronic device 100 at a location different from that of the display screen 170.
The pressure sensor 150D is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 150D may be disposed on the display screen 170. The pressure sensor 150D can be of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. A capacitive pressure sensor may include at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 150D, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 170, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 150D. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 150D. In some embodiments, touch operations that act on the same touch position but have different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
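The intensity-dependent dispatch described above can be sketched as a simple predicate. This is a minimal illustration, not the patent's implementation; the function name, the normalized intensity scale, and the threshold value are all assumptions:

```python
def short_message_action(intensity, first_pressure_threshold=0.5):
    """Map the intensity of a touch on the short-message icon to an
    instruction, per the example above.  `intensity` is assumed to be a
    normalized pressure reading; the threshold value is illustrative.
    """
    if intensity < first_pressure_threshold:
        return "view_short_message"
    # Intensity at or above the threshold triggers the second instruction.
    return "new_short_message"
```

The same pattern extends to any number of pressure bands by adding thresholds.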
The electronic device 100 may implement a photographing function through the ISP, the camera 160, the video codec, the GPU, the display screen 170, the application processor, and the like.
The ISP is used to process the data fed back by the camera 160. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 160.
The camera 160 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 160, N being a positive integer greater than 1.
The electronic device 100 implements display functions via the GPU, the display screen 170, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 170 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 170 is used to display images, video, and the like. The display screen 170 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 170, N being a positive integer greater than 1.
The keys 180 include a power key, a volume key, and the like. The keys 180 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The indicator 190 may be an indicator light, and may be used to indicate a charging status, a change in power, or a message, a missed call, a notification, etc.
In this embodiment of the present application, the camera 160 may capture multiple frames of initial dim-light images, and the processor 110 performs image processing according to the multiple frames of initial dim-light images, where the image processing may include feature extraction, image registration, ghost detection, time domain noise reduction, image enhancement, spatial domain noise reduction, color noise point processing, and the like, and through the image processing, a target image with a better image enhancement effect and noise suppression effect is obtained. The processor 110 may then control the display screen 170 to present the processed target image, which is the image captured in the dark scene.
For convenience of understanding, the following embodiments of the present application will specifically describe an image processing method provided by the embodiments of the present application by taking an electronic device having a structure shown in fig. 1 as an example, with reference to the accompanying drawings.
In some embodiments, the image processing method may include: the electronic device detects an operation of a user instructing to take a photograph and acquires N images, where N is a positive integer greater than 1. The electronic device then preprocesses the N images; performs time-domain noise reduction according to the 1st image and the 2nd image in the preprocessed N images to obtain the 1st composite image; and performs time-domain noise reduction according to the jth composite image and the preprocessed (j+2)th image to obtain the (j+1)th composite image, where j is a positive integer and 1 ≤ j ≤ N-2. The electronic device then performs image enhancement processing on the 1st to (N-1)th composite images to obtain an enhanced image, and performs spatial-domain noise reduction processing on the enhanced image. Finally, the electronic device performs color noise processing on the enhanced image after the spatial-domain noise reduction processing to obtain the photographed target image. Compared with prior-art night-scene shooting algorithms, this method includes more noise reduction operations and can obtain a captured image with a better visual effect from fewer frames. The acquisition and processing time of the multi-frame images can therefore be reduced, so that a user can obtain a high-quality captured image in a dim-light shooting scene without a long wait, which improves user experience.
In other embodiments, referring to fig. 2, the image processing method may include:
201. the electronic device detects an operation of a user to turn on the camera.
As mentioned above, the electronic device may be a mobile phone, a tablet computer, or another device. The following description takes a mobile phone as the electronic device. Referring to fig. 3A, when the electronic device is a mobile phone, fig. 3A shows a graphical user interface (GUI) of the mobile phone, which is the desktop 301 of the mobile phone. For example, the operation of the user to open the camera may be the user clicking the icon 302 of the camera application (APP) on the desktop 301. As another example, the operation of the user to turn on the camera may be a voice instruction of the user. As yet another example, the operation may be a gesture operation of the user, for example, drawing a circular track on the desktop 301.
202. The electronic equipment displays a shooting preview interface, and the shooting preview interface comprises a preview image.
Illustratively, after the mobile phone detects an operation of opening the camera by the user, the camera application may be started, and another GUI as shown in fig. 3B is displayed, which may be referred to as a shooting preview interface 303. A preview image 304 may be included on the capture preview interface 303.
203. The electronic device determines target image acquisition parameters from the one or more preview images.
The target image acquisition parameters may include exposure time, exposure gain, and the like. Exposure is a process of sensing light by an image sensor in an electronic device. In the exposure process, the image sensor senses light and converts a light signal into an electric signal. The exposure time is the time during which the image sensor is sensitive to light. Controlling the exposure time can control the total luminous flux. The exposure gain is a coefficient for amplifying an electric signal output from the image sensor. Generally, the darker the shooting scene is, the larger the exposure time and the exposure gain are, so that the image sensor can collect more optical signals and fully amplify the electrical signals converted from the optical signals, and the electronic device can convert the electrical signals into images for processing.
The electronic device determines the target image acquisition parameter according to the one or more preview images, and may include: if the ISO corresponding to the one or more preview images is greater than the first threshold value T1, and the luminance mean value Ymean of the one or more preview images is less than the second threshold value T2 and greater than or equal to the third threshold value T3, the electronic device determines the target image acquisition parameter.
ISO is used to indicate the sensitivity of the image sensor to light. Typically, in a dim-light shooting scenario, a larger ISO needs to be selected to capture brighter images. The luminance mean value Ymean of one or more preview images is used to indicate the average brightness of the preview images. It can be understood that, in a dim-light shooting scene, since the ambient light is dark, the average brightness of the preview image is small. In an extremely dark scene, that is, when the Ymean of the preview image is very small, the effective image information that the image sensor can capture is very little and the image noise is severe; in this case, even executing the image processing method provided by the embodiment of the present application cannot yield a high-quality captured image. Therefore, the electronic device continues to execute the image processing method of the embodiment of the present application only when the Ymean of the preview image is within a certain interval. For example, the electronic device may determine the target image acquisition parameter only when the Ymean of the preview image is less than the second threshold T2 and greater than or equal to the third threshold T3. When the Ymean of the preview image is smaller than the third threshold T3, it indicates that the brightness of the preview image is extremely dark, and the electronic device does not execute the image processing method described in the embodiment of the present application, but photographs the image in the ordinary photographing mode instead.
The first threshold T1, the second threshold T2, and the third threshold T3 are preset values. For example, the first threshold T1 may be 4000, the second threshold T2 may be 50, and the third threshold T3 may be 5. It can be understood that the first threshold T1, the second threshold T2, and the third threshold T3 may also be other values, and the specific values of the first threshold T1, the second threshold T2, and the third threshold T3 are not limited in this embodiment of the application.
That is, the electronic device determines the target image acquisition parameter when the ISO corresponding to the preview image is greater than the first threshold T1 and the Ymean of the preview image is less than the second threshold T2 and greater than or equal to the third threshold T3.
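The trigger condition above can be sketched as a simple predicate. The threshold values below (T1=4000, T2=50, T3=5) are the illustrative examples given in the text; a real device would tune them per sensor:

```python
def should_use_dim_light_mode(iso, ymean, t1=4000, t2=50, t3=5):
    """Decide whether the dim-light pipeline should run.

    iso   : sensitivity reported for the preview image
    ymean : mean luminance of the preview image(s), on a 0-255 scale
    Returns True only when ISO exceeds T1 and Ymean lies in [T3, T2).
    """
    return iso > t1 and t3 <= ymean < t2
```

For Ymean below T3 (extremely dark) or at or above T2 the predicate returns False and the ordinary photographing mode is used instead.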
Illustratively, the electronic device determining the target image acquisition parameters may include: if the Ymean of the preview image is greater than or equal to the third threshold T3 and less than the fourth threshold T4, the electronic device determines that the target image acquisition parameter is the first image acquisition parameter. If Ymean is greater than or equal to the fourth threshold T4 and less than the fifth threshold T5, the electronic device determines the target image acquisition parameter as the second image acquisition parameter. If Ymean is greater than or equal to the fifth threshold T5 and less than the second threshold T2, the electronic device determines the target image acquisition parameter as the third image acquisition parameter. Wherein T3 < T4 < T5 < T2.
The fourth threshold T4 and the fifth threshold T5 are both preset values, and the embodiment of the present application does not limit their specific values. The first, second, and third image acquisition parameters are the different target image acquisition parameters corresponding to different value intervals of Ymean. Since Ymean represents the luminance mean value of the preview image, that is, Ymean corresponds to the brightness of the previewed scene, and different scene brightnesses should match different target image acquisition parameters, the electronic device can adapt different target acquisition parameters to different value intervals of Ymean. This mapping of different Ymean intervals to different target acquisition parameters may be referred to simply as "gear division". The above example shows that, through the second threshold T2, the third threshold T3, the fourth threshold T4, and the fifth threshold T5, the electronic device maps the value range of the Ymean of the preview image to 3 different target image acquisition parameters, that is, divides 3 gears. It can be understood that the manner in which the electronic device divides gears according to the value range of the Ymean of the preview image is not limited to the above example, that is, the electronic device may divide any number of different gears. For example, the electronic device may divide a number of different gears with fewer or more thresholds, e.g., 2, 3, 4, 5, or 6 gears. The embodiment of the present application does not limit the number of gears divided by the electronic device.
That is to say, the electronic device can perform grading according to the value range of the Ymean of the preview image, and different gears correspond to different target image acquisition parameters, so that the electronic device can determine different target image acquisition parameters according to different values of the Ymean of the preview image.
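The gear division can be sketched as an interval lookup from Ymean to acquisition parameters. The boundaries reuse T3=5 and T2=50 from the text, but T4=15, T5=30 and the exposure-time/gain values are purely illustrative assumptions (the text only requires T3 < T4 < T5 < T2, with darker gears getting longer exposure and higher gain):

```python
def select_capture_params(ymean, gears=None):
    """Map the Ymean interval ("gear") to target image acquisition
    parameters (exposure time in ms and exposure gain).

    All numeric values here are illustrative assumptions, not values
    from the patent.
    """
    if gears is None:
        gears = [
            (5, 15, {"exposure_time_ms": 200, "exposure_gain": 8.0}),  # darkest gear
            (15, 30, {"exposure_time_ms": 120, "exposure_gain": 4.0}),
            (30, 50, {"exposure_time_ms": 60, "exposure_gain": 2.0}),  # brightest gear
        ]
    for lo, hi, params in gears:
        if lo <= ymean < hi:
            return params
    return None  # outside the dim-light range; use the ordinary mode
```

Adding more (lo, hi, params) tuples yields finer gear division without changing the lookup.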
204. The electronic equipment detects the operation of taking a picture instructed by the user.
After the electronic device determines the target image acquisition parameters according to the one or more preview images, the electronic device may detect an operation of the user instructing to take a picture. The operation of the user instructing to take a picture may take various forms. Illustratively, referring to fig. 3B, a capture control 305 may also be included on the capture preview interface 303. The operation of the user instructing to take a picture may be an operation of the user clicking the shooting control 305; in the shooting mode, after the mobile phone detects that the user clicks the shooting control 305, the mobile phone executes a shooting operation. Still illustratively, the operation of the user instructing to take a picture may be a voice instruction of the user. Still illustratively, the operation may be a gesture operation of the user, for example, drawing a check mark on the shooting preview interface 303. The embodiment of the present application does not limit the form of the operation by which the user instructs the photographing.
205. The electronic equipment acquires N images according to the target image acquisition parameters; wherein N is a positive integer greater than 1.
For example, after the electronic device detects that the user instructs to take a picture, the electronic device may acquire N images according to the determined target image acquisition parameters. Illustratively, N may be less than 16; for example, N may be 4, 5, 6, 7, 8, 9, 10, etc.
The process shown in steps 203 to 205, in which the electronic device determines target image acquisition parameters from the preview image and acquires N images according to those parameters after detecting the operation of the user instructing to take a picture, may be referred to as an auto exposure (AE) process. That is, the electronic device may automatically determine the target image acquisition parameters according to the preview image; then, after the user instructs to take a picture, the electronic device may automatically acquire N images according to the determined parameters. As previously described, the target image acquisition parameters may include the exposure time and exposure gain; that is, the electronic device may automatically acquire the N images based on the determined exposure time and exposure gain.
In some embodiments, after step 203, i.e., after the electronic device determines the target image acquisition parameters from the one or more preview images, the electronic device may continue to acquire images according to the target image acquisition parameters. After the electronic device detects that the user instructs to take a picture, that is, after step 204, the electronic device may acquire the latest N images acquired according to the target image acquisition parameters before the moment when the user instructs to take a picture.
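In embodiments where the device keeps capturing frames with the target parameters before the shutter press, the latest N frames can be held in a fixed-size buffer so they are already available at the moment the user instructs to photograph. A minimal sketch, assuming a per-frame callback; the class and method names are hypothetical, not from the patent:

```python
from collections import deque

class PreviewRingBuffer:
    """Retain only the most recent N frames captured with the target
    acquisition parameters (illustrative helper, not part of the
    patent's claimed structure)."""

    def __init__(self, n):
        # deque with maxlen silently drops the oldest frame on overflow
        self.frames = deque(maxlen=n)

    def on_frame(self, frame):
        """Called for every frame captured before the shutter press."""
        self.frames.append(frame)

    def snapshot(self):
        """Called when the photograph is requested: returns the latest
        N frames, oldest first."""
        return list(self.frames)
```

With this, "acquiring" the N images at shutter time is just a `snapshot()` of frames that were captured earlier.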
In other embodiments, the electronic device may determine the target image acquisition parameter after detecting an operation of the user instructing to take a picture. That is, the electronic device may not determine the target image capturing parameter using the preview image in step 203, but may determine the target image capturing parameter from the image captured in the photographing state after step 204.
206. The electronic device pre-processes the N images.
The electronic device preprocesses the N images, which may include, for example, feature extraction, image registration, and ghost detection.
Exemplarily, referring to fig. 4, step 206 may specifically include:
401. the electronic device extracts features of each of the N images.
Illustratively, the electronic device may extract features such as speeded up robust features (SURF) or scale-invariant feature transform (SIFT) features from each of the N images. It can be understood that the features of each of the N images are not limited to SURF or SIFT, and the embodiment of the present application does not limit the types of the features of each of the N images.
402. The electronic device uses the features of each image to register each image other than the reference image with the reference image.
Image registration refers to the alignment of two or more images of the same target in spatial position. The electronic device may use the features extracted from each of the N images to perform image registration, registering each image with a reference image. The reference image may be one of the N images. For example, the electronic device may set the flag of the 1st image of the N images to 0, the flag of the last image to 2, and the flags of the remaining images to 1. For example, the reference image may be the 1st image of the N images, that is, the image whose flag is 0.
403. The electronic device performs ghost detection on each image after image registration.
Ghost detection can be used to calculate the deviation between each image after image registration and neighboring images due to moving objects. The deviation can be expressed in terms of a ghost area. The larger the ghost area, the larger the deviation. The moving object is a moving person or object existing in a scene corresponding to the N images. For example, after the electronic device detects that the user instructs to take a picture, when the electronic device acquires N images according to the target image acquisition parameters, a running person exists in the shooting scene, and ghost images may exist in the N images acquired by the electronic device.
That is, the electronic device may pre-process each of the N images by feature extraction, image registration, and ghost detection, such that the electronic device subsequently performs image noise reduction and image enhancement processing on the pre-processed images.
207. The electronic device performs time-domain noise reduction according to the 1st image and the 2nd image in the preprocessed N images to obtain the 1st composite image.
After the electronic device preprocesses the N images, the electronic device may perform time-domain noise reduction on the 1 st image and the 2 nd image of the preprocessed N images to obtain a 1 st synthesized image. Time-domain noise reduction is noise reduction analysis based on image time sequence, and noise points which fluctuate randomly in an image can be suppressed.
According to the foregoing, after the electronic device preprocesses the N images, the electronic device may detect the area of the ghost region according to the 1st image and the 2nd image. Illustratively, the electronic device performing time-domain noise reduction according to the 1st image and the 2nd image in the preprocessed N images may include the following. When the detected ghost area is smaller than a preset value, the electronic device may perform time-domain fusion on the 1st image and the 2nd image of the N images. The time-domain fusion may be adding the time-domain sequence and then averaging; it can be understood that the manner of time-domain fusion is not limited thereto, and the embodiment of the present application does not limit it. Or, when the ghost area is greater than or equal to the preset value, the electronic device may retain the content of the 1st image within the ghost region, and fuse the 1st image with the regions of the 2nd image other than the ghost region. That is, when the ghost area is large, the electronic device does not perform time-domain fusion on the ghost region, but performs time-domain fusion only on the regions other than the ghost region. This is because, when the ghost area is large, time-domain fusion of the ghost region would produce a severe ghosting phenomenon, affecting the quality of the 1st composite image.
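A minimal numpy sketch of this ghost-aware two-frame fusion, using simple averaging as the time-domain fusion and an assumed per-pixel difference threshold standing in for the ghost detection of step 403 (the threshold and area-limit values are illustrative, not from the patent):

```python
import numpy as np

def temporal_fuse(ref, other, ghost_thresh=25, area_limit=0.02):
    """Fuse two registered frames with ghost handling.

    A per-pixel ghost mask is taken where the absolute difference
    exceeds ghost_thresh.  If the masked fraction of the frame reaches
    area_limit, the reference frame's content is kept inside the ghost
    region and the average is used elsewhere; otherwise the two frames
    are simply averaged everywhere.
    """
    ref = ref.astype(np.float32)
    other = other.astype(np.float32)
    ghost = np.abs(ref - other) > ghost_thresh
    fused = (ref + other) / 2.0
    if ghost.mean() >= area_limit:
        # Large ghost area: keep the reference content there to avoid
        # a visible ghosting artifact in the composite image.
        fused[ghost] = ref[ghost]
    return fused
```

The same function can serve as the fusion step between a composite image and the next preprocessed frame.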
208. The electronic device performs time-domain noise reduction according to the jth composite image and the preprocessed (j+2)th image to obtain the (j+1)th composite image.
Wherein j is a positive integer and 1 ≤ j ≤ N-2. Illustratively, step 208 may include: the electronic device detects the ghost area according to the jth composite image and the preprocessed (j+2)th image. When the detected ghost area is smaller than the preset value, the electronic device may perform time-domain fusion on the jth composite image and the preprocessed (j+2)th image. Or, when the detected ghost area is greater than or equal to the preset value, the electronic device may retain the content of the 1st image within the ghost region, and fuse the jth composite image with the regions of the preprocessed (j+2)th image other than the ghost region.
Based on step 208, the electronic device may perform time-domain noise reduction according to the 1st composite image and the preprocessed 3rd image to obtain the 2nd composite image; perform time-domain noise reduction according to the 2nd composite image and the preprocessed 4th image to obtain the 3rd composite image; and so on, until it performs time-domain noise reduction according to the (N-2)th composite image and the preprocessed Nth image to obtain the (N-1)th composite image. In this way, the electronic device finally obtains the 1st to (N-1)th composite images.
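The iterative chain of steps 207 and 208 can be sketched as follows; `fuse` stands for any two-frame time-domain fusion function (for example, a simple average), and the function name is an illustrative choice:

```python
def temporal_denoise_chain(frames, fuse):
    """Run the iterative time-domain noise reduction of steps 207-208.

    frames : list of N preprocessed images (N > 1)
    fuse   : callable fusing two frames into one
    Returns the N-1 composite images: composites[0] fuses frames 0
    and 1, and composites[j] fuses composites[j-1] with frame j+1.
    """
    composites = [fuse(frames[0], frames[1])]
    for j in range(2, len(frames)):
        composites.append(fuse(composites[-1], frames[j]))
    return composites
```

Note that each new frame is fused against the running composite, not against the previous raw frame, so noise is suppressed cumulatively.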
209. The electronic device performs image enhancement processing on the 1st to (N-1)th composite images to obtain an enhanced image.
After the time-domain noise reduction, the electronic device performs image enhancement processing on the 1st to (N-1)th composite images to obtain an enhanced image. The image enhancement processing can be used to raise the brightness of the image, thereby improving the details of dark regions.
In some embodiments, step 209 may include: the electronic device performs nonlinear accumulation of brightness on the 1st to (N-1)th composite images to obtain an accumulated image, and then linearly compresses the accumulated image to obtain the enhanced image.
For example, the electronic device may perform nonlinear accumulation of luminance on the 1st to (N-1)th composite images through the nonlinear accumulation gain curve shown in fig. 5. Referring to fig. 5, the horizontal axis represents image brightness and the vertical axis represents the accumulation weight. Different brightnesses have different accumulation weights: darker regions have larger weights and brighter regions have smaller weights, so that the information of the dark regions of the image can be enhanced while keeping the bright regions from being overexposed. It can be understood that the manner of performing nonlinear accumulation on the 1st to (N-1)th composite images is not limited to the example shown in fig. 5; the electronic device may also implement the nonlinear accumulation in other manners, which is not limited in this embodiment of the present application.
After obtaining the accumulated image through nonlinear accumulation, the electronic device may linearly compress the accumulated image to obtain the enhanced image. As can be seen from fig. 5, assuming that the original luminance range of the N images is [0, 255], the luminance range of the accumulated image may exceed [0, 255] after the 1st to (N-1)th composite images are nonlinearly accumulated. Thus, the electronic device can linearly compress the accumulated image so that the resulting enhanced image remains in the range [0, 255]. The embodiment of the present application does not limit the way in which the electronic device linearly compresses the accumulated image.
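A numpy sketch of step 209's two stages. A simple linear weight curve (weight 1 at black decaying to 0 at white) is used here as an illustrative stand-in for the gain curve of fig. 5, whose exact shape the text does not specify numerically:

```python
import numpy as np

def enhance(composites):
    """Nonlinear brightness accumulation of the composite images
    followed by linear compression back to [0, 255].

    Dark pixels accumulate with large weights and bright pixels with
    small ones, so shadows are lifted without blowing out highlights.
    """
    acc = composites[0].astype(np.float32).copy()
    for img in composites[1:]:
        img = img.astype(np.float32)
        w = 1.0 - img / 255.0          # darker pixels get larger weights
        acc += w * img
    # The accumulated values can exceed 255, so compress linearly.
    peak = float(acc.max())
    if peak > 255.0:
        acc *= 255.0 / peak
    return acc
```

Any monotonically decreasing weight curve gives the same qualitative behavior; the curve shape controls how aggressively dark regions are boosted.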
210. The electronic device performs spatial-domain noise reduction processing on the enhanced image.
After the electronic device performs image enhancement processing on the 1st to (N-1)th composite images, the electronic device may perform spatial-domain noise reduction processing on the enhanced image to further suppress image noise. Spatial-domain noise reduction is noise reduction analysis in the spatial domain; it is a single-frame image processing technique that can smooth the high-frequency noise of an image while protecting image details from being softened. For example, the electronic device may perform spatial-domain noise reduction by a wavelet decomposition method; it can be understood that the electronic device may also use other noise reduction algorithms, and the embodiment of the present application does not limit the noise reduction method used for spatial-domain noise reduction.
211. The electronic device performs color noise processing on the enhanced image after the spatial domain noise reduction processing, to obtain a target image obtained by photographing.
After performing temporal domain and spatial domain noise reduction on the image, the electronic device may further perform color noise processing on the enhanced image after the spatial domain noise reduction processing, so as to suppress color noise in the enhanced image and finally obtain the target image obtained by photographing. For example, after the electronic device detects that the user clicks the shooting control 305 of fig. 3B, the electronic device may take a picture to obtain the target image 306 shown in fig. 3C.
Illustratively, the electronic device may use a wavelet transform-based denoising algorithm for color noise processing. It is to be understood that the electronic device may use other noise reduction algorithms to perform the color noise processing, and the noise reduction algorithm involved in the color noise processing is not limited in the embodiment of the present application.
In some embodiments, step 211 may include: and the electronic equipment performs color noise point processing on the enhanced image subjected to the spatial domain noise reduction processing according to the noise reduction parameters to obtain a target image. Wherein, different target image acquisition parameters may correspond to different noise reduction parameters.
That is to say, according to the target image acquisition parameter, the electronic device may perform color noise processing on the enhanced image after the spatial domain noise reduction processing using a noise reduction parameter adapted to that acquisition parameter.
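The idea of adapting the noise reduction parameter to the acquisition parameter can be sketched as follows. The gear names, strength values, and the simple neighbourhood-mean chroma blend are all hypothetical stand-ins for the undisclosed parameters and algorithm:

```python
# Hypothetical mapping from capture gear to chroma-denoise strength:
# darker scenes (longer exposure / higher gain) get heavier smoothing.
NOISE_PARAMS = {"first": 0.2, "second": 0.4, "third": 0.6}

def suppress_color_noise(chroma_row, gear):
    """Blend each chroma sample toward the mean of its 3-sample
    neighbourhood, with the blend strength selected by the gear."""
    s = NOISE_PARAMS[gear]
    out = []
    for i, c in enumerate(chroma_row):
        lo, hi = max(0, i - 1), min(len(chroma_row), i + 2)
        mean = sum(chroma_row[lo:hi]) / (hi - lo)
        out.append((1 - s) * c + s * mean)
    return out
```

An isolated chroma spike (a color noise point) is pulled toward its neighbours, and the pull is stronger for gears that correspond to noisier acquisition settings.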
The method provided by the embodiment of the present application can significantly improve detail in the dark areas of the image while keeping the bright areas from being overexposed. Referring to fig. 6A to 6B, fig. 6A shows an image captured in a dim-light scene by the image processing method of the embodiment of the present application; fig. 6B shows an image captured in the same dim-light scene in the ordinary photographing mode of the related art.
In the solutions described in steps 201 to 211, in a dim-light shooting scene, the electronic device may determine from a preview image of the shooting preview interface that the current shooting environment is a dim-light scene, and determine the target image acquisition parameter corresponding to it. The electronic device then acquires N images according to the target image acquisition parameter, where N is a positive integer greater than 1. The electronic device can then perform preprocessing, temporal domain noise reduction, image enhancement, spatial domain noise reduction, and color noise processing on the N images, finally obtaining a target image with good brightening and noise suppression effects. Compared with the night-scene shooting mode in the prior art, this method includes more noise reduction operations, so a captured image with a better visual effect can be obtained from fewer frames; that is, N can be smaller. This saves acquisition and processing time for the multi-frame images, so a user can obtain a high-quality captured image in a dim-light scene without a long wait, improving the user experience.
In addition, a smaller number of frames means a lower probability of image-quality degradation caused by the user's hand trembling, and a lower proportion of image-registration failures caused by ghosting from moving objects or large-area motion in the images, which helps obtain a higher-quality night-scene image. Moreover, processing fewer frames requires a lower peak memory, so the method can be adapted to mid- and low-end devices with weaker computing capability.
In addition, according to this scheme, the electronic device does not need to enter a separate night-scene shooting interface when shooting in dim light. In the ordinary photographing mode, the electronic device can automatically determine from the preview image that the current scene is a dim-light scene, automatically determine the target image acquisition parameter, and automatically perform the subsequent image processing, finally obtaining a higher-quality captured image and further improving the user experience.
For example, fig. 7 shows a flowchart of an image processing method provided by an embodiment of the present application. Referring to fig. 7, in a shooting scene, the electronic device may determine that the current shooting scene is a dim-light shooting scene according to ISO > T1 and Ymean < T2 for a preview image of the shooting preview interface, and determine the gear corresponding to the current scene according to the value of Ymean, where different gears correspond to different target image acquisition parameters. Referring to fig. 7, the electronic device may define 5 gears according to the value of Ymean, the 5 gears corresponding respectively to the first, second, third, fourth, and fifth image capturing parameters. The electronic device can then acquire the multi-frame image according to the target image acquisition parameter corresponding to the determined gear, process the multiple frames in sequence, and synthesize them into the captured target image. Different target image acquisition parameters correspond to different final target images: referring to fig. 7, the first through fifth image capturing parameters correspond respectively to the first through fifth target images obtained by shooting.
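The dim-light detection and gear selection described above can be sketched as follows. The numeric thresholds are invented placeholders (the patent does not disclose T1 or the Ymean boundaries), and `select_gear` is a hypothetical helper:

```python
def select_gear(iso, ymean, t1=800, bounds=(20, 40, 60, 80, 100, 120)):
    """bounds = (low, b1, b2, b3, b4, high): the scene counts as
    dim-light when ISO > t1 and low <= Ymean < high, and the
    interval [low, high) is split into five gears.  All numbers
    here are made-up placeholders."""
    low, high = bounds[0], bounds[-1]
    if not (iso > t1 and low <= ymean < high):
        return None          # not a dim-light scene: ordinary capture
    for gear in range(1, 6):
        if ymean < bounds[gear]:
            return gear      # darker preview -> lower gear number
```

Each returned gear would then index the corresponding target image acquisition parameter (exposure time and exposure gain).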
The electronic device processing the multiple frames of images in sequence may include: the electronic device sequentially performs feature extraction, image registration, and ghost detection on each frame of the multi-frame image (collectively referred to as image preprocessing); then sequentially performs temporal domain noise reduction and image enhancement on each preprocessed frame; and, after obtaining the enhanced image through image enhancement with the last frame, performs spatial domain noise reduction and color noise processing on the enhanced image. Thus, by the image processing method shown in fig. 7, the electronic device can obtain a captured target image with good image enhancement and noise suppression effects. For the specific processing of the multiple frames of images, reference may be made to the description of fig. 2, which is not repeated here.
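The chained temporal noise reduction in this flow (the jth composite plus the (j+2)th frame yields the (j+1)th composite) can be sketched as follows; `average_fuse` is a toy stand-in for the actual temporal noise reduction, used only to show the chaining:

```python
def temporal_chain(images, fuse):
    """Fuse frames 1 and 2 into composite 1, then fold each later
    frame into the running composite: the jth composite and the
    (j+2)th frame produce the (j+1)th composite.  N frames yield
    N-1 composites."""
    composites = [fuse(images[0], images[1])]
    for frame in images[2:]:
        composites.append(fuse(composites[-1], frame))
    return composites

def average_fuse(a, b):
    # toy stand-in for temporal noise reduction: per-pixel average
    return [(x + y) / 2 for x, y in zip(a, b)]
```

Each composite carries the accumulated information of all earlier frames, which is why the last composite is the one fed into image enhancement.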
It will be appreciated that, in order to implement the above-described functions, the electronic device comprises corresponding hardware and/or software modules for performing the respective functions. The present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the exemplary algorithm steps described with the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of dividing each functional module by corresponding functions, fig. 8 shows a possible composition diagram of the electronic device 800 involved in the above embodiments. As shown in fig. 8, the electronic device 800 may include: a detection unit 801, a display unit 802, a determination unit 803, an acquisition unit 804, and a processing unit 805.
Among other things, detection unit 801 may be used to enable electronic device 800 to perform steps 201, 204, etc., described above, and/or other processes for the techniques described herein.
Display unit 802 may be used to enable electronic device 800 to perform, among other things, step 202 described above, and/or other processes for the techniques described herein.
The determination unit 803 may be used to enable the electronic device 800 to perform, among other things, step 203 described above, and/or other processes for the techniques described herein.
Acquisition unit 804 may be used to support electronic device 800 in performing, among other things, step 205 described above, and/or other processes for the techniques described herein.
The processing unit 805 may be used to support the electronic device 800 in performing the above-described steps 206-211, steps 401-403, etc., and/or other processes for the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The electronic device 800 provided by this embodiment is used to execute the image processing method, and can therefore achieve the same effects as the implementation methods above.
Where an integrated unit is employed, the electronic device 800 may include a processing module, a memory module, and a communication module. The processing module may be configured to control and manage actions of the electronic device 800, and for example, may be configured to support the electronic device 800 to execute steps executed by the detection unit 801, the display unit 802, the determination unit 803, the acquisition unit 804, and the processing unit 805 described above. The memory modules may be used to support the electronic device 800 in storing program codes and data and the like. A communication module may be used to support communication of the electronic device 800 with other devices, such as with wireless access devices.
The processing module may be a processor or a controller, which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. A processor may also be a combination of computing devices, e.g., a combination of one or more microprocessors, a combination of a digital signal processor (DSP) and a microprocessor, or the like. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
Embodiments of the present application further provide a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are executed on an electronic device, the electronic device is caused to execute the above related method steps to implement the image processing method in the above embodiments.
Embodiments of the present application further provide a computer program product, which when run on a computer, causes the computer to execute the above related steps to implement the image processing method performed by the electronic device in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module, and which may include a processor and a memory connected to each other. The memory is used to store computer-executable instructions, and when the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the image processing method executed by the electronic device in the above method embodiments.
The electronic device, the computer-readable storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer-readable storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the foregoing embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is illustrated as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the portions thereof that substantially contribute over the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. An image processing method, comprising:
the electronic equipment detects the operation of photographing instructed by a user;
the electronic equipment acquires N images; wherein N is a positive integer greater than 1; the N images are acquired before receiving the operation of photographing instructed by the user;
the electronic equipment preprocesses the N images;
the electronic equipment performs time domain noise reduction according to the 1st image and the 2nd image in the N preprocessed images to obtain a 1st composite image;
the electronic equipment performs time domain noise reduction according to the jth composite image and the preprocessed (j+2)th image to obtain a (j+1)th composite image; wherein j is a positive integer, and 1 ≤ j ≤ N-2;
the electronic device performs image enhancement processing on the 1st composite image to the (N-1)th composite image to obtain an enhanced image, wherein the image enhancement processing includes: the electronic equipment performs nonlinear accumulation of brightness on the 1st composite image to the (N-1)th composite image to obtain an accumulated image; and the electronic equipment linearly compresses the accumulated image to obtain the enhanced image;
the electronic equipment performs spatial domain noise reduction processing on the enhanced image;
and the electronic equipment performs color noise point processing on the enhanced image subjected to the spatial domain noise reduction processing to obtain a target image obtained by photographing.
2. The method of claim 1, wherein before the electronic device detects the operation of the user indicating to take a picture, the method further comprises:
the electronic equipment detects the operation of opening a camera by the user;
the electronic equipment displays a shooting preview interface, wherein the shooting preview interface comprises a preview image;
the electronic equipment determines target image acquisition parameters according to one or more preview images;
the acquiring, by the electronic device, the N images comprises: the electronic device acquires the N images according to the target image acquisition parameters.
3. The method of claim 2, wherein the electronic device determines the target image acquisition parameters from one or more of the preview images, comprising:
if the sensitivity ISO corresponding to one or more preview images is greater than a first threshold T1, and the luminance mean value Ymean of one or more preview images is less than a second threshold T2 and greater than or equal to a third threshold T3, the electronic device determines the target image acquisition parameter.
4. The method of claim 3, wherein the electronic device determines the target image acquisition parameters, comprising:
if the Ymean is greater than or equal to the third threshold T3 and less than a fourth threshold T4, the electronic device determines that the target image acquisition parameter is a first image acquisition parameter;
if the Ymean is greater than or equal to the fourth threshold T4 and less than a fifth threshold T5, the electronic device determines that the target image acquisition parameter is a second image acquisition parameter;
if the Ymean is greater than or equal to the fifth threshold T5 and less than the second threshold T2, the electronic device determines that the target image acquisition parameter is a third image acquisition parameter;
wherein T3 < T4 < T5 < T2.
5. The method of any of claims 2-4, wherein the target image acquisition parameters include exposure time and exposure gain.
6. The method according to any of claims 2-4, wherein N = 6.
7. The method of any of claims 2-4, wherein the electronic device performs the pre-processing on the N images, comprising:
the electronic device extracting features of each of the N images;
the electronic equipment registers, using the features of each image, each image other than a reference image with the reference image; wherein the reference image is one of the N images;
and the electronic equipment performs ghost detection on each image after image registration.
8. The method according to any one of claims 2-4, wherein the electronic device performs color noise processing on the enhanced image after the spatial noise reduction processing to obtain the target image, and comprises:
the electronic equipment performs color noise processing on the enhanced image subjected to the spatial domain noise reduction processing according to the noise reduction parameters to obtain the target image; wherein different target image acquisition parameters correspond to different noise reduction parameters.
9. An electronic device, comprising:
one or more processors; a memory; a plurality of application programs; and one or more computer programs; wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the steps of:
detecting an operation of a user for indicating photographing;
acquiring N images; wherein N is a positive integer greater than 1; the N images are acquired before receiving the operation of photographing instructed by the user;
preprocessing the N images;
performing time domain noise reduction according to the 1st image and the 2nd image in the preprocessed N images to obtain a 1st composite image;
performing time domain noise reduction according to the jth composite image and the preprocessed (j+2)th image to obtain a (j+1)th composite image; wherein j is a positive integer, and 1 ≤ j ≤ N-2;
performing image enhancement processing on the 1st composite image to the (N-1)th composite image to obtain an enhanced image, wherein the image enhancement processing includes:
performing nonlinear accumulation of brightness on the 1st composite image to the (N-1)th composite image to obtain an accumulated image;
performing linear compression on the accumulated image to obtain the enhanced image;
performing spatial domain noise reduction processing on the enhanced image;
and carrying out color noise point processing on the enhanced image subjected to the spatial domain noise reduction processing to obtain a target image obtained by photographing.
10. The electronic device according to claim 9, wherein before the operation of the user instructing to take a picture is detected, the electronic device further performs the steps of:
detecting an operation of the user to turn on a camera;
displaying a shooting preview interface, wherein the shooting preview interface comprises a preview image;
determining target image acquisition parameters according to one or more preview images;
the acquiring the N images comprises: acquiring the N images according to the target image acquisition parameters.
11. The electronic device of claim 10, wherein the determining the target image acquisition parameters from the one or more preview images comprises:
if the sensitivity ISO corresponding to one or more preview images is greater than a first threshold T1, and the brightness mean value Ymean of one or more preview images is less than a second threshold T2 and greater than or equal to a third threshold T3, determining the target image acquisition parameter.
12. The electronic device of claim 11, wherein the determining target image acquisition parameters comprises:
if the Ymean is greater than or equal to the third threshold T3 and less than a fourth threshold T4, determining that the target image acquisition parameter is a first image acquisition parameter;
if the Ymean is greater than or equal to the fourth threshold T4 and less than a fifth threshold T5, determining that the target image acquisition parameter is a second image acquisition parameter;
if the Ymean is greater than or equal to the fifth threshold T5 and less than the second threshold T2, determining that the target image acquisition parameter is a third image acquisition parameter;
wherein T3 < T4 < T5 < T2.
13. The electronic device of any of claims 10-12, wherein the target image acquisition parameters include exposure time and exposure gain.
14. The electronic device of any of claims 10-12, wherein N = 6.
15. The electronic device of any of claims 10-12, wherein the pre-processing the N images comprises:
extracting features of each of the N images;
registering, using the features of each image, each image other than a reference image with the reference image; wherein the reference image is one of the N images;
and carrying out ghost detection on each image after image registration.
16. The electronic device according to any of claims 10-12, wherein the performing color noise processing on the enhanced image after spatial domain noise reduction processing to obtain the target image comprises:
performing color noise processing on the enhanced image subjected to the spatial domain noise reduction processing according to the noise reduction parameters to obtain the target image; wherein different target image acquisition parameters correspond to different noise reduction parameters.
17. A computer-readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the image processing method of any one of claims 1-8.
CN201911244547.5A 2019-12-06 2019-12-06 Image processing method and electronic device Active CN112929558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911244547.5A CN112929558B (en) 2019-12-06 2019-12-06 Image processing method and electronic device


Publications (2)

Publication Number Publication Date
CN112929558A CN112929558A (en) 2021-06-08
CN112929558B true CN112929558B (en) 2023-03-28

Family

ID=76161932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911244547.5A Active CN112929558B (en) 2019-12-06 2019-12-06 Image processing method and electronic device

Country Status (1)

Country Link
CN (1) CN112929558B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115484394B (en) * 2021-06-16 2023-11-14 荣耀终端有限公司 Guide use method of air separation gesture and electronic equipment
CN114302026B (en) * 2021-12-28 2024-06-21 维沃移动通信有限公司 Noise reduction method, device, electronic equipment and readable storage medium
CN115526788A (en) * 2022-03-18 2022-12-27 荣耀终端有限公司 Image processing method and device
CN117499789B (en) * 2023-12-25 2024-05-17 荣耀终端有限公司 Shooting method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104869309A (en) * 2015-05-15 2015-08-26 广东欧珀移动通信有限公司 Shooting method and shooting apparatus
CN108924420A (en) * 2018-07-10 2018-11-30 Oppo广东移动通信有限公司 Image capturing method, device, medium, electronic equipment and model training method
CN109873953A (en) * 2019-03-06 2019-06-11 深圳市道通智能航空技术有限公司 Image processing method, shooting at night method, picture processing chip and aerial camera
CN110166708A (en) * 2019-06-13 2019-08-23 Oppo广东移动通信有限公司 Night scene image processing method, device, electronic equipment and storage medium
CN110290289A (en) * 2019-06-13 2019-09-27 Oppo广东移动通信有限公司 Image denoising method, device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101493694B1 (en) * 2008-08-01 2015-02-16 삼성전자주식회사 Image processing apparatus, method for processing image, and recording medium storing program to implement the method


Also Published As

Publication number Publication date
CN112929558A (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN112929558B (en) Image processing method and electronic device
AU2018415738B2 (en) Photographing Mobile Terminal
CN110136183B (en) Image processing method and device and camera device
WO2023016025A1 (en) Image capture method and device
CN113810600B (en) Terminal image processing method and device and terminal equipment
CN108605099A (en) The method and terminal taken pictures for terminal
CN110930329B (en) Star image processing method and device
CN113507558B (en) Method, device, terminal equipment and storage medium for removing image glare
CN112446252A (en) Image recognition method and electronic equipment
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN113810603A (en) Point light source image detection method and electronic equipment
CN115514876B (en) Image fusion method, electronic device, storage medium and computer program product
US11989863B2 (en) Method and device for processing image, and storage medium
WO2014098143A1 (en) Image processing device, imaging device, image processing method, and image processing program
CN115460343B (en) Image processing method, device and storage medium
CN113810622B (en) Image processing method and device
WO2022179412A1 (en) Recognition method and electronic device
CN117880645A (en) Image processing method and device, electronic equipment and storage medium
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN116723408B (en) Exposure control method and electronic equipment
CN117395495B (en) Image processing method and electronic equipment
EP4274248A1 (en) Photographing method and electronic device
CN116896626B (en) Method and device for detecting video motion blur degree
CN115526786B (en) Image processing method and related device
CN117132511B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant