CN110166708B - Night scene image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110166708B
CN110166708B
Authority
CN
China
Prior art keywords
image
frame
images
image processing
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910509696.3A
Other languages
Chinese (zh)
Other versions
CN110166708A (en)
Inventor
黄杰文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910509696.3A
Publication of CN110166708A
Application granted
Publication of CN110166708B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 Motion detection
    • H04N23/6812 Motion detection based on additional sensors, e.g. acceleration sensors
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a night-scene image processing method and apparatus, an electronic device, and a storage medium. The method includes the following steps: monitoring the available amount of system resources for image processing; if the available amount is lower than a threshold, generating a thumbnail according to one captured frame of image; synthesizing multiple captured frames of images to obtain a target image; and updating the thumbnail according to the target image. Because the number of frames captured is adjusted according to the monitored availability of system resources for image processing, the resources consumed by image processing are reduced and the total duration of the shooting process is shortened. This solves the technical problem of users waiting too long because the whole shooting process takes too long, avoids perceived stutter during image capture, and improves the user experience.

Description

Night scene image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a night scene image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of intelligent terminal technology, mobile terminal devices such as smartphones and tablet computers have become increasingly widespread. Most mobile terminals have a built-in camera, and as mobile processing power and camera technology have advanced, the performance of built-in cameras has grown ever stronger and the quality of captured images ever higher. Mobile terminal devices are now both easy to operate and portable, so more and more users photograph with smartphones, tablets, and similar devices in daily life.
While intelligent mobile terminals have made everyday photography convenient, users' expectations for image quality keep rising; in the special case of night scenes in particular, image quality tends to be low.
At present, multiple frames of raw images are generally captured and synthesized. When the mobile terminal has little available system resource for image processing, the whole shooting process becomes long and capture feels slow, degrading the user experience.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
The application provides a night-scene image processing method and apparatus, an electronic device, and a storage medium, in which the number of frames captured is adjusted according to the monitored availability of system resources for image processing. This reduces the resources consumed by image processing and shortens the total duration of the shooting process, solving the technical problem in the prior art that, when little system resource is available for image processing, the whole shooting process is long and capture is slow.
An embodiment of a first aspect of the present application provides a night scene image processing method, including:
monitoring the available amount of system resources for image processing;
if the available amount is lower than a threshold, generating a thumbnail according to one captured frame of image;
synthesizing multiple captured frames of images to obtain a target image; and
updating the thumbnail according to the target image.
According to the night-scene image processing method, the available amount of system resources for image processing is monitored; if the available amount is lower than a threshold, a thumbnail is generated according to one captured frame of image, a target image is obtained by synthesizing multiple captured frames, and the thumbnail is updated according to the target image. Because the number of frames captured is adjusted according to the monitored availability of system resources for image processing, the resources consumed by image processing are reduced and the total duration of the shooting process is shortened. This solves the technical problem of users waiting too long because the whole shooting process takes too long, avoids perceived stutter during image capture, and improves the user experience.
An embodiment of a second aspect of the present application provides a night scene image processing apparatus, including:
a monitoring module configured to monitor the available amount of system resources for image processing;
a generation module configured to generate a thumbnail according to one captured frame of image if the available amount is lower than a threshold;
a synthesis module configured to synthesize multiple captured frames of images to obtain a target image; and
an updating module configured to update the thumbnail according to the target image.
According to the night-scene image processing apparatus, the available amount of system resources for image processing is monitored; if the available amount is lower than a threshold, a thumbnail is generated according to one captured frame of image, a target image is obtained by synthesizing multiple captured frames, and the thumbnail is updated according to the target image. Because the number of frames captured is adjusted according to the monitored availability of system resources for image processing, the resources consumed by image processing are reduced and the total duration of the shooting process is shortened. This solves the technical problem of users waiting too long because the whole shooting process takes too long, avoids perceived stutter during image capture, and improves the user experience.
An embodiment of a third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the night scene image processing method as described in the foregoing embodiments.
A fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the night-scene image processing method as described in the above embodiments.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a first night-scene image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a second night-scene image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a third night-scene image processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a fourth night-scene image processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a fifth night-scene image processing method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a sixth night-scene image processing method according to an embodiment of the present application;
fig. 7 is an exemplary diagram of a night scene image processing method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a night scene image processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 10 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 11 is a schematic diagram of an image processing circuit according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
In the related art, because the load capacity of an electronic device is limited, capturing a large number of frames can leave insufficient system resources available, making image capture slow. To address this problem, the present application provides a night-scene image processing method.
A night-scene image processing method, apparatus, electronic device, and storage medium according to an embodiment of the present application are described below with reference to the drawings.
Fig. 1 is a schematic flow chart of a first night-scene image processing method according to an embodiment of the present application.
The night-scene image processing method is applied to an electronic device, which may be any hardware device that has an operating system and an imaging component, such as a mobile phone, tablet computer, personal digital assistant, or wearable device.
As shown in fig. 1, the night scene image processing method includes the following steps:
step 101, monitoring the available amount of system resources for image processing.
In this embodiment, the electronic device may be provided with a monitoring module that monitors, in real time, the available amount of system resources for image processing while the device captures images.
As an example, while the electronic device captures an image, the monitoring module may monitor in real time the resource availability of the device's Central Processing Unit (CPU), Graphics Processing Unit (GPU), Digital Signal Processor (DSP), Neural-network Processing Unit (NPU), and so on. The number of night-scene frames to capture can then be dynamically adjusted, or the processing algorithm simplified, according to the available amount of system resources for image processing, so that the user never has to wait through a lengthy processing step when shooting night scenes, which improves the shooting experience.
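The resource-driven frame-count adjustment can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 90% CPU threshold echoes the example below, while the halving rule and the default of 6 frames are assumptions.

```python
# Hypothetical sketch: choose how many night-scene frames to capture
# based on monitored CPU occupancy (0.0-1.0). Threshold and frame
# counts are illustrative values, not taken from the patent.
def frames_to_capture(cpu_usage: float, default_frames: int = 6) -> int:
    """Return the number of frames to capture given current CPU usage."""
    if cpu_usage >= 0.9:  # resources scarce: capture fewer frames
        return max(1, default_frames // 2)
    return default_frames

print(frames_to_capture(0.95))  # fewer frames under load
print(frames_to_capture(0.50))  # full bracket when resources suffice
```

In a real pipeline the same decision would also consult GPU, DSP, and NPU availability, and could additionally switch to a cheaper synthesis algorithm.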
Step 102, if the available amount is lower than a threshold, generating a thumbnail according to one captured frame of image.
The thumbnail is an image obtained by reducing a captured image by a preset ratio.
In this embodiment, when the monitoring module of the electronic device detects that the available amount of system resources for image processing is lower than the threshold, the number of frames captured can be reduced so that image processing consumes fewer system resources.
For example, when the CPU occupancy is monitored to reach 90% while the image sensor of the electronic device is capturing images, the number of EV0 frames can be reduced from the original 3-6 frames to 1-3 frames, and the negative-EV frames can be reduced from one EV-2 frame and one EV-4 frame to a single EV-3 frame.
As a possible implementation manner, when a user operates the electronic device to capture an image, the image sensor of the electronic device captures a multi-frame image in response to the user's capturing operation. In the process of collecting multiple frames of images by an image sensor, when a monitoring module of the electronic device monitors that the available amount of system resources for image processing is lower than a threshold value, one frame of image can be selected from the collected multiple frames of images, and then a thumbnail is generated according to the selected frame of image.
As another possible implementation manner, when the electronic device detects that the shooting operation is performed, a monitoring module of the electronic device monitors that the available amount of system resources for image processing is lower than a threshold, in this case, the acquired preview image may be used as one frame of image, and a thumbnail may be generated according to the acquired one frame of image.
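Either way, the thumbnail itself is just a copy of one frame reduced by the preset ratio. A minimal sketch, assuming a 1/8 scale factor as the "preset ratio" (the patent does not fix a value) and plain subsampling for brevity; a production pipeline would low-pass filter before downscaling to avoid aliasing:

```python
import numpy as np

def make_thumbnail(frame: np.ndarray, factor: int = 8) -> np.ndarray:
    """Reduce a captured frame by subsampling every `factor`-th pixel.

    `factor=8` is an assumed preset ratio for illustration only.
    """
    return frame[::factor, ::factor]

# A dummy 1600x1200 RGB frame standing in for one captured image.
frame = np.zeros((1600, 1200, 3), dtype=np.uint8)
thumb = make_thumbnail(frame)
print(thumb.shape)  # (200, 150, 3)
```

The thumbnail can be shown immediately while the multi-frame synthesis runs, then replaced by a thumbnail of the target image in step 104.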
It should be noted that the multiple frames captured by the image sensor of the electronic device are all unprocessed RAW images, that is, raw images obtained by converting the light signals captured by the image sensor into digital signals. A RAW image records the raw information collected by the camera sensor together with metadata generated at capture time, such as the sensitivity, shutter speed, aperture value, and white balance settings.
In the embodiment of the application, when the monitoring module of the electronic device monitors that the available amount of system resources for image processing is lower than a threshold, a night scene shooting mode can be started to acquire multi-frame images under different exposures.
Step 103, synthesizing the captured multi-frame images to obtain a target image.
As one possible implementation, the captured multi-frame images can be combined by high-dynamic synthesis to obtain the target image; that is, pictures of the same scene taken at different exposures are merged into a High-Dynamic-Range (HDR) image. Compared with an ordinary image, an HDR image provides a greater dynamic range and more image detail: from Low-Dynamic-Range (LDR) images taken at different exposure times, the LDR image with the best detail at each exposure is used to synthesize the final HDR image, which better reflects the visual appearance of the real environment.
Specifically, the target image is obtained by extracting the picture information from the multiple frames and superimposing the corresponding picture information.
Because the frames are captured at different exposures, they contain picture information at different brightness levels: for the same scene, different frames may be over-exposed, under-exposed, or properly exposed. After high-dynamic synthesis, each region of the target image is exposed as properly as possible and more closely resembles the actual scene.
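The superposition step can be illustrated with a toy exposure-fusion scheme: each frame's pixels are weighted by how close they are to mid-gray (well-exposed regions dominate), and the weighted frames are normalized and summed. This is a simplification for illustration, not the patent's exact algorithm; the Gaussian weighting parameters are assumptions.

```python
import numpy as np

def fuse_exposures(frames):
    """Fuse differently exposed 8-bit frames into one well-exposed image.

    Weights favor pixels near mid-gray (0.5), so each region of the
    output is taken mostly from the frame that exposed it best.
    """
    norm = [f.astype(np.float64) / 255.0 for f in frames]
    # Gaussian well-exposedness weight; sigma=0.2 is an assumed value.
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in norm]
    total = sum(weights)
    fused = sum(w * f for w, f in zip(weights, norm)) / total
    return (fused * 255).astype(np.uint8)

under = np.full((4, 4), 40, dtype=np.uint8)   # under-exposed frame
over = np.full((4, 4), 220, dtype=np.uint8)   # over-exposed frame
result = fuse_exposures([under, over])
print(result.shape, result.mean())
```

Real HDR pipelines (and presumably the one in this patent) would first align the frames and apply tone mapping, which this sketch omits.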
Step 104, updating the thumbnail according to the target image.
In the embodiment of the application, after the target image is obtained by synthesizing the multi-frame images collected by the image sensor, the thumbnail is updated according to the target image so as to be displayed.
According to the night-scene image processing method, the available amount of system resources for image processing is monitored; if the available amount is lower than a threshold, a thumbnail is generated according to one captured frame of image, a target image is obtained by synthesizing multiple captured frames, and the thumbnail is updated according to the target image. Because the number of frames captured is adjusted according to the monitored availability of system resources for image processing, the resources consumed by image processing are reduced and the total duration of the shooting process is shortened. This solves the technical problem of users waiting too long because the whole shooting process takes too long, and improves the user experience.
Building on the embodiment shown in fig. 1, in one scenario, when capturing multiple frames, an exposure compensation mode may first be determined according to the monitored availability of system resources for image processing; multiple night-scene frames conforming to that mode are then captured at a reference sensitivity determined by the degree of shake. This yields images with different dynamic ranges, so that the synthesized image has a higher dynamic range and better overall brightness and quality. Referring to fig. 2, fig. 2 is a schematic flowchart of a second night-scene image processing method provided in an embodiment of the present application; as shown in fig. 2, the method includes the following steps:
step 201, determining an exposure compensation mode according to the available amount.
The exposure compensation mode indicates the number of image frames and the exposure compensation value of each frame.
It should be noted that the exposure compensation mode refers to a preset combination of exposure compensation values (EV) for the frames to be captured. In the original definition of exposure, an EV did not denote an exact number but rather "all combinations of camera aperture and exposure time that give the same exposure". Sensitivity, aperture, and exposure duration together determine a camera's exposure, and different parameter combinations can produce equal exposures. The exposure compensation value is a parameter that adjusts the exposure amount so that some frames are under-exposed, some over-exposed, and some properly exposed.
For example, if 7 frames are to be captured, the exposure compensation mode may correspond to the EV value range [+1, 0, -3, -6]. The EV+1 frame addresses noise: temporal noise reduction with a brighter frame suppresses noise while recovering dark-area detail. The EV-6 frame addresses highlight over-exposure, preserving detail in highlight regions. The EV0 and EV-3 frames maintain the transition between highlights and shadows, preserving a good bright-to-dark gradation.
It should be noted that each EV value corresponding to the exposure compensation mode may be specifically set according to actual needs, or may be obtained according to a set EV value range and a principle that differences between the EV values are equal, which is not limited in this embodiment of the present application.
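A small sketch of how a frame budget might map to an EV bracket. Only the 7-frame example and its EV set [+1, 0, -3, -6] come from the text above; the repetition pattern within the 7-frame bracket and the tables for other frame counts are hypothetical.

```python
# Hypothetical frame-count -> EV-bracket table. The 7-frame entry
# follows the example in the text; the others are assumptions.
EV_BRACKETS = {
    7: [+1, 0, 0, 0, -3, -3, -6],
    4: [+1, 0, -3, -6],
    2: [0, -3],
}

def exposure_compensation_mode(n_frames: int):
    """Return the EV value for each frame to capture."""
    # Fall back to all-EV0 when no bracket is defined for n_frames.
    return EV_BRACKETS.get(n_frames, [0] * n_frames)

print(exposure_compensation_mode(7))
print(exposure_compensation_mode(4))
```

Combined with the resource monitoring of step 101, a smaller frame budget under load would thus also imply a coarser EV bracket.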
In one possible implementation of this embodiment, the aperture size may be held constant; after the number of frames to capture is determined from the available amount of system resources for image processing monitored by the electronic device's monitoring module, the corresponding exposure compensation mode can be determined from the number of frames to be captured.
Step 202, determining a corresponding reference sensitivity according to the degree of shake.
In this embodiment, the reference sensitivity determined according to the degree of shake may be a sensitivity suited to the current picture-shake degree of the preview image, or a sensitivity suited to the current shake degree of the image sensor capturing the preview image; neither is limited here. The reference sensitivity may range from ISO 100 to ISO 200.
Sensitivity, also called the ISO value, is an index measuring a film's sensitivity to light. Film with lower sensitivity needs a longer exposure time to produce the same image as film with higher sensitivity. A digital camera's sensitivity is an analogous index: its ISO can be adjusted by changing the sensitivity of the photosensitive device or by combining photosensitive sites, that is, by increasing the device's light sensitivity or by binning several adjacent photosites.
It should be noted that, whether in digital or film photography, the lower the ISO value, the higher the quality of the captured image and the finer its detail. A higher ISO value means stronger light sensitivity: more light is received, but more heat is generated, so a relatively high sensitivity usually introduces more noise and reduces image quality. In this embodiment, by capturing multiple frames at a lower sensitivity and synthesizing them into the target image, the dynamic range and overall brightness of the night-scene photograph can be improved; controlling the sensitivity value also effectively suppresses noise in the image and improves the quality of the night-scene photograph.
It can be understood that the sensitivity at which images are captured affects the overall shooting time, and an overly long shooting time aggravates the shake of the image sensor during handheld shooting, which in turn degrades image quality. Therefore, the reference sensitivity can be determined from the picture-shake degree of the preview image or the current shake degree of the image sensor capturing it, so that the shooting time stays within a suitable range.
In this embodiment, to determine the degree of shake, displacement information may be collected by a displacement sensor built into the electronic device, and the picture-shake degree of the preview image, or the shake degree of the image sensor capturing it, determined from that displacement information.
As an example, the current shake degree of the electronic device, which is also the shake degree of the image sensor capturing the preview image, may be determined from the device's current gyroscope (Gyro-sensor) information.
A gyroscope, also called an angular-velocity sensor, measures the angular velocity of a body as it deflects or tilts. In an electronic device, a gyroscope measures rotation and deflection well, so the user's actual movements can be accurately analyzed and judged. The device's gyro information may include motion components in the three dimensions of three-dimensional space, expressed along the X, Y, and Z axes, which are mutually perpendicular.
It should be noted that the shake degree of the image sensor capturing the preview image may be determined from the device's current gyro information: the greater the absolute values of the gyro motion in the three directions, the greater the shake. Specifically, thresholds on the sum of the absolute gyro values in the three directions may be preset, and the current shake degree determined from the relationship between that sum and the preset thresholds.
For example, suppose the preset thresholds are a first threshold A, a second threshold B, and a third threshold C with A < B < C, and the current sum of the absolute gyro values in the three directions is S. If S < A, the shake degree of the image sensor capturing the preview image is determined to be "no shake"; if A < S < B, "slight shake"; if B < S < C, "small shake"; and if S > C, "large shake".
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In actual use, the number of thresholds and their specific values can be preset according to actual needs, and the mapping between gyro information and the shake degree of the image sensor capturing the preview image can be preset according to the relationship between the gyro information and the thresholds.
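The threshold comparison above can be written directly in code. The threshold values A, B, C below are illustrative placeholders, consistent with the text's note that concrete values are chosen per device:

```python
# Assumed thresholds A < B < C on the sum of absolute gyro readings;
# real values would be tuned per device.
A, B, C = 0.2, 0.5, 1.0

def shake_degree(gyro_xyz) -> str:
    """Classify shake from (x, y, z) gyro readings, per the scheme above."""
    s = sum(abs(v) for v in gyro_xyz)
    if s < A:
        return "no shake"
    if s < B:
        return "slight shake"
    if s < C:
        return "small shake"
    return "large shake"

print(shake_degree((0.01, 0.02, 0.03)))  # near-still device
print(shake_degree((0.5, 0.5, 0.5)))     # heavy handheld motion
```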
Specifically, if the shake degree of the image sensor capturing the preview image is small, the reference sensitivity for each frame to be captured can be compressed to a suitably small value, effectively suppressing noise in each frame and improving the quality of the captured image; if the shake degree is large, the reference sensitivity can be raised to a suitably large value to shorten the shooting time.
For example, if the shake degree of the image sensor capturing the preview image is determined to be "no shake", the reference sensitivity may be set to a smaller value to obtain the highest-quality image possible, for example 100; if it is "slight shake", the reference sensitivity may be set to a larger value to shorten the shooting time, for example 120; if it is "small shake", the reference sensitivity can be raised further to shorten the shooting time, for example to 180; and if it is "large shake", the current shake is too severe and the reference sensitivity may be raised further still, for example to 200.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In actual use, when the shake degree of the image sensor for acquiring the preview image changes, the reference sensitivity may be changed to obtain an optimal solution. The mapping relation between the jitter degree of the image sensor for acquiring the preview image and the reference sensitivity corresponding to each frame of image to be acquired can be preset according to actual needs.
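As an illustration only, the example mapping above could be expressed as a preset lookup table. This is a sketch, not the implementation of the present application; the degree names and the function are assumptions:

```python
# Illustrative preset mapping from detected shake degree to reference
# sensitivity (ISO), following the example values in the text.
SHAKE_TO_ISO = {
    "no shake": 100,      # prioritize image quality
    "slight shake": 120,  # shorten shooting duration slightly
    "small shake": 180,
    "large shake": 200,   # shorten shooting duration as much as possible
}

def reference_sensitivity(shake_degree: str) -> int:
    """Look up the preset reference ISO for a given shake degree."""
    try:
        return SHAKE_TO_ISO[shake_degree]
    except KeyError:
        raise ValueError(f"unknown shake degree: {shake_degree!r}")
```

In actual use such a table would be tuned per device, consistent with the note above that the mapping is preset according to actual needs.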
In the embodiment of the application, the picture shake degree of the preview image is positively correlated with the shake degree of the image sensor collecting the preview image, and the process of setting the reference sensitivity according to the picture shake degree of the preview image follows the process described above, which is not repeated here.
However, in the present embodiment, the reference sensitivity is not limited to being adjusted only according to the shake degree; it may also be determined comprehensively from a plurality of parameters, such as the shake degree together with the luminance information of the shooting scene. This is not limited herein.
Step 203, collecting multiple frames of night scene images conforming to the exposure compensation mode according to the reference sensitivity.
In the embodiment of the application, after the reference sensitivity and the exposure compensation mode of multiple frames of images to be acquired are determined, the electronic device is controlled to acquire multiple frames of night scene images conforming to the exposure compensation mode according to the reference sensitivity of each frame of image to be acquired, which is not described in detail herein.
It should be noted that, when the multiple frames of images are acquired, image acquisition is performed at the same reference sensitivity, which not only helps to reduce the noise of the multiple frames of images but also avoids the increased noise that would be introduced into the acquired frames by raising the sensitivity.
According to the night scene image processing method, the exposure compensation mode is determined according to the available amount, the corresponding reference sensitivity is determined according to the shake degree, and multiple frames of night scene images conforming to the exposure compensation mode are collected according to the reference sensitivity. In this way, the exposure compensation mode is determined from the monitored available amount of system resources for image processing, and the multiple frames of night scene images conforming to that mode are then collected at the reference sensitivity determined by the shake degree, so that the dynamic range and overall brightness of the image shot in the night scene mode are improved, the noise in the shot image is effectively suppressed, the quality of night scene shooting is improved, and the user experience is improved.
On the basis of the embodiment shown in fig. 2, as another possible implementation manner, in the embodiment of the present application, multiple night view modes are preset, and different night view modes correspond to different complementary exposure modes, referring to fig. 3, where fig. 3 is a flowchart of a third night view image processing method provided in the embodiment of the present application, specifically, the method may include the following steps:
step 301, adjusting the number of image frames according to the available amount.
In the embodiment of the application, the number of the image frames can be adjusted according to the available amount of system resources for image processing. Specifically, when the available amount is large, images of a plurality of frames can be acquired; when the available quantity is less, the load of the system can be reduced by reducing the frame number of the collected images, thereby improving the image collection speed.
For example, when the CPU occupancy rate is monitored to reach 90% while the image sensor of the electronic device is acquiring images, the number of EV0 frames can be reduced from the original 3-6 frames to 1-3 frames, and the negative-EV frames can be reduced from the original two frames (one EV-2 and one EV-4) to a single EV-3 frame.
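The frame-count adjustment in this example can be sketched as follows. The 90% threshold and the frame ranges come from the text, while the function name, the default frame count, and the exact clamping logic are assumptions:

```python
def plan_frames(cpu_occupancy: float, ev0_frames: int = 4):
    """Return (number of EV0 frames, list of negative-EV compensation levels).

    `ev0_frames` is the nominal EV0 frame count (4 is an arbitrary default).
    """
    if cpu_occupancy >= 0.9:
        # Heavy load: clamp EV0 frames into the reduced 1-3 range and
        # merge the two under-exposed frames (EV-2, EV-4) into one EV-3.
        return min(max(ev0_frames - 3, 1), 3), [-3]
    # Normal load: keep EV0 frames in the 3-6 range with EV-2 and EV-4 frames.
    return min(max(ev0_frames, 3), 6), [-2, -4]
```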
Step 302, identifying whether the preview image contains a human face.
In the embodiment of the application, the preview image of the current shooting scene can be acquired, and the exposure compensation mode is determined by identifying whether the preview image contains a human face.
As a possible implementation, whether the preview image contains a human face can be determined by face recognition technology. Face recognition identifies identity by analyzing and comparing visual facial features; it belongs to the field of biometric recognition, which distinguishes individual organisms (here, specifically, people) by their biological characteristics.
It should be noted that, when it is detected that the image currently acquired by the image sensor includes a human face, the light metering module of the camera module automatically performs light metering mainly based on the face area, and determines the reference exposure amount according to the metering result of the face area. However, in the night view mode the illuminance of the face area is usually low, so the determined reference exposure amount is higher than the reference exposure amount determined when no face is included; if too many over-exposed frames are still acquired when a face is included, the face area is easily over-exposed, resulting in a poor target image. Therefore, for the same shake degree, the exposure compensation mode used when the acquired image contains a human face needs a narrower exposure compensation range than when no face is included.
Step 303, if the face is included, determining that the exposure compensation mode is the first mode according with the adjusted frame number.
Step 304, if no human face is included, determining the exposure compensation mode as a second mode according with the adjusted frame number.
And the value range of the exposure compensation level of the second mode is larger than that of the first mode.
In a possible implementation form of the embodiment of the application, for the same shaking degree, different exposure compensation strategies may be adopted according to whether a preview image currently acquired by an image sensor contains a human face. Therefore, for the same degree of shaking, it is possible to correspond to a plurality of exposure compensation modes. After the current shaking degree of the image sensor is determined and whether the preview image currently acquired by the image sensor contains a human face or not is determined, the preset exposure compensation mode which is consistent with the current actual situation can be determined.
For example, assume the current shake degree of the image sensor is "slight shake" and the corresponding preset exposure compensation modes include a first mode and a second mode, where the EV values of the first mode are [0, -2, -4, -6] and the EV values of the second mode are [+1, 0, -3, -6]; the exposure compensation range of the first mode is thus smaller than that of the second mode. If it is detected that the preview image currently acquired by the image sensor contains a human face, the preset exposure compensation mode is determined to be the first mode conforming to the adjusted frame number, i.e. the EV values are [0, -2, -4, -6]; if the preview image does not contain a human face, the preset exposure compensation mode is determined to be the second mode conforming to the adjusted frame number, i.e. the EV values are [+1, 0, -3, -6].
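The mode selection above reduces to a simple branch. This sketch hardcodes the example EV lists for the "slight shake" case and treats the face-detection result as a given boolean; the names are assumptions:

```python
# Example EV lists from the text for the "slight shake" degree.
FIRST_MODE = [0, -2, -4, -6]    # face present: narrower compensation range
SECOND_MODE = [+1, 0, -3, -6]   # no face: wider compensation range

def exposure_compensation_mode(has_face: bool):
    """Pick the preset exposure compensation mode from the face-detection result."""
    return FIRST_MODE if has_face else SECOND_MODE
```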
According to the night scene image processing method, the number of image frames is adjusted according to the available amount, whether a face is contained in a preview image or not is identified, if the face is contained, the exposure compensation mode is determined to be a first mode according with the adjusted number of frames, and if the face is not contained, the exposure compensation mode is determined to be a second mode according with the adjusted number of frames. Therefore, the number of frames of the image to be acquired is adjusted according to the monitored system resource available amount for image processing, and then the exposure compensation mode of each frame is adjusted by identifying whether the preview image contains the human face, so that the dynamic range and the overall brightness of the image shot in the night scene shooting mode are improved, the noise in the shot image is effectively inhibited, the quality of the night scene shot image is improved, and the user experience is improved.
Because the image sensor in the electronic device is subject to varying degrees of photoelectric and electromagnetic interference between its peripheral circuits and pixels during shooting, noise inevitably exists in the captured original image, and different degrees of interference produce different degrees of sharpness in the captured image. Noise therefore also inevitably exists in the target image synthesized from the acquired multiple frames, and the target image needs noise reduction processing. For example, in a night shooting scene an image is usually captured with a larger aperture and a longer exposure time, and if a higher sensitivity is chosen to shorten the exposure time, the captured image inevitably contains noise.
As a possible implementation manner, a neural network model may be used to perform noise reduction processing on the synthesized target image, and noise reduction may be performed on the highlight area and the dim area in the target image at the same time, so as to obtain a target noise-reduced image with a better noise reduction effect. The above process is described in detail with reference to fig. 4, and fig. 4 is a flowchart illustrating a fourth night-scene image processing method according to an embodiment of the present application.
As shown in fig. 4, the method specifically includes the following steps:
step 401, a neural network model is adopted to identify the noise characteristics of the target image.
In the embodiment of the application, the neural network model learns the mapping relation between the reference sensitivity and the noise characteristic.
In the embodiment of the present application, the noise characteristic may be a statistical characteristic of the random noise introduced by the image sensor. The noise mainly includes thermal noise and shot noise, where the thermal noise follows a Gaussian distribution and the shot noise follows a Poisson distribution; the statistical characteristic in the embodiment of the present application may refer to the variance of the noise, or may be another suitable statistic, which is not limited herein.
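A small simulation, under the stated distributions, shows why the variance is a usable statistical characteristic: for a large signal the shot-noise variance approaches the signal level, so the total pixel variance is approximately signal + read_sigma². The Gaussian approximation of the Poisson term and all numbers below are illustrative assumptions, not part of the present application:

```python
import random

def simulate_pixel(signal: float, read_sigma: float, rng: random.Random) -> float:
    """One pixel value with shot noise (variance = signal, Gaussian
    approximation of Poisson for large signal) plus thermal/read noise."""
    shot = rng.gauss(0.0, signal ** 0.5)
    thermal = rng.gauss(0.0, read_sigma)
    return signal + shot + thermal

def sample_variance(xs):
    """Unbiased sample variance of a list of values."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
```

For a signal of 1000 electrons and a read sigma of 5, the measured variance comes out near 1000 + 25 = 1025, which is the kind of statistic a model could be taught to predict from the sensitivity.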
As a possible implementation manner, after sample images with various sensitivities captured under different environmental light intensities are acquired, the sample images with various sensitivities are adopted to train the neural network model. And taking the noise characteristic labeled in the sample image as the characteristic of model training, and inputting the sample image labeled by the noise characteristic into the neural network model so as to train the neural network model and further identify the noise characteristic of the image. Of course, the neural network model is only one possible implementation manner for implementing the artificial intelligence based noise reduction, and in the actual implementation process, the artificial intelligence based noise reduction may be implemented in any other possible manner, for example, it may also be implemented by using a conventional programming technique (such as a simulation method and an engineering method), or, for example, it may also be implemented by using a genetic algorithm.
Since the neural network model has learned the mapping relationship between the reference sensitivity and the noise characteristic, the synthesized target image can be input into the neural network model so that the model performs noise characteristic identification on the target image, and the noise characteristic of the target image is identified.
Step 402, denoising the target image according to the identified noise characteristics.
In the embodiment of the application, the noise of the target image is reduced according to the noise characteristics identified by the neural network model, so that the target noise reduction image is obtained, the purpose of reducing the noise is achieved, and the signal to noise ratio of the image is improved.
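A minimal sketch of this identify-then-denoise flow follows, with a placeholder linear ISO-to-sigma model standing in for the trained neural network and a toy strength schedule; all numbers and names are assumptions, not the method of the present application:

```python
def predicted_sigma(iso: int) -> float:
    """Placeholder for the learned mapping from reference ISO to noise sigma."""
    return 0.02 * iso  # hypothetical linear model

def denoise_row(pixels, sigma):
    """Blend each interior pixel toward its local mean; a higher predicted
    sigma gives stronger blending (hypothetical strength schedule)."""
    w = min(sigma / (sigma + 50.0), 0.9)
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        local_mean = (pixels[i - 1] + pixels[i] + pixels[i + 1]) / 3.0
        out[i] = (1 - w) * pixels[i] + w * local_mean
    return out
```

The point of the two-step structure is that the denoising strength follows the identified noise characteristic rather than a fixed setting, which is what lets bright and dark areas both be treated appropriately.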
According to the night scene image processing method, the neural network model is adopted to identify the noise characteristics of the target image, and then the noise of the target image is reduced according to the identified noise characteristics. Therefore, both the bright light area and the dark light area in the synthesized target image can be denoised, the effectiveness of denoising is improved, the image detail of the target denoising image obtained by denoising is kept while the image noise is reduced, and the imaging effect with better definition is obtained.
In order to obtain a better artificial-intelligence noise reduction effect, a neural network model can be selected for noise reduction, and sample images at each sensitivity are used to train the neural network model to improve its ability to identify noise characteristics. The specific training process, shown in fig. 5, includes the following steps:
step 501, a sample image at each sensitivity is acquired.
Wherein the noise characteristics of the image have been labeled in the sample image.
In this embodiment of the application, the sample images may be images captured at different sensitivities under different ambient brightness levels. That is, multiple ambient brightness levels should be used, and under each ambient brightness, multiple frames of images are captured at different sensitivities as the sample images.
In order to obtain a better and accurate noise characteristic identification result, the ambient brightness and the ISO can be subdivided, and the frame number of the sample image can be increased, so that after the synthesized target image is input into the neural network model, the neural network can accurately identify the noise characteristic of the image.
Step 502, training a neural network model by using sample images with various sensitivities.
In the embodiment of the application, after sample images with various sensitivities obtained by shooting under different environmental light brightness are obtained, the sample images are adopted to train the neural network model. And taking the noise characteristic labeled in the sample image as the characteristic of model training, and inputting the sample image labeled by the noise characteristic into the neural network model so as to train the neural network model and further identify the noise characteristic of the image. Of course, the neural network model is only one possible implementation manner for implementing noise reduction based on artificial intelligence, and in the actual implementation process, noise reduction based on artificial intelligence may be implemented in any other possible manner, for example, it may also be implemented by using a conventional programming technique (such as a simulation method and an engineering method), for example, it may also be implemented by using a genetic algorithm and an artificial neural network method, which is not limited herein.
The reason why the neural network model is trained by labeling the noise characteristics in the sample image is that the labeled sample image can clearly show the noise position and the noise type of the image, so that the labeled noise characteristics are taken as the characteristics of model training, and after the target image is input into the neural network model, the noise characteristics in the image can be identified.
Step 503, until the noise characteristic identified by the neural network model matches the noise characteristic labeled in the corresponding sample image, the training of the neural network model is completed.
In the embodiment of the application, the sample images at the various sensitivities are used to train the neural network model until the noise characteristics identified by the neural network model match the statistical characteristics labeled in the corresponding sample images, at which point the training is complete.
In the embodiment of the application, sample images at various sensitivities are acquired and used to train the neural network model until the noise characteristics identified by the model match the noise characteristics labeled in the corresponding sample images, completing the training. Because the neural network model is trained with sample images labeled with the noise characteristics at each sensitivity, it can accurately identify the noise characteristics of an image input to it, enabling noise reduction processing of the image and improving the shooting quality.
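The train-until-match loop of steps 501-503 can be sketched with a one-parameter model standing in for the neural network. The quadratic ISO-to-variance form, learning rate, and tolerance below are illustrative assumptions:

```python
def train(samples, lr=1e-11, tol=1.0, max_epochs=10000):
    """Fit variance = a * iso**2 to (iso, labeled_variance) pairs by gradient
    descent, stopping once the identified characteristic matches every label
    within `tol` -- mirroring the 'train until matched' criterion of step 503."""
    a = 0.0
    for _ in range(max_epochs):
        # Gradient of sum of squared errors with respect to a.
        grad = sum(2 * (a * iso**2 - var) * iso**2 for iso, var in samples)
        a -= lr * grad
        if all(abs(a * iso**2 - var) <= tol for iso, var in samples):
            break  # identified characteristics match the labels: training done
    return a
```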
On the basis of the embodiment shown in fig. 1, in a possible scenario, the multiple frames of original images acquired in step 103 may include at least two frames of first images with the same exposure and at least one frame of second images with an exposure lower than that of the first images, so as to perform noise reduction processing on the acquired multiple frames of images in the following process, thereby further improving the imaging quality. The above process is described in detail with reference to fig. 6, where fig. 6 is a schematic flow chart of a sixth night-scene image processing method provided in the embodiment of the present application, and as shown in fig. 6, step 103 may specifically include:
step 601, selecting block alignment or global alignment for at least two frames of first images according to the available amount.
In the embodiment of the application, when the monitoring module of the electronic device monitors that the available amount of system resources for image processing is lower than the threshold, the image can be processed by adopting a simplified image processing algorithm under the condition of not adjusting the number of frames of the image to be acquired.
As a possible implementation, at least two frames of the first image may be block-aligned or globally aligned.
It should be noted that whether block alignment or global alignment is performed may be determined according to the photographic subject of the at least two frames of the first image. For example, when the captured image is mainly a person, the person regions in at least two first images may be selected for alignment. The image alignment method may refer to the prior art, and is not described in detail in this embodiment.
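The two alignment choices can be contrasted on one-dimensional pixel rows. This brute-force integer-shift sketch is illustrative only; a real pipeline would align two-dimensional tiles with subpixel search:

```python
def best_shift(ref, img, max_shift=3):
    """Global alignment: one integer shift of `img` minimizing the mean
    squared difference to `ref` over the overlapping pixels."""
    def score(s):
        pairs = [(ref[i], img[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(img)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=score)

def block_align(ref, img, block=4, max_shift=3):
    """Block alignment: one shift estimate per block of the reference row,
    so different regions (e.g. a person area) can move independently."""
    return [best_shift(ref[i:i + block], img[i:i + block], max_shift)
            for i in range(0, len(ref) - block + 1, block)]
```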
Step 602, performing multi-frame noise reduction on at least two frames of the first image to obtain a synthesized noise-reduced image.
Multi-frame noise reduction collects multiple frames of images through the image sensor in a night scene or dim-light environment, finds the pixel points that carry noise in the different frames, and obtains a clean, pure night scene or dim-light photo after weighted synthesis.
In the embodiment of the application, when the electronic device shoots a night scene or a dim-light environment through the image sensor, at least two frames of first images are collected, the number and positions of noise points across the at least two frames are calculated and screened, and noisy positions are replaced with the corresponding noise-free positions from other frames; through repeated weighting and replacement, a clean synthesized noise-reduced image is obtained. In this way, multi-frame noise reduction handles dark-area details very softly and retains more image detail while reducing noise.
In the embodiment of the application, the definition of the at least two first images obtained by shooting can be judged according to the definition threshold of the images, and then the obtained at least two first images are screened and the clear images are kept for synthesis. Specifically, when the definition of the first image is greater than or equal to the definition threshold, it is indicated that the first image is clear, the first image is retained, and when the definition of the first image is less than the definition threshold, it is indicated that the first image is blurred, and the first image is filtered. Further, the remaining sharp first images are synthesized to obtain a synthesized noise-reduced image.
The definition threshold is a value determined by manually testing the definition of a large number of images: when the definition of an image is greater than this value, the image is clear; when it is less than this value, the image is blurred.
As a possible implementation manner, the definition of the at least two frames of first images is compared with the definition threshold of the image and the at least two frames of first images are screened; if the number of first images screened out is not zero, the noise suppression degree is increased from the initial noise suppression degree according to the number of frames screened out.

It can be understood that, when the number of frames of first images screened out is large, many of the captured first images are blurred and must be discarded, so fewer images remain for noise reduction, and the noise suppression degree is increased from the initial value so that the remaining images are still effectively denoised. Thus, the larger the number of first-image frames screened out, the more the noise suppression degree is increased beyond the initial value. However, after the first images are filtered and denoised with a higher noise suppression degree, less image detail remains.

As another possible implementation manner, the definition of the at least two frames of first images is compared with the definition threshold of the image and the at least two frames of first images are screened; if the number of first images screened out is zero, the definition of every one of the at least two frames of first images captured this time is greater than or equal to the definition threshold, and the noise suppression degree does not need to be increased beyond the initial value.

According to the embodiment of the application, the noise suppression degree is increased or decreased according to the number of first-image frames screened out, and the retained first images are then weighted, synthesized and denoised according to the determined noise suppression degree to obtain a synthesized noise-reduced image, so that the noise of the image is effectively reduced and the information of the image is retained to the maximum extent.
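The screen-then-merge step above can be sketched as follows; the per-dropped-frame increment of the suppression degree and the simple averaging merge are illustrative assumptions:

```python
def multi_frame_denoise(frames, sharpness, threshold, base_suppression=1.0):
    """frames: list of equal-length pixel rows; sharpness: per-frame score.

    Drops frames below the definition threshold, raises the noise-suppression
    degree per dropped frame (assumed +0.5 each, for illustration), and merges
    the remaining frames. Returns (merged row, suppression degree used).
    """
    kept = [f for f, s in zip(frames, sharpness) if s >= threshold]
    if not kept:
        raise ValueError("all frames filtered out")
    dropped = len(frames) - len(kept)
    suppression = base_suppression + 0.5 * dropped
    merged = [sum(col) / len(kept) for col in zip(*kept)]
    return merged, suppression
```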
Step 603, performing high dynamic synthesis on the synthesized noise-reduced image and at least one frame of second image to obtain a target image.
In the embodiment of the application, the synthesized noise-reduced image and the at least one frame of second image are combined region by region to obtain the target image. For example, if the synthesized noise-reduced image is obtained by multi-frame noise reduction of several EV0 original images, it may be over-exposed in highlight areas while properly exposed in mid- and low-brightness areas; the EV value of the at least one second image is usually negative, so the second image is properly exposed in highlight areas while its mid- and low-brightness areas are under-exposed. By combining the parts corresponding to the same area in the different images according to weights, every area of the image can be properly exposed, improving the imaging quality.
It should be noted that, since the noise of the image has been effectively reduced in the synthesized noise-reduced image, and the information of the image is retained to the maximum extent, after the high-dynamic synthesis is performed on at least one frame of second image, the obtained target image contains more picture information, and is closer to the actual scene.
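The region-weighted combination can be sketched per pixel. The weight curve, the 255 highlight scale, and the gain that maps an EV-2 frame back to EV0 brightness are illustrative assumptions, not the scheme of the present application:

```python
def hdr_merge(ev0_pixels, neg_ev_pixels, gain=4.0):
    """Blend per pixel: in highlights the weight favors the under-exposed
    (negative-EV) frame, which is properly exposed there; in mid/low tones
    the EV0 composite dominates. `gain` maps the darker frame back to EV0
    brightness (e.g. ~4x for an EV-2 frame)."""
    out = []
    for p0, pn in zip(ev0_pixels, neg_ev_pixels):
        w = min(p0 / 255.0, 1.0) ** 2  # weight toward the dark frame in highlights
        out.append((1 - w) * p0 + w * pn * gain)
    return out
```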
In the embodiment of the application, according to the available amount, block alignment or global alignment is selected for at least two frames of first images, then multi-frame noise reduction is carried out on the at least two frames of first images to obtain a synthesized noise-reduced image, and high-dynamic synthesis is carried out on the synthesized noise-reduced image and at least one frame of second image to obtain a target image. Therefore, in the obtained target image, the noise of the image is effectively reduced, the information of the image is kept to the maximum extent, the quality of the shot image is improved, and the user experience is improved.
As an example, referring to fig. 7, fig. 7 is an exemplary diagram of a night scene image processing method provided in an embodiment of the present application. As can be seen from fig. 7, after the preview picture is detected and the shooting scene is determined to be a night scene, the image sensor is controlled to capture at least two frames of original images at EV0, one EV-2 original image and one EV-4 original image. The original images are RAW images without any processing. Noise reduction is performed on the at least two EV0 original images to obtain a synthesized noise-reduced image with an improved signal-to-noise ratio, and the synthesized noise-reduced image, the EV-2 original image and the EV-4 original image are combined by high-dynamic synthesis to obtain a target image, which is also a RAW-format image. Further, artificial-intelligence noise reduction is applied to the target image to obtain a target noise-reduced image, which is input into the ISP (image signal processor) for format conversion from the RAW format to a YUV-format image. Finally, the YUV-format target noise-reduced image is input into a JPEG encoder to obtain the final JPG image.
It should be noted that, on the basis of the night view image processing method described in fig. 7, the frame number of the original image to be acquired may be adjusted according to the available amount of system resources for image processing monitored by the monitoring module of the electronic device during the image shooting process, or the image processing algorithm may be simplified, so that the user may not have a long waiting time for shooting a night view under any circumstances, the quality and speed of the shot image are improved, and the user photographing experience is improved.
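The whole pipeline of fig. 7 can be traced with labeled placeholders, making the data flow explicit; every stage function here is a stand-in, not the algorithm of the present application:

```python
def pipeline(num_ev0=3):
    """Trace: EV0 RAW frames -> multi-frame noise reduction -> high-dynamic
    synthesis with EV-2/EV-4 -> AI noise reduction -> RAW->YUV -> JPEG.
    Images are (format, provenance-label) tuples; no pixels are processed."""
    ev0 = [("raw", f"ev0-{i}") for i in range(num_ev0)]
    dark = [("raw", "ev-2"), ("raw", "ev-4")]
    img = ("raw", "mfnr(" + ",".join(name for _, name in ev0) + ")")  # multi-frame denoise
    img = ("raw", f"hdr({img[1]},ev-2,ev-4)")                         # high-dynamic synthesis
    img = ("raw", f"ai-denoise({img[1]})")                            # neural-network denoise
    img = ("yuv", img[1])                                             # ISP RAW -> YUV
    img = ("jpg", img[1])                                             # JPEG encode
    return len(ev0) + len(dark), img
```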
In order to implement the above embodiments, the present application further provides a night scene image processing apparatus.
Fig. 8 is a schematic structural diagram of a night scene image processing apparatus according to an embodiment of the present application.
As shown in fig. 8, the night view image processing apparatus 100 includes: a monitoring module 110, a generating module 120, a synthesizing module 130, and an updating module 140.
A monitoring module 110, configured to monitor an available amount of system resources for performing image processing;
and the generating module 120 is configured to generate a thumbnail according to the acquired frame of image if the available amount is lower than the threshold.
And the synthesizing module 130 is configured to synthesize the acquired multiple frames of images to obtain a target image.
And an updating module 140 for updating the thumbnail according to the target image.
As one possible implementation manner, the night-scene image processing apparatus 100 further includes:
and the acquisition module is used for responding to the shooting operation and acquiring the multi-frame image.
And the processing module is used for selecting one frame of image from the multiple frames of images, or taking the acquired preview image as one frame of image when the shooting operation is detected.
As another possible implementation manner, the acquisition module further includes:
a first determination unit configured to determine an exposure compensation mode according to the available amount; wherein, the exposure compensation mode is used for indicating the number of image frames and the exposure compensation level of each frame image.
And a second determining unit configured to determine a corresponding reference sensitivity according to the degree of shaking.
And the acquisition unit is used for acquiring the multi-frame night scene image conforming to the exposure compensation mode according to the reference sensitivity.
As another possible implementation manner, the first determining unit may be further configured to:
adjusting the image frame number according to the available amount;
identifying whether the preview image contains a human face;
if the human face is included, determining that the exposure compensation mode is a first mode according with the adjusted frame number;
if the human face is not included, determining the exposure compensation mode as a second mode according with the adjusted frame number;
and the value range of the exposure compensation level of the second mode is larger than that of the first mode.
As another possible implementation manner, the night-scene image processing apparatus 100 further includes:
the identification unit is used for identifying the noise characteristics of the target image by adopting a neural network model; the neural network model learns the mapping relation between the reference sensitivity and the noise characteristic.
And the noise reduction unit is used for reducing noise of the target image according to the identified noise characteristics.
As another possible implementation manner, the neural network model is trained by using sample images of each sensitivity until the noise characteristics identified by the neural network model are matched with the noise characteristics labeled in the corresponding sample images, and the training of the neural network model is completed.
As another possible implementation manner, the multi-frame image comprises at least two first images with the same exposure and at least one second image with the exposure lower than that of the first image; the synthesis module 130 may be further configured to:
performing multi-frame noise reduction on at least two frames of first images to obtain a synthesized noise-reduced image; and carrying out high-dynamic synthesis on the synthesized noise-reduced image and at least one frame of second image to obtain a target image.
As another possible implementation, the synthesis module 130 may be further configured to:
block alignment or global alignment is selected for the at least two frames of the first image according to the available amount.
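As an illustrative example of this trade-off (the threshold and the exhaustive integer-shift search are assumptions, not the alignment algorithm disclosed here), block alignment estimates a shift per tile and costs more compute, while global alignment estimates a single shift for the whole frame:

```python
import numpy as np

def choose_alignment(available, threshold=0.5):
    # Block (tile-wise) alignment is more accurate but more expensive, so
    # fall back to a single global shift when resources are scarce.
    return "global" if available < threshold else "block"

def global_align(ref, img, max_shift=2):
    """Cheapest global alignment: exhaustive integer-shift search by SAD."""
    best_cost, best_shift = float("inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            cost = np.abs(ref.astype(int) - shifted.astype(int)).sum()
            if cost < best_cost:
                best_cost, best_shift = cost, (dy, dx)
    return np.roll(img, best_shift, axis=(0, 1))
```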
It should be noted that the explanation of the embodiment of the night-scene image processing method is also applicable to the night-scene image processing apparatus of the embodiment, and is not repeated herein.
According to the night scene image processing device of this embodiment, the available amount of system resources for image processing is monitored; if the available amount is lower than a threshold value, a thumbnail is generated from one collected frame of image, a target image is obtained by synthesizing the collected multi-frame images, and the thumbnail is updated according to the target image. Because the number of acquired images is adjusted according to the monitored available amount of system resources for image processing, the consumption of system resources for image processing is reduced and the total duration of the whole shooting process is shortened. This solves the technical problem that the whole shooting process takes too long and makes the user wait too long, avoids the feeling of pause during image shooting, and improves the user experience.
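The overall flow summarized above might be sketched as follows, with the monitoring, capture, synthesis, and display steps supplied as placeholder callables (all names and the threshold are illustrative assumptions):

```python
def night_shot(monitor, capture_frames, synthesize, show_thumbnail, threshold=0.5):
    available = monitor()               # available system resources for image processing
    frames = capture_frames(available)  # frame count already adjusted to the available amount
    if available < threshold:
        show_thumbnail(frames[0])       # quick feedback from a single collected frame
    target = synthesize(frames)         # synthesize the collected multi-frame images
    show_thumbnail(target)              # update the thumbnail with the target image
    return target
```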
In order to implement the foregoing embodiments, the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the night scene image processing method described in the foregoing embodiments is implemented.
As an example, the present application also proposes an electronic device 200, see fig. 9, comprising an image sensor 210 electrically connected to a processor 220, wherein the processor 220 executes a program to implement the night scene image processing method described in the above embodiments.
As one possible implementation, the processor 220 may include an image signal processor (ISP) and a GPU connected to the ISP processor.
As an example, please refer to fig. 10. On the basis of the electronic device shown in fig. 9, fig. 10 is a schematic diagram illustrating an electronic device according to an embodiment of the present application. The memory 230 of the electronic device 200 includes the non-volatile memory 80 and the internal memory 82, and the electronic device 200 further includes the processor 220. The memory 230 stores computer readable instructions which, when executed by the processor 220, cause the processor 220 to perform the night scene image processing method of any of the above embodiments.
As shown in fig. 10, the electronic device 200 includes a processor 220, a non-volatile memory 80, an internal memory 82, a display screen 83, and an input device 84, which are connected via a system bus 81. The non-volatile memory 80 of the electronic device 200 stores an operating system and computer readable instructions. The computer readable instructions can be executed by the processor 220 to implement the night scene image processing method according to the embodiments of the present application. The processor 220 provides the computing and control capabilities that support the operation of the entire electronic device 200. The internal memory 82 of the electronic device 200 provides an execution environment for the computer readable instructions in the non-volatile memory 80. The display screen 83 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 84 may be a touch layer covering the display screen 83, a button, a trackball, or a touch pad arranged on the housing of the electronic device 200, or an external keyboard, touch pad, or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, or smart glasses). It will be understood by those skilled in the art that the structure shown in fig. 10 is only a schematic diagram of the part of the structure relevant to the present application and does not limit the electronic device 200 to which the present application is applied; a specific electronic device 200 may include more or fewer components than shown in the drawings, combine some components, or have a different arrangement of components.
To implement the foregoing embodiments, an image processing circuit is further provided in the present application. Please refer to fig. 11, which is a schematic diagram of an image processing circuit according to an embodiment of the present application. As shown in fig. 11, the image processing circuit 90 includes an image signal processor (ISP) 91 (the ISP processor 91 serves as the processor 220) and a graphics processor (GPU).
The image data captured by the camera 93 is first processed by the ISP processor 91, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the camera 93. The camera 93 may include one or more lenses 932 and an image sensor 934. The image sensor 934 may include an array of color filters (e.g., Bayer filters) and may acquire the light intensity and wavelength information captured by each imaging pixel, providing a set of raw image data that may be processed by the ISP processor 91. The sensor 94 (e.g., a gyroscope) may provide image processing parameters for the acquired image (e.g., anti-shake parameters) to the ISP processor 91 based on the interface type of the sensor 94. The sensor 94 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination thereof.
In addition, the image sensor 934 may also send raw image data to the sensor 94, the sensor 94 may provide the raw image data to the ISP processor 91 based on the type of interface of the sensor 94, or the sensor 94 may store the raw image data in the image memory 95.
The ISP processor 91 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 91 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
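As a small illustration of converting between such bit depths — an assumption about how values could be normalized when operations run at a different precision, not the ISP's actual behavior:

```python
def rescale_bit_depth(value, src_bits, dst_bits):
    """Map a pixel value from src_bits precision to dst_bits precision."""
    src_max = (1 << src_bits) - 1   # e.g. 1023 for 10-bit raw data
    dst_max = (1 << dst_bits) - 1   # e.g. 255 for 8-bit processing
    return round(value * dst_max / src_max)
```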
The ISP processor 91 may also receive image data from the image memory 95. For example, the sensor 94 interface sends raw image data to the image memory 95, and the raw image data in the image memory 95 is then provided to the ISP processor 91 for processing. The image memory 95 may be the memory 230, a portion of the memory 230, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 934 interface, the sensor 94 interface, or the image memory 95, the ISP processor 91 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 95 for additional processing before being displayed. The ISP processor 91 receives the processed data from the image memory 95 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 91 may be output to the display 97 (which may include the display screen 83) for viewing by a user and/or further processed by a graphics engine or GPU. Further, the output of the ISP processor 91 may also be sent to the image memory 95, and the display 97 may read image data from the image memory 95. In one embodiment, the image memory 95 may be configured to implement one or more frame buffers. Further, the output of the ISP processor 91 may be transmitted to an encoder/decoder 96 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 97. The encoder/decoder 96 may be implemented by a CPU, GPU, or coprocessor.
The statistical data determined by the ISP processor 91 may be sent to the control logic unit 92. For example, the statistical data may include image sensor 934 statistics such as auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, and lens 932 shading correction. The control logic 92 may include a processing element and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the camera 93 and control parameters of the ISP processor 91 based on the received statistical data. For example, the control parameters of the camera 93 may include sensor 94 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 932 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 932 shading correction parameters.
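As one toy example of deriving a control parameter from image statistics, the following computes gray-world auto-white-balance gains from per-channel means (illustrative only; not the control logic of this application):

```python
import numpy as np

def gray_world_gains(rgb):
    """Per-channel gains that equalize the channel means (gray-world assumption)."""
    means = rgb.reshape(-1, 3).mean(axis=0)  # mean of R, G, B over all pixels
    return means.mean() / means              # scale each channel toward the gray mean
```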
The night scene image processing method is implemented with the image processing technology of fig. 11 through the following steps: monitoring the available amount of system resources for image processing; if the available amount is lower than the threshold value, generating a thumbnail according to one collected frame of image; synthesizing the collected multi-frame images to obtain a target image; and updating the thumbnail according to the target image.
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the night scene image processing method as described in the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiment methods may be implemented by a program instructing the relevant hardware; the program may be stored in a computer readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (9)

1. A night scene image processing method, characterized by comprising the steps of:
monitoring the available amount of system resources for image processing;
responding to a shooting operation, and determining an exposure compensation mode according to the available amount; the exposure compensation mode is used for indicating the number of image frames and the exposure compensation level of each frame image;
determining corresponding reference sensitivity according to the jitter degree;
collecting the multi-frame night scene image which accords with the exposure compensation mode according to the reference sensitivity;
selecting one frame of image from the multiple frames of images, or taking a preview image acquired when the shooting operation is detected as one frame of image;
if the available quantity is lower than a threshold value, generating a thumbnail according to the acquired frame of image;
adjusting the frame number of the collected image according to the available amount;
synthesizing the collected multi-frame images to obtain a target image;
and updating the thumbnail according to the target image.
2. The night scene image processing method according to claim 1, wherein determining an exposure compensation mode according to the available amount comprises:
adjusting the image frame number according to the available amount;
identifying whether the preview image contains a human face;
if a human face is included, determining the exposure compensation mode to be a first mode matching the adjusted number of frames;
if no human face is included, determining the exposure compensation mode to be a second mode matching the adjusted number of frames;
and the value range of the exposure compensation level of the second mode is larger than that of the first mode.
3. The night scene image processing method according to claim 1, wherein after synthesizing the collected multi-frame images to obtain the target image, the method further comprises:
adopting a neural network model to identify the noise characteristics of the target image; the neural network model learns the mapping relation between the reference sensitivity and the noise characteristic;
and denoising the target image according to the identified noise characteristics.
4. The night scene image processing method according to claim 3, wherein the neural network model is trained by using sample images at respective sensitivities until the noise characteristics recognized by the neural network model match the noise characteristics labeled in the corresponding sample images, and the neural network model training is completed.
5. The night-scene image processing method according to any one of claims 1 to 4, wherein the multi-frame image includes at least two first images with the same exposure amount and at least one second image with an exposure amount lower than that of the first images;
the synthesizing of the collected multi-frame images to obtain the target image comprises the following steps:
performing multi-frame noise reduction on the at least two frames of first images to obtain a synthesized noise-reduced image;
and performing high-dynamic synthesis on the synthesized noise-reduced image and the at least one frame of second image to obtain the target image.
6. The night scene image processing method according to claim 5, wherein before performing multi-frame denoising on the at least two first images, the method further comprises:
and selecting block alignment or global alignment for the at least two frames of the first image according to the available amount.
7. An apparatus for processing an image of a night scene, the apparatus comprising:
the monitoring module is used for monitoring the available amount of system resources for image processing;
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module comprises a first determining unit, a second determining unit and an acquisition unit, and the first determining unit is used for responding to shooting operation and determining an exposure compensation mode according to available amount; the exposure compensation mode is used for indicating the number of image frames and the exposure compensation level of each frame image;
the second determining unit is used for determining corresponding reference sensitivity according to the jitter degree;
the acquisition unit is used for acquiring a multi-frame night scene image conforming to an exposure compensation mode according to the reference sensitivity;
the processing module is used for selecting one frame of image from the multiple frames of images, or taking a preview image acquired when the shooting operation is detected as one frame of image;
the generation module is used for generating a thumbnail according to the acquired frame of image if the available quantity is lower than a threshold value;
the first determining unit is further configured to adjust the number of frames of the acquired image according to the available amount;
the synthesis module is used for synthesizing the collected multi-frame images to obtain a target image;
and the updating module is used for updating the thumbnail according to the target image.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the night scene image processing method according to any one of claims 1 to 6 when executing the program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the night-scene image processing method according to any one of claims 1 to 6.
CN201910509696.3A 2019-06-13 2019-06-13 Night scene image processing method and device, electronic equipment and storage medium Active CN110166708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910509696.3A CN110166708B (en) 2019-06-13 2019-06-13 Night scene image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110166708A CN110166708A (en) 2019-08-23
CN110166708B true CN110166708B (en) 2021-06-11

Family

ID=67628874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910509696.3A Active CN110166708B (en) 2019-06-13 2019-06-13 Night scene image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110166708B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717871A (en) * 2019-09-30 2020-01-21 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110782413B (en) * 2019-10-30 2022-12-06 北京金山云网络技术有限公司 Image processing method, device, equipment and storage medium
CN110661960B (en) * 2019-10-30 2022-01-25 Oppo广东移动通信有限公司 Camera module and electronic equipment
CN112929558B (en) * 2019-12-06 2023-03-28 荣耀终端有限公司 Image processing method and electronic device
CN113744117A (en) * 2020-05-29 2021-12-03 Oppo广东移动通信有限公司 Multimedia processing chip, electronic equipment and dynamic image processing method
CN111726523A (en) * 2020-06-16 2020-09-29 Oppo广东移动通信有限公司 Image processing method and device and storage medium
US11928799B2 (en) 2020-06-29 2024-03-12 Samsung Electronics Co., Ltd. Electronic device and controlling method of electronic device
CN115706766B (en) * 2021-08-12 2023-12-15 荣耀终端有限公司 Video processing method, device, electronic equipment and storage medium
CN118175436A (en) * 2022-08-25 2024-06-11 荣耀终端有限公司 Image processing method and related device
CN116668837B (en) * 2022-11-22 2024-04-19 荣耀终端有限公司 Method for displaying thumbnail images and electronic device
CN116347229B (en) * 2022-12-21 2024-03-15 荣耀终端有限公司 Image shooting method and electronic equipment
CN116708996B (en) * 2023-08-07 2023-11-17 荣耀终端有限公司 Photographing method, image optimization model training method and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217643A (en) * 2007-12-26 2008-07-09 广东威创视讯科技股份有限公司 A method and corresponding device for dynamic capture and collection, display of images with different sizes and resolution
CN104125397A (en) * 2014-06-30 2014-10-29 联想(北京)有限公司 Data processing method and electronic equipment
CN108289172A (en) * 2018-01-20 2018-07-17 深圳天珑无线科技有限公司 Adjust the method, device and mobile terminal of shooting correlation function
CN109918190A (en) * 2017-12-13 2019-06-21 华为技术有限公司 A kind of collecting method and relevant device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9385807B2 (en) * 2014-03-28 2016-07-05 Intel Corporation Light wave communication
CN108419013B (en) * 2018-03-19 2020-07-28 浙江国自机器人技术有限公司 Image acquisition system and mobile robot
CN108989700B (en) * 2018-08-13 2020-05-15 Oppo广东移动通信有限公司 Imaging control method, imaging control device, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN110166708A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN110072051B (en) Image processing method and device based on multi-frame images
CN110166708B (en) Night scene image processing method and device, electronic equipment and storage medium
CN110072052B (en) Image processing method and device based on multi-frame image and electronic equipment
CN110062160B (en) Image processing method and device
CN110191291B (en) Image processing method and device based on multi-frame images
CN109005366B (en) Night scene shooting processing method and device for camera module, electronic equipment and storage medium
CN108900782B (en) Exposure control method, exposure control device and electronic equipment
CN109068067B (en) Exposure control method and device and electronic equipment
CN110290289B (en) Image noise reduction method and device, electronic equipment and storage medium
CN109788207B (en) Image synthesis method and device, electronic equipment and readable storage medium
CN109040609B (en) Exposure control method, exposure control device, electronic equipment and computer-readable storage medium
CN110248106B (en) Image noise reduction method and device, electronic equipment and storage medium
WO2020207261A1 (en) Image processing method and apparatus based on multiple frames of images, and electronic device
CN110166707B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN110166709B (en) Night scene image processing method and device, electronic equipment and storage medium
CN110166706B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN109194882B (en) Image processing method, image processing device, electronic equipment and storage medium
CN109348088B (en) Image noise reduction method and device, electronic equipment and computer readable storage medium
CN109672819B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110264420B (en) Image processing method and device based on multi-frame images
CN109151333B (en) Exposure control method, exposure control device and electronic equipment
CN109005369B (en) Exposure control method, exposure control device, electronic apparatus, and computer-readable storage medium
CN110166711B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN109756680B (en) Image synthesis method and device, electronic equipment and readable storage medium
CN110213462B (en) Image processing method, image processing device, electronic apparatus, image processing circuit, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant