CN111970451B - Image processing method, image processing device and terminal equipment


Info

Publication number
CN111970451B
Authority
CN
China
Prior art keywords
image
mode
current
current image
display mode
Prior art date
Legal status
Active
Application number
CN202010899187.9A
Other languages
Chinese (zh)
Other versions
CN111970451A (en)
Inventor
卓海杰
Current Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd filed Critical Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN202010899187.9A priority Critical patent/CN111970451B/en
Publication of CN111970451A publication Critical patent/CN111970451A/en
Application granted granted Critical
Publication of CN111970451B publication Critical patent/CN111970451B/en

Classifications

    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Abstract

The application is applicable to the technical field of image processing, and provides an image processing method, an image processing device, a terminal device and a computer-readable storage medium. The image processing method is applied to a terminal device with an under-screen camera and comprises the following steps: acquiring a current image output by an image sensor of the under-screen camera; determining a current image display mode, wherein the current image display mode comprises a dynamic image display mode and a static image display mode; determining a target pixel arrangement mode according to the current image display mode, wherein the target pixel arrangement mode is different from the arrangement mode of the photodiodes of the image sensor; and processing the current image according to the target pixel arrangement mode to obtain a target image. By the method, the image quality of the image output by the under-screen camera can be improved.

Description

Image processing method, image processing device and terminal equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, relates to an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium.
Background
In some terminal devices, in order to achieve the effect of a full screen, a camera is arranged below the screen of the terminal device. However, an under-screen camera is often interfered with by the screen: for example, the screen reduces the light transmittance available for shooting, so that the images collected by the under-screen camera appear hazy. The quality of the images output by an under-screen camera is therefore often poor, and it is difficult to meet the image display requirements of various application scenes. Moreover, a poor-quality image output by the under-screen camera greatly limits how much subsequent digital image processing algorithms can optimize it, making it difficult to improve the image presentation effect.
Disclosure of Invention
The embodiments of the application provide an image processing method, an image processing device, a terminal device and a computer-readable storage medium, which can improve the image quality of an image output by an under-screen camera.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to a terminal device having an under-screen camera, where the image processing method includes:
acquiring a current image output by an image sensor of the under-screen camera;
determining a current image display mode, wherein the current image display mode comprises a dynamic image display mode and a static image display mode;
determining a target pixel arrangement mode according to the current image display mode, wherein the target pixel arrangement mode is different from the arrangement mode of the photodiodes of the image sensor;
and processing the current image according to the target pixel arrangement mode to obtain a target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to a terminal device having an under-screen camera, and the image processing apparatus includes:
the acquisition module is used for acquiring a current image output by an image sensor of the under-screen camera;
a first determining module, configured to determine a current image display mode, where the current image display mode includes a dynamic image display mode and a static image display mode;
a second determining module, configured to determine a target pixel arrangement manner according to the current image display mode, where the target pixel arrangement manner is different from an arrangement manner of photodiodes of the image sensor;
and the processing module is used for processing the current image according to the target pixel arrangement mode to obtain a target image.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, a display, and a computer program stored in the memory and executable on the processor, where the processor implements the image processing method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the image processing method according to the first aspect.
In a fifth aspect, the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the image processing method described above in the first aspect.
Compared with the prior art, the embodiments of the application have the following advantages. In the embodiments of the application, a current image output by an image sensor of the under-screen camera can be acquired, and a current image display mode is determined, wherein the current image display mode comprises a dynamic image display mode and a static image display mode. At this time, it can be distinguished whether the current image is to be presented in a dynamic form such as video or preview, or in a static form such as photographing. A target pixel arrangement mode is then determined according to the current image display mode, and the current image output by the image sensor is processed according to that arrangement mode. The pixel arrangement of the current image is thus transformed differently for different current image display modes, so that the pixel arrangement in the obtained target image better matches the requirements of the current image display mode, thereby improving the image quality of the image output by the under-screen camera.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of step S103 according to an embodiment of the present application;
fig. 3 is a schematic diagram of an exemplary pixel arrangement of a partial region in a current image and a corresponding region in a first image according to an embodiment of the present application;
fig. 4 is a schematic diagram of an exemplary pixel arrangement of a partial region in a current image and a corresponding region in a second image according to an embodiment of the present application;
fig. 5 is a schematic diagram of an exemplary pixel arrangement of a partial region in a current image and a corresponding region in a third image according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The image processing method provided by the embodiments of the application can be applied to terminal devices such as a server, a desktop computer, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook and a Personal Digital Assistant (PDA); the embodiments of the application do not limit the specific types of the terminal devices.
Specifically, fig. 1 shows a flowchart of an image processing method provided in an embodiment of the present application, where the image processing method can be applied to a terminal device having an under-screen camera.
The under-screen camera may be a camera arranged below the screen of the terminal device. The specific way the under-screen camera is arranged in the terminal device can be determined by developers according to actual requirements.
In actual use, when shooting with the under-screen camera, light must pass through the screen before it is collected by the image sensor of the under-screen camera. The screen therefore often reduces the light transmittance, which may cause problems in the collected images such as excessive noise, flare and diffraction, and low definition, degrading the user experience. At present, the image output by the image sensor of the under-screen camera is usually optimized by digital image processing algorithms. However, a poor-quality image output by the under-screen camera greatly limits how much these subsequent algorithms can optimize it, so it is difficult to improve the image presentation effect.
In the embodiments of the application, the target pixel arrangement mode is selected according to the requirements of different scenes such as dynamic image display and static image display, so that the pixel arrangement of the image output by the under-screen camera can be adapted to each scene. This improves the image quality of the image output by the under-screen camera and leaves sufficient adjustment space for subsequent digital image processing algorithms.
Specifically, as shown in fig. 1, the image processing method may include:
and step S101, acquiring a current image output by an image sensor of the under-screen camera.
In the embodiment of the present application, the arrangement of the pixels in the current image output by the image sensor is determined according to the arrangement of the photodiodes in the image sensor. For example, each pixel point in the current image corresponds to a photodiode in the image sensor. Of course, in some scenarios, the pixel points in the current image may also be obtained based on the outputs of at least two photodiodes.
The arrangement of the photodiodes in the image sensor is not limited herein.
For example, the photodiodes in the image sensor may form a plurality of Bayer arrays. As shown in the schematic diagram of fig. 3 for the current image, a Bayer array in the current image may be a 4 × 4 array consisting of 8 green, 4 blue, and 4 red pixels.
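As an illustration only, the following snippet (a minimal sketch, not part of the patent) models the 4 × 4 array just described in Python, assuming the 2 × 2 same-color clusters depicted in fig. 3; all names here are hypothetical:

```python
import numpy as np

# Hypothetical model of the 4x4 unit described above: 2x2 clusters of
# same-color pixels laid out in Bayer order, giving 4 red, 8 green and
# 4 blue samples per tile (an assumption based on fig. 3, not a spec).
QUAD_TILE = np.array([
    ["R", "R", "G", "G"],
    ["R", "R", "G", "G"],
    ["G", "G", "B", "B"],
    ["G", "G", "B", "B"],
])

def channel_of(row: int, col: int) -> str:
    """Chrominance channel of the photodiode at (row, col)."""
    return QUAD_TILE[row % 4, col % 4]
```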
Step S102, determining a current image display mode, wherein the current image display mode comprises a dynamic image display mode and a static image display mode.
In the embodiment of the present application, the dynamic image display mode may refer to a mode in which a corresponding image is displayed in a dynamic image such as a video, a photo preview, or the like. Illustratively, the dynamic image display mode may include a preview mode and/or a video mode. The still image display mode may refer to a mode in which a corresponding image is displayed in a still form. Illustratively, the still image display mode includes a photographing mode.
Step S103, determining a target pixel arrangement mode according to the current image display mode, wherein the target pixel arrangement mode is different from the arrangement mode of the photodiode of the image sensor.
By determining the current image display mode, it can be determined whether the current image acquired by the image sensor of the off-screen camera is to be presented in a dynamic form such as video or preview or in a static form such as photographing, so as to know the requirement when the image is currently presented.
For example, in the dynamic image display mode, the corresponding image is often processed and displayed in real time, and the image display interface is continuously refreshed according to the frame rate of the video. In the dynamic image display mode, the requirement on image resolution is therefore often low, while there are certain requirements on the dynamic range, contrast and the like of the image, so that a user can quickly grasp the content of the image. For the static image display mode, information such as definition, dynamic range and/or noise needs to be considered for the corresponding image to ensure higher image quality.
Therefore, after determining the current image display mode, the target pixel arrangement mode may be determined according to the specific requirements of the current image display mode.
In the embodiments of the application, the pixel arrangement of the current image output by the image sensor may be adjusted according to the target pixel arrangement mode to obtain a rearranged pixel array. That is, the hardware-determined pixel array arrangement is adjusted at the software level according to the target pixel arrangement mode, so that the quality of the image output by the under-screen camera in the current image display mode is improved.
It should be noted that the target pixel arrangement mode does not actually change the physical arrangement of the photodiodes in the image sensor; instead, the hardware-based pixel arrangement in the current image is converted at the software level through at least one of pixel combination, weight adjustment, interpolation calculation and the like. For example, the photodiodes in the image sensor may form a plurality of Bayer arrays, and the pixel arrangement in the current image may likewise comprise a plurality of Bayer arrays; in the target pixel arrangement mode, the corresponding pixel points may be obtained by rearranging and/or merging the pixel points of these Bayer arrays in software, based on at least one of pixel combination, weight adjustment, interpolation calculation and the like.
And step S104, processing the current image according to the target pixel arrangement mode to obtain a target image.
In the embodiment of the application, the current image is processed according to the target pixel arrangement mode, and the pixel arrangement of the current image can be changed according to the current image display mode, so that the target image obtained after processing can better meet the image display requirement of the current image display mode. The target images obtained according to different target pixel arrangement modes can be different. For example, the target pixel arrangement mode may be set to a first mode, and a first image obtained by processing the current image in the first mode may be used as the target image; in addition, the target pixel arrangement mode may also be set to a second mode, and a second image obtained by processing the current image in the second mode may be used as the target image; alternatively, the target pixel arrangement mode may be set to a third mode, and a third image obtained by processing the current image in the third mode may be used as the target image.
After the current image is processed according to the target pixel arrangement mode, the obtained target image can be output as an original image file (RAW file) output by the under-screen camera. In some scenarios, the target image may be displayed at the terminal device. In some scenarios, the target image may be used as a processing basis for a subsequent digital image processing algorithm, so as to provide sufficient adjustment space for the subsequent digital image processing algorithm.
On the basis of the above embodiments, in some embodiments, the target pixel arrangement may include a first mode, a second mode, and/or a third mode;
as shown in fig. 2, the step S103 may include:
step S201, if the current image display mode is a dynamic image display mode, determining the target pixel arrangement mode according to the dynamic range of the current image;
step S202, if the current image display mode is a static image display mode, determining the target pixel arrangement mode according to the ambient brightness and/or the sensitivity of the current image.
In the embodiments of the application, in the dynamic image display mode, the requirement on image resolution is often low, while there are certain requirements on the dynamic range, contrast and the like of the image, so that a user can quickly grasp the content of the image. Therefore, the target pixel arrangement may be determined according to the dynamic range of the current image. The dynamic range may represent the range from the darkest pixel (minimum pixel value) to the brightest pixel (maximum pixel value) in the corresponding image. The larger the dynamic range, the richer the gradations the image can express, the wider the color space it covers, and the more dark-area and bright-area detail it records. In practical applications, the dynamic range may be represented by a first ratio between the maximum and minimum pixel values in the current image, or a second ratio between the minimum and maximum pixel values. Further, the dynamic range of the current image may be determined based on a pixel value distribution histogram of the current image or the like.
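As a concrete illustration, a minimal sketch of both dynamic range measures follows, assuming 8-bit pixel values; the patent leaves the exact metric open, so the details here are assumptions:

```python
import numpy as np

def dynamic_range_ratio(image: np.ndarray) -> float:
    """First ratio: brightest pixel value over darkest pixel value
    (clamped to 1 to avoid division by zero on pure-black pixels)."""
    lo = max(int(image.min()), 1)
    hi = int(image.max())
    return hi / lo

def histogram_coverage(image: np.ndarray, bins: int = 256) -> float:
    """Fraction of histogram bins that are populated; a narrow spread
    suggests a small dynamic range."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins - 1))
    return np.count_nonzero(hist) / bins
```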
For the static image display mode, information such as definition, dynamic range and/or noise needs to be considered for the corresponding image to ensure higher image quality. Therefore, in some examples, the target pixel arrangement corresponding to the static image display mode may be determined according to information such as ambient brightness or sensitivity, so as to improve image quality in different scenes; whether the light information collected in the current image meets expectations can be judged from the ambient brightness and/or the sensitivity of the current image. The ambient brightness can be detected by an ambient light sensor or the like and may be represented by a parameter such as illuminance. Sensitivity, also known as the ISO value, measures how sensitive the sensor (originally, a film negative) is to light; with a high sensitivity, light information can be collected relatively easily. If the current image display mode is a static image display mode, the target pixel arrangement mode may be determined according to the ambient brightness alone, the sensitivity alone, or a combination of the two.
The specific setting modes of the first mode, the second mode and the third mode can be various. For example, in a first mode, the resolution and dynamic range of the current image may be adjusted; in a second mode, the resolution of the current image may be adjusted; in the third mode, the chrominance channels of the pixels in the current image may be rearranged while maintaining the resolution of the current image.
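Putting the three options together, the following dispatch sketch is an illustration only; the helper functions merge_3hdr, bin_2x2 and remosaic are hypothetical names for the per-mode processing sketched later in this description:

```python
# Hypothetical mode labels; the patent speaks only of first/second/third modes.
FIRST_MODE, SECOND_MODE, THIRD_MODE = "3HDR", "Binning", "Remosaic"

def process_current_image(raw, target_mode):
    """Apply the selected target pixel arrangement mode to the RAW frame."""
    if target_mode == FIRST_MODE:
        return merge_3hdr(raw)   # 1/4 pixel count, wider dynamic range
    if target_mode == SECOND_MODE:
        return bin_2x2(raw)      # 1/4 pixel count, suppressed noise
    if target_mode == THIRD_MODE:
        return remosaic(raw)     # full resolution, rearranged channels
    raise ValueError(f"unknown target pixel arrangement mode: {target_mode}")
```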
In some embodiments, in a first image obtained by processing the current image in the first mode, the number of first pixel points included in the first image is one fourth of the number of current pixel points in the current image, and for each first pixel point, the pixel value of the first pixel point is obtained by synthesizing the current pixel points corresponding to the four same chrominance channels, each current pixel point corresponds to only one exposure parameter, and at least two of the current pixel points corresponding to the four same chrominance channels of the first pixel point have different exposure parameters;
in a second image obtained by processing the current image through the second mode, the number of second pixel points included in the second image is one fourth of the number of current pixel points in the current image, and for each second pixel point, the pixel value of the second pixel point is the average value of the current pixel points with the same four corresponding chrominance channels;
and in a third image obtained by processing the current image through the third mode, the number of third pixel points included in the third image is the same as the number of current pixel points in the current image, and the third pixel points are obtained by rearranging the current pixel points in the current image.
In the first mode, each first pixel point in the first image may be obtained from 4 corresponding current pixel points of the same color in the current image. For each first pixel point, each of its 4 corresponding current pixel points corresponds to exactly one exposure parameter, and at least two of the four same-channel current pixel points have different exposure parameters. Different exposure parameters can then capture detail information under different exposure conditions: for example, a long exposure time captures dark-area details, a short exposure time captures bright-area details, and a normal exposure time captures mid-tone information. Combining the detail information from the different exposure conditions improves the dynamic range of the image. In some application scenarios, the first mode is also referred to as the 3HDR mode.
The specific way of synthesizing the four current pixel points with the same chrominance channel into the pixel value of the corresponding first pixel point can vary. For example, the pixel value of the first pixel point may be obtained by simple averaging, or weights may be assigned to the 4 current pixel points and a weighted average taken to obtain the pixel value of the corresponding first pixel point.
An exemplary pixel arrangement of the partial region in the current image and the corresponding region in the first image is shown in fig. 3.
Among the current pixel points of the 4 red pixel channels, the current pixel point at the upper left corner corresponds to a long exposure time, the current pixel points at the upper right and lower left corners correspond to a medium exposure time, and the current pixel point at the lower right corner corresponds to a short exposure time. In this case, weights may be assigned to the 4 current pixel points and a weighted average taken to obtain the pixel value of the corresponding first pixel point.
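A minimal sketch of such a merge follows, assuming the exposure layout just described, 2 × 2 same-color clusters, and purely illustrative weights (the patent fixes neither the weights nor an API):

```python
import numpy as np

def merge_3hdr(raw: np.ndarray, weights=(0.3, 0.2, 0.2, 0.3)) -> np.ndarray:
    """Collapse each 2x2 same-color cluster (long / medium / medium / short
    exposure, per fig. 3) into one pixel via a weighted average; the output
    has half the height and width, i.e. 1/4 of the pixel count."""
    h, w = raw.shape
    r = raw[:h - h % 2, :w - w % 2].astype(np.float64)  # crop to even size
    a, b, c, d = weights                                # hypothetical weights
    return (a * r[0::2, 0::2] + b * r[0::2, 1::2] +
            c * r[1::2, 0::2] + d * r[1::2, 1::2])
```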
In some examples, after obtaining the first image according to the first mode, operations of sharpening and the like to improve contrast may also be performed on the first image to further enhance detail information of the image.
In the second mode, the pixel value of each second pixel point in the second image is the average value of the four corresponding current pixel points with the same chrominance channel. Since, for each second pixel point, the pixel value is the average of the outputs of four photodiodes of the same color, noise is well suppressed. In some application scenarios, the second mode is also referred to as the Binning mode.
An exemplary pixel arrangement of the partial region in the current image and the corresponding region in the second image is shown in fig. 4.
The average value of the current pixel points of the 4 red pixel channels can be used as the pixel value of the corresponding second pixel point, and the size of the second pixel point corresponds to the size of 4 current pixel points in the current image.
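A minimal binning sketch under the same 2 × 2 same-color-cluster assumption (illustrative only, not the patent's prescribed implementation):

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 same-color cluster into one pixel, halving width
    and height; averaging four samples suppresses sensor noise."""
    h, w = raw.shape
    r = raw[:h - h % 2, :w - w % 2].astype(np.float64)  # crop to even size
    return (r[0::2, 0::2] + r[0::2, 1::2] +
            r[1::2, 0::2] + r[1::2, 1::2]) / 4.0
```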
Compared with the arrangement of the current pixel points in the current image, the chrominance channels of at least some of the third pixel points in the third image are changed. Illustratively, the pixel value of each third pixel point may be obtained by interpolation and/or averaging over current pixel points of different chrominance channels, which are then combined. Of course, the pixels in the third image may also be arranged in other manners. In some application scenarios, the third mode is also referred to as the Remosaic mode.
An exemplary pixel arrangement of the partial region in the current image and the corresponding region in the third image is shown in fig. 5.
The obtained third image may be divided into a plurality of target pixel arrays, where each target pixel array may include one blue pixel, one red pixel and two green pixels, and each target pixel array corresponds to four current pixel points with the same chrominance channel at the corresponding position in the current image. In this case, the resolution of the third image is not reduced and more image information is retained, so that better image quality can be ensured.
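For illustration, a toy remosaic sketch follows. It merely permutes samples inside each 4 × 4 tile from the same-color-cluster layout into a standard RGGB Bayer layout; a production remosaic would also interpolate to correct for the spatial shift of each moved sample, so this permutation is an assumption, not the patent's algorithm:

```python
import numpy as np

# (src_row, src_col) -> (dst_row, dst_col) inside each 4x4 tile.
_PERM = {
    (0, 0): (0, 0), (0, 1): (0, 2), (1, 0): (2, 0), (1, 1): (2, 2),  # red
    (2, 2): (1, 1), (2, 3): (1, 3), (3, 2): (3, 1), (3, 3): (3, 3),  # blue
    (0, 2): (0, 1), (0, 3): (0, 3), (1, 2): (1, 0), (1, 3): (1, 2),  # green
    (2, 0): (2, 1), (2, 1): (2, 3), (3, 0): (3, 0), (3, 1): (3, 2),  # green
}

def remosaic(raw: np.ndarray) -> np.ndarray:
    """Rearrange a same-color-cluster mosaic into standard RGGB Bayer
    order at full resolution (dimensions assumed divisible by 4)."""
    assert raw.shape[0] % 4 == 0 and raw.shape[1] % 4 == 0
    out = np.empty_like(raw)
    for (sr, sc), (dr, dc) in _PERM.items():
        out[dr::4, dc::4] = raw[sr::4, sc::4]
    return out
```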
Specific methods for determining the arrangement of the target pixels are exemplarily described below in two specific image display mode scenarios.
1. For the dynamic image display mode: in the dynamic image display mode, the requirement on image resolution is often low, while there are certain requirements on the dynamic range, contrast and the like of the image, so that a user can quickly grasp the content of the image. Therefore, in some examples, the target pixel arrangement corresponding to the dynamic image display mode may be the first mode and/or the second mode.
In the second mode, the average value of the 4 current pixel points with the same chrominance channel can be used as the equivalent pixel value of the corresponding pixel sub-region. The resolution of the obtained target image is then 1/4 of the resolution of the current image before processing, but noise can be well suppressed.
While the resolution of the target image obtained by the first mode becomes 1/4 that of the current image before processing, the contrast is improved so that the dynamic range of the processed image is higher. Meanwhile, as the resolution of the target image obtained through the first mode or the second mode is reduced, the file size of the target image is smaller, and dynamic display is easier to perform.
2. For the static image display mode: in the static image display mode, information such as sharpness, dynamic range and/or noise needs to be considered for the corresponding image to ensure high image quality. Therefore, in some examples, the target pixel arrangement corresponding to the static image display mode may be determined according to information such as ambient brightness or sensitivity, so as to improve image quality in different scenes. For example, for a daytime scene, the target pixel arrangement mode may be the third mode; in this case, the resolution of the obtained target image is the same as that of the image before processing, so the image definition is better, but the noise suppression effect is weaker than in the first and second modes. For a night scene, the target pixel arrangement mode may be the second mode. In addition, in a dim-light scene, if the dynamic range is small, the target pixel arrangement mode may be the first mode; if the dynamic range is larger, the target pixel arrangement mode may be the second mode.
Therefore, in the embodiment of the application, whether the current image is to be presented in a dynamic form such as video or preview or in a static form such as photographing can be distinguished, so that different parameters are adopted to determine the arrangement mode of the target pixels according to each current image display mode, the determined arrangement mode of the target pixels is more targeted, and the quality of the image output by the camera under the screen can be improved.
In some embodiments, the step S201 includes:
if the current image display mode is a dynamic image display mode, acquiring the dynamic range of the current image;
if the dynamic range meets a first condition, setting a target pixel arrangement mode of the image sensor to be a first mode;
and if the dynamic range does not meet the first condition, setting the target pixel arrangement mode of the image sensor to be a second mode.
In an embodiment of the present application, the first condition may indicate that the dynamic range is smaller than a first preset dynamic range. In practical applications, whether the dynamic range satisfies the first condition may be determined according to a first ratio between the maximum and minimum pixel values in the current image, or a second ratio between the minimum and maximum pixel values. For example, if the first ratio is smaller than a first preset ratio threshold, or the second ratio is larger than a second preset ratio threshold, the dynamic range may be considered to satisfy the first condition. Further, the dynamic range of the current image may be determined based on a pixel value distribution histogram of the current image or the like. The pixel value distribution histogram reflects the distribution of pixel values in the current image; if the range covered by the histogram bars is small and/or the spread they form is too narrow, the dynamic range of the current image may be considered to satisfy the first condition.
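A minimal sketch of this selection for the dynamic image display mode follows, reusing dynamic_range_ratio and the mode labels from the earlier sketches; the threshold value is purely hypothetical:

```python
FIRST_RATIO_THRESHOLD = 16.0  # hypothetical first preset ratio threshold

def select_dynamic_mode(image) -> str:
    """Pick the target pixel arrangement mode in the dynamic display mode."""
    if dynamic_range_ratio(image) < FIRST_RATIO_THRESHOLD:  # first condition
        return FIRST_MODE    # 3HDR: widen the dynamic range
    return SECOND_MODE       # Binning: suppress noise
```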
In the embodiment of the present application, specific setting manners of the first mode and the second mode may be various. For example, if the dynamic range satisfies a first condition, the dynamic range of the image in the current image may be considered to be smaller, and at this time, the target pixel arrangement manner of the image sensor may be set to a first mode to adjust the dynamic range of the current image and reduce the resolution of the current image to obtain the first image, and the first image may be taken as the target image. Illustratively, in a first image obtained by processing the current image in the first mode, the number of first pixel points included in the first image is one fourth of the number of current pixel points in the current image, and for each first pixel point, the pixel value of the first pixel point is obtained by synthesizing the current pixel points corresponding to the four same chrominance channels, each current pixel point corresponds to only one exposure parameter, and at least two of the current pixel points corresponding to the four same chrominance channels of the first pixel point have different exposure parameters.
If the dynamic range does not satisfy the first condition, the dynamic range of the current image may be considered larger; in this case, the target pixel arrangement mode of the image sensor may be set to the second mode, so as to reduce the resolution of the current image and suppress its noise, thereby obtaining the second image, which is taken as the target image. Illustratively, in a second image obtained by processing the current image in the second mode, the number of second pixel points included in the second image is one fourth of the number of current pixel points in the current image, and for each second pixel point, the pixel value of the second pixel point is the average value of the four corresponding current pixel points with the same chrominance channel.
In the embodiments of the application, in the dynamic image display mode, if the dynamic range is insufficient, the first mode can be adopted to improve the dynamic range of the image while suppressing image noise; reducing the resolution also raises the transmission and processing speed in the dynamic image display mode, avoiding stuttering and similar problems in the dynamic image. If the dynamic range already satisfies the requirement, the second mode can be adopted to suppress image noise, with the reduced resolution again raising the transmission and processing speed in the dynamic image display mode. The target image obtained in these embodiments can therefore meet the transmission and display requirements of the dynamic image display mode.
In some embodiments, the step S202 includes:
if the current image display mode is a static image display mode, acquiring brightness information of the current image, wherein the brightness information comprises environment brightness and/or sensitivity corresponding to the current image;
if the brightness information meets a first brightness condition, setting a target pixel arrangement mode of the image sensor to be a second mode;
if the brightness information meets a second brightness condition, setting a target pixel arrangement mode of the image sensor to be a third mode;
and if the brightness information meets a third brightness condition, determining the target pixel arrangement mode according to the dynamic range and/or the noise information of the current image.
In an embodiment of the present application, the luminance information satisfying the first luminance condition may include that the ambient luminance satisfies the first ambient luminance condition, and/or the sensitivity satisfies the first sensitivity condition. The first ambient brightness condition may indicate that the ambient brightness is less than a first preset brightness, and the first sensitivity condition may indicate that the current capability of capturing light information is weak. If the ambient brightness meets the first ambient brightness condition, or if the sensitivity meets the first sensitivity condition, the current scene may be considered as a night scene. At this time, the target pixel arrangement may be set to the second mode, so that noise may be effectively suppressed.
The luminance information satisfying the second luminance condition may include the ambient luminance satisfying a second ambient luminance condition, and/or the sensitivity satisfies a second sensitivity condition. The second ambient brightness condition may indicate that the ambient brightness is greater than a second preset brightness, and the second sensitivity condition may indicate that the current capability of capturing the light information is stronger, for example, the second sensitivity condition may indicate that the sensitivity is greater than a preset sensitivity threshold. If the ambient brightness meets the second ambient brightness condition, or if the sensitivity meets the second sensitivity condition, the current scene may be considered as a daytime scene. At this time, the target pixel arrangement of the image sensor may be set to the third mode. In the third mode, under the condition of ensuring the resolution of the current image, the chrominance channels of all current pixel points in the current image are rearranged, so that the definition of the obtained third image can be ensured, and more image details are saved. Illustratively, in a third image obtained by processing the current image in the third mode, the number of third pixel points included in the third image is the same as the number of current pixel points in the current image, and the third pixel points are obtained by rearranging the current pixel points in the current image.
The luminance information satisfying the third luminance condition may include the ambient luminance satisfying a third ambient luminance condition, and/or the sensitivity satisfying a third sensitivity condition. The third ambient brightness condition may mean that the ambient brightness lies in a preset brightness interval; for example, the ambient brightness value may be no greater than the second preset brightness but greater than the first preset brightness. The third sensitivity condition may indicate a moderate current ability to capture light information; for example, the sensitivity lies in a preset sensitivity interval. If the ambient brightness meets the third ambient brightness condition, or if the sensitivity meets the third sensitivity condition, the current scene may be considered a general dim-light scene. A general dim-light scene is distinguished from a night scene in that its ambient brightness is typically higher. In this case, the image information acquisition state in the current scene can be further distinguished according to the dynamic range and/or the noise information of the current image, so that a reasonable target pixel arrangement mode is determined.
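A minimal decision sketch for the static display mode follows; the lux thresholds are hypothetical, the helper noise_exceeds_threshold is sketched after the FFT discussion below, and SECOND_RATIO_THRESHOLD stands in for the second condition described there:

```python
NIGHT_LUX, DAY_LUX = 10.0, 500.0        # hypothetical preset brightnesses
SECOND_RATIO_THRESHOLD = 8.0            # hypothetical second-condition value

def select_static_mode(image, ambient_lux: float) -> str:
    """Pick the target pixel arrangement mode in the static display mode."""
    if ambient_lux < NIGHT_LUX:         # first brightness condition: night
        return SECOND_MODE              # Binning suppresses noise
    if ambient_lux > DAY_LUX:           # second brightness condition: daytime
        return THIRD_MODE               # Remosaic keeps full resolution
    # third brightness condition: general dim-light scene
    if (noise_exceeds_threshold(image)
            or dynamic_range_ratio(image) < SECOND_RATIO_THRESHOLD):
        return FIRST_MODE               # 3HDR lifts the dynamic range
    return SECOND_MODE
```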
In some embodiments, if the luminance information satisfies a third luminance condition, determining the target pixel arrangement manner according to the dynamic range and/or the noise information of the current image includes:
if the brightness information meets a third brightness condition, judging whether the noise information meets a preset noise condition and/or judging whether the dynamic range meets a second condition;
and if the noise information meets a preset noise condition and/or if the dynamic range meets a second condition, setting the target pixel arrangement mode of the image sensor to be a first mode.
In the embodiments of the application, the preset noise condition may be determined according to the type of the noise information and the way it is obtained. For example, in some embodiments, the noise intensity of the current image may be determined by a preset machine learning model; whether the noise information satisfies the preset noise condition can then be judged from the model's output. Alternatively, an FFT spectrogram containing the noise information of the current image may be obtained by Fast Fourier Transform (FFT) or other methods, and whether the noise information meets the preset noise condition may be judged from the spectral distribution in the FFT spectrogram, for example by checking whether the frequency distribution matches a preset frequency distribution.
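A minimal FFT-based check follows, assuming (purely as an illustration) that a large share of spectral energy in the high-frequency band signals strong noise; the band split and threshold are not taken from the patent:

```python
import numpy as np

def noise_exceeds_threshold(image: np.ndarray,
                            band: float = 0.5,
                            threshold: float = 0.35) -> bool:
    """True if the high-frequency share of the FFT spectrum exceeds the
    threshold (a hypothetical criterion for 'noise too large')."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image.astype(np.float64))))
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalised radial distance from the spectrum centre (DC component)
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return spectrum[r > band].sum() / spectrum.sum() > threshold
```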
The second condition may indicate that the dynamic range is smaller than a second preset dynamic range. In practical applications, whether the dynamic range satisfies the second condition may be determined according to a first ratio between the maximum and minimum pixel values in the current image, or a second ratio between the minimum and maximum pixel values. For example, if the first ratio is smaller than a third preset ratio threshold, or the second ratio is larger than a fourth preset ratio threshold, the dynamic range may be considered to satisfy the second condition. Further, the dynamic range of the current image may be determined based on a pixel value distribution histogram of the current image or the like; if the range covered by the histogram bars is small and/or the spread they form is narrow, the dynamic range of the current image may be considered to satisfy the second condition. The second condition may be the same as or different from the first condition.
In the embodiments of the application, if the luminance information satisfies the third luminance condition, and if the noise information satisfies the preset noise condition and/or the dynamic range satisfies the second condition, the target pixel arrangement mode of the image sensor is set to the first mode. In a general dim-light scene with excessive noise and/or a too-small dynamic range, this improves the dynamic range of the under-screen camera's output while noise is well suppressed.
In some embodiments, if the luminance information satisfies a third luminance condition, and the noise information does not satisfy a preset noise condition, and/or the dynamic range does not satisfy a second condition, the target pixel arrangement of the image sensor is set to the second mode, so that the noise of the output of the under-screen camera in a general dim scene is smaller.
In some embodiments, before determining whether the noise information satisfies a preset noise condition, the method further includes:
and inputting the current image into a preset machine learning model, and obtaining an output result of the preset machine learning model aiming at the current image, wherein the output result comprises noise information of the current image.
For example, the preset machine learning model may include a Convolutional Neural Network (CNN) model such as VGG, ResNet or GoogLeNet, and may also include a Generative Adversarial Network (GAN) model; the type of the preset machine learning model is not limited herein.
The preset machine learning model can be obtained by training on images annotated with ground-truth noise information, or by training on pairs of noisy images and their denoised counterparts.
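For illustration only, a minimal noise-estimation network in this style could look as follows (a sketch assuming PyTorch; the patent fixes neither the architecture nor the training details):

```python
import torch
import torch.nn as nn

class NoiseEstimator(nn.Module):
    """Tiny CNN that regresses a scalar noise intensity from a RAW frame
    (hypothetical stand-in for the preset machine learning model)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                  # x: (N, 1, H, W), normalised RAW
        return self.head(self.features(x).flatten(1))
```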
In the embodiments of the application, a current image display mode can be determined, wherein the current image display mode comprises a dynamic image display mode and a static image display mode. At this time, it can be distinguished whether the current image is to be presented in a dynamic form such as video or preview, or in a static form such as photographing, so that a target pixel arrangement mode of the image sensor of the under-screen camera is determined according to the determined current image display mode, and the current image output by the image sensor is processed according to that arrangement mode. The pixel arrangement of the current image output by the image sensor is thus transformed differently for different current image display modes, so that the pixel arrangement in the obtained target image better matches the requirements of the current image display mode, thereby improving the image quality of the image output by the under-screen camera.
After the target image is obtained, the obtained target image may be output as an original image file (RAW file) output by the off-screen camera. In some scenarios, the target image may be displayed at the terminal device. In some scenarios, the target image may be used as a processing basis for a subsequent digital image processing algorithm, so as to provide sufficient adjustment space for the subsequent digital image processing algorithm. At this point, subsequent digital image processing algorithms may perform image processing operations on the target image, such as target detection, denoising, local blurring, sharpening, and others.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a block diagram of an image processing apparatus according to an embodiment of the present application, which corresponds to the image processing method described above in the foregoing embodiment, and only the relevant portions of the embodiment of the present application are shown for convenience of description.
Referring to fig. 6, the image processing apparatus 6 includes:
an obtaining module 601, configured to obtain a current image output by an image sensor of the under-screen camera;
a first determining module 602, configured to determine a current image display mode, where the current image display mode includes a dynamic image display mode and a static image display mode;
a second determining module 603, configured to determine a target pixel arrangement manner according to the current image display mode, where the target pixel arrangement manner is different from an arrangement manner of photodiodes of the image sensor;
a processing module 604, configured to process the current image according to the target pixel arrangement manner, so as to obtain a target image.
Optionally, the target pixel arrangement mode includes a first mode, a second mode and/or a third mode;
the second determining module 603 specifically includes:
a first determining unit, configured to determine the target pixel arrangement manner according to a dynamic range of the current image if the current image display mode is a dynamic image display mode;
and a second determining unit, configured to determine the target pixel arrangement manner according to ambient brightness and/or sensitivity of the current image if the current image display mode is a static image display mode.
Optionally, the first determining unit specifically includes:
a first obtaining subunit, configured to obtain a dynamic range of the current image if the current image display mode is a dynamic image display mode;
the first setting subunit is used for setting the target pixel arrangement mode of the image sensor to be a first mode if the dynamic range meets a first condition;
and the second setting subunit is used for setting the target pixel arrangement mode of the image sensor to be in a second mode if the dynamic range does not meet the first condition.
Optionally, the second determining unit specifically includes:
a second obtaining subunit, configured to obtain, if the current image display mode is a static image display mode, luminance information of the current image, where the luminance information includes ambient luminance and/or sensitivity corresponding to the current image;
a third setting subunit, configured to set a target pixel arrangement manner of the image sensor to a second mode if the luminance information satisfies the first luminance condition;
a fourth setting subunit, configured to set a target pixel arrangement manner of the image sensor to a third mode if the luminance information satisfies a second luminance condition;
and the determining subunit is configured to determine the target pixel arrangement manner according to the dynamic range and/or the noise information of the current image if the luminance information satisfies a third luminance condition.
Optionally, the determining subunit is specifically configured to:
if the brightness information meets a third brightness condition, judging whether the noise information meets a preset noise condition and/or judging whether the dynamic range meets a second condition;
and if the noise information meets a preset noise condition and/or if the dynamic range meets a second condition, setting the target pixel arrangement mode of the image sensor to be a first mode.
Optionally, the image processing apparatus 6 further includes:
and the second processing module is used for inputting the current image into a preset machine learning model and obtaining an output result of the preset machine learning model aiming at the current image, wherein the output result comprises noise information of the current image.
Optionally, in a first image obtained by processing the current image in the first mode, the number of first pixel points included in the first image is one fourth of the number of current pixel points in the current image, and for each first pixel point, the pixel value of the first pixel point is obtained by synthesizing the current pixel points corresponding to the four same chrominance channels, each current pixel point corresponds to only one exposure parameter, and at least two of the current pixel points corresponding to the four same chrominance channels of the first pixel point have different exposure parameters;
in a second image obtained by processing the current image through the second mode, the number of second pixel points included in the second image is one fourth of the number of current pixel points in the current image, and for each second pixel point, the pixel value of the second pixel point is the average value of the current pixel points with the same four corresponding chrominance channels;
and in a third image obtained by processing the current image through the third mode, the number of third pixel points included in the third image is the same as the number of current pixel points in the current image, and the third pixel points are obtained by rearranging the current pixel points in the current image.
In the embodiments of the application, a current image output by an image sensor of the under-screen camera can be acquired, and a current image display mode is determined, wherein the current image display mode comprises a dynamic image display mode and a static image display mode. At this time, it can be distinguished whether the current image is to be presented in a dynamic form such as video or preview, or in a static form such as photographing. A target pixel arrangement mode is then determined according to the current image display mode, and the current image output by the image sensor is processed according to that arrangement mode. The pixel arrangement of the current image is thus transformed differently for different current image display modes, so that the pixel arrangement in the obtained target image better matches the requirements of the current image display mode, thereby improving the image quality of the image output by the under-screen camera.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one is shown in fig. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, wherein the steps of any of the above-described embodiments of the image processing method are implemented when the processor 70 executes the computer program 72.
The terminal device 7 may be a server, a mobile phone, a wearable device, an Augmented Reality (AR)/Virtual Reality (VR) device, a desktop computer, a notebook computer, a palmtop computer, or another computing device. The terminal device 7 may include, but is not limited to, the processor 70 and the memory 71. In addition, the terminal device 7 may further include an under-screen camera. Those skilled in the art will appreciate that Fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation on it; the terminal device 7 may include more or fewer components than those shown, combine some of the components, or use different components, and may, for example, also include input devices, output devices, and network access devices. The input devices may include a keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, a camera, and the like, and the output devices may include a display, a speaker, and the like.
The processor 70 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or internal memory of the terminal device 7. In other embodiments, the memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used to store an operating system, application programs, a Boot Loader, data, and other programs, such as the program code of the above computer program. The memory 71 may also be used to temporarily store data that has been output or is to be output.
In addition, although not shown, the terminal device 7 may further include a network connection module, such as a Bluetooth module, a Wi-Fi module, or a cellular network module, which is not described in detail here.
In this embodiment, when the processor 70 executes the computer program 72 to implement the steps of any of the image processing method embodiments, it may acquire a current image output by the image sensor of the under-screen camera and determine a current image display mode, where the current image display mode includes a dynamic image display mode and a static image display mode. It can thus distinguish whether the current image is to be presented in a dynamic form, such as video or a preview, or in a static form, such as a photograph, determine a target pixel arrangement mode according to the current image display mode, and process the current image output by the image sensor according to that arrangement mode. Because the pixel arrangement of the image output by the image sensor is transformed according to the current image display mode, the pixel arrangement of the resulting target image better matches the requirements of that display mode, which improves the image quality of images output by the under-screen camera.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments.
The embodiments of the present application also provide a computer program product which, when run on a terminal device, enables the terminal device to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, according to legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative; the division into the above modules or units is only one kind of logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to depart substantially from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. An image processing method, applied to a terminal device having an under-screen camera, the method comprising the following steps:
acquiring a current image output by an image sensor of the under-screen camera;
determining a current image display mode, wherein the current image display mode comprises a dynamic image display mode and a static image display mode;
determining a target pixel arrangement mode according to the current image display mode, wherein the target pixel arrangement mode is different from the arrangement mode of the photodiodes of the image sensor;
and processing the current image according to the target pixel arrangement mode to obtain a target image, and outputting the obtained target image as an original image file of the under-screen camera.
2. The image processing method of claim 1, wherein the target pixel arrangement mode comprises a first mode, a second mode, and/or a third mode;
the determining the target pixel arrangement mode according to the current image display mode comprises the following steps:
if the current image display mode is a dynamic image display mode, determining the target pixel arrangement mode according to the dynamic range of the current image;
and if the current image display mode is a static image display mode, determining the target pixel arrangement mode according to the ambient brightness and/or the sensitivity of the current image.
3. The image processing method according to claim 2, wherein determining the target pixel arrangement mode according to the dynamic range of the current image if the current image display mode is a dynamic image display mode comprises:
if the current image display mode is a dynamic image display mode, acquiring the dynamic range of the current image;
if the dynamic range meets a first condition, setting a target pixel arrangement mode of the image sensor to be a first mode;
and if the dynamic range does not meet the first condition, setting the target pixel arrangement mode of the image sensor to be a second mode.
4. The image processing method according to claim 2, wherein determining the target pixel arrangement mode according to the ambient brightness and/or the sensitivity of the current image if the current image display mode is a static image display mode comprises:
if the current image display mode is a static image display mode, acquiring brightness information of the current image, wherein the brightness information comprises environment brightness and/or sensitivity corresponding to the current image;
if the brightness information meets a first brightness condition, setting a target pixel arrangement mode of the image sensor to be a second mode;
if the brightness information meets a second brightness condition, setting a target pixel arrangement mode of the image sensor to be a third mode;
and if the brightness information meets a third brightness condition, determining the target pixel arrangement mode according to the dynamic range and/or the noise information of the current image.
5. The image processing method according to claim 4, wherein determining the target pixel arrangement mode according to the dynamic range and/or the noise information of the current image if the brightness information satisfies a third brightness condition comprises:
if the brightness information meets a third brightness condition, judging whether the noise information meets a preset noise condition and/or judging whether the dynamic range meets a second condition;
and if the noise information meets a preset noise condition and/or if the dynamic range meets a second condition, setting the target pixel arrangement mode of the image sensor to be a first mode.
6. The image processing method of claim 5, before determining whether the noise information satisfies a preset noise condition, further comprising:
and inputting the current image into a preset machine learning model, and obtaining an output result of the preset machine learning model aiming at the current image, wherein the output result comprises noise information of the current image.
7. The image processing method according to claim 2, wherein in a first image obtained by processing the current image in the first mode, the number of first pixel points in the first image is one quarter of the number of current pixel points in the current image; for each first pixel point, its pixel value is synthesized from the four current pixel points corresponding to the same chrominance channel, each current pixel point corresponds to exactly one exposure parameter, and at least two of the current pixel points corresponding to that first pixel point have different exposure parameters;
in a second image obtained by processing the current image in the second mode, the number of second pixel points in the second image is one quarter of the number of current pixel points in the current image, and for each second pixel point, its pixel value is the average of the four corresponding current pixel points of the same chrominance channel;
and in a third image obtained by processing the current image in the third mode, the number of third pixel points in the third image is the same as the number of current pixel points in the current image, the third pixel points being obtained by rearranging the current pixel points of the current image.
8. An image processing apparatus, applied to a terminal device having an under-screen camera, comprising:
the acquisition module is used for acquiring a current image output by an image sensor of the under-screen camera;
the device comprises a first determining module, a second determining module and a display module, wherein the first determining module is used for determining a current image display mode, and the current image display mode comprises a dynamic image display mode and a static image display mode;
a second determining module, configured to determine a target pixel arrangement manner according to the current image display mode, where the target pixel arrangement manner is different from an arrangement manner of photodiodes of the image sensor;
and the processing module is used for processing the current image according to the target pixel arrangement mode to obtain a target image, and outputting the obtained target image as an original image file of the under-screen camera.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the image processing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 7.
CN202010899187.9A 2020-08-31 2020-08-31 Image processing method, image processing device and terminal equipment Active CN111970451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010899187.9A CN111970451B (en) 2020-08-31 2020-08-31 Image processing method, image processing device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111970451A CN111970451A (en) 2020-11-20
CN111970451B 2022-01-07

Family

ID=73400168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010899187.9A Active CN111970451B (en) 2020-08-31 2020-08-31 Image processing method, image processing device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111970451B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116416656A * 2021-12-29 2023-07-11 Honor Device Co., Ltd. Image processing method, device and storage medium based on under-screen image

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1883397A * 2005-06-20 2006-12-27 Kabushiki Kaisha Toshiba Ultrasonic diagnosis device and ultrasonic metering method
CN101305397A * 2005-10-12 2008-11-12 Active Optics Co., Ltd. Method for forming an image based on a plurality of image frames; image processing system and digital camera
CN103369239A * 2012-03-28 2013-10-23 Samsung Electronics Co., Ltd. Image processing apparatus and method for camera
CN103576832A * 2012-07-13 2014-02-12 Samsung Electronics Co., Ltd. Display control method and apparatus for power saving
JP2015095799A * 2013-11-13 2015-05-18 Xacti Corporation Imaging apparatus and image transfer method
CN109005363A * 2018-08-13 2018-12-14 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image formation control method, device, electronic equipment and storage medium
CN110475066A * 2019-08-20 2019-11-19 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Control method, imaging device and electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4530277B2 * 2005-02-25 2010-08-25 Ricoh Co., Ltd. Image reproducing apparatus, image reproducing method, program, and recording medium
WO2010089830A1 * 2009-02-03 2010-08-12 Panasonic Corporation Image pick-up device
CN105472266A * 2015-12-18 2016-04-06 Guangdong OPPO Mobile Telecommunications Corp., Ltd. High dynamic range image generation method, photographing device and terminal
CN105611185B * 2015-12-18 2017-10-31 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image generating method, device and terminal device
CN106023894B * 2016-08-09 2019-01-22 Shenzhen China Star Optoelectronics Technology Co., Ltd. Driving method and driving system for reducing AMOLED display ghosting
CN106412407B * 2016-11-29 2019-06-07 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Control method, control device and electronic device
CN108391059A * 2018-03-23 2018-08-10 Huawei Technologies Co., Ltd. Image processing method and apparatus
CN110381263B * 2019-08-20 2021-04-13 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method, image processing device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN111970451A (en) 2020-11-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant