CN115699785A - Screen shooting control method, terminal device and storage medium - Google Patents

Screen shooting control method, terminal device and storage medium

Info

Publication number
CN115699785A
Authority
CN
China
Prior art keywords
light source
shooting
screen
imaging area
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080101595.3A
Other languages
Chinese (zh)
Inventor
张学勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN115699785A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

A method for controlling under-screen shooting, a terminal device, and a storage medium are provided, where the method is applied to a terminal device comprising a full screen and a camera module disposed below the full screen. When a target shooting scene contains a light source, matched shooting parameters can be set separately for the light source imaging area and the non-light source imaging area to acquire image information, and the acquired first image information and second image information are used for image synthesis to obtain the final target image. This eliminates the light source diffraction caused by the full-screen structure during under-screen shooting, and improves imaging quality without changing the display quality of the full screen.

Description

Screen shooting control method, terminal device and storage medium

Technical Field
The present invention relates to under-screen shooting technology, and in particular to an under-screen shooting control method, a terminal device, and a storage medium.
Background
To achieve full-screen display on electronic devices, under-screen camera technology was developed, in which the camera is mounted below the screen of the mobile terminal. As shown in fig. 1, in a shooting scene, light passes through the camera area at the top of the full screen and enters the camera to form an image on the image sensor, so that shooting is realized. As shown in fig. 2, in a full-screen display scene the camera area can be displayed normally as part of the display, achieving the full-screen display effect. Under-screen camera technology thus allows the camera area to both display and shoot, and the user cannot see the camera through the screen under any circumstances, realizing full-screen display for electronic products such as mobile phones.
However, because the camera area shares the screen structure that enables normal display, that structure interferes with the light collected by the camera and thus degrades imaging quality; improving imaging quality would require increasing the light transmittance of the camera area, which in turn degrades display quality. Existing under-screen camera technology is therefore still imperfect, and the trade-off between imaging quality and display quality urgently needs to be resolved.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide an under-screen shooting control method, a terminal device, and a storage medium.
In a first aspect, an embodiment of the present application provides an under-screen shooting control method applied to a terminal device, where the terminal device includes a full screen and a camera module disposed below the full screen; the method comprises the following steps:
when a target shooting scene is detected to contain a light source, determining a light source imaging area and a non-light source imaging area;
acquiring a first shooting parameter matched with the light source imaging area and a second shooting parameter matched with the non-light source imaging area;
acquiring first image information of the light source imaging area acquired by the camera module through the full screen based on the first shooting parameter;
acquiring second image information of the non-light source imaging area acquired by the camera module through the full screen based on the second shooting parameter;
and obtaining a target image of the target shooting scene based on the first image information and the second image information.
In a second aspect, an embodiment of the present application further provides a terminal device, where the terminal device includes a full-screen and a camera module disposed below the full-screen; the terminal device further includes:
a detection section configured to determine a light source imaging area and a non-light source imaging area when detecting that a light source is included in a target photographic scene;
an acquisition section configured to acquire a first photographing parameter matching the light source imaging region and a second photographing parameter matching the non-light source imaging region;
the control part is configured to acquire first image information of the light source imaging area acquired by the camera module through the full-screen based on the first shooting parameter; acquiring second image information of the non-light source imaging area acquired by the camera module through the full screen based on the second shooting parameter;
and the processing part is configured to obtain a target image of the target shooting scene based on the first image information and the second image information.
In a third aspect, an embodiment of the present application further provides a terminal device, where the terminal device includes a full-screen and a camera module disposed below the full-screen; the terminal device further includes: a processor and a memory configured to store a computer program operable on the processor;
wherein the processor is configured to perform the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present embodiments also provide a computer-readable storage medium for storing a computer program, where the computer program makes a computer execute the steps of the method according to the first aspect.
The present application provides an under-screen shooting control method, a terminal device, and a storage medium. The method is applied to a terminal device comprising a full screen and a camera module disposed below the full screen, and comprises: when a light source is detected in a target shooting scene, determining a light source imaging area and a non-light source imaging area; acquiring a first shooting parameter matched with the light source imaging area and a second shooting parameter matched with the non-light source imaging area; acquiring first image information of the light source imaging area collected by the camera module through the full screen based on the first shooting parameter; acquiring second image information of the non-light source imaging area collected by the camera module through the full screen based on the second shooting parameter; and obtaining a target image of the target shooting scene based on the first image information and the second image information. In this way, when the target shooting scene contains a light source, matched shooting parameters can be set separately for the light source imaging area and the non-light source imaging area for image information collection, and the collected first and second image information are used for image synthesis to obtain the final target image; this eliminates the light source diffraction caused by the full-screen structure during under-screen shooting and improves imaging quality without changing the display quality of the full screen.
Drawings
Fig. 1 is a schematic diagram of the composition structure of a full screen in a shooting scene;
Fig. 2 is a schematic diagram of the composition structure of a full screen in a full-screen display scene;
Fig. 3 is a first flowchart of the under-screen shooting control method in the embodiment of the present application;
Fig. 4 is a schematic diagram of the ideal imaging result of a camera;
Fig. 5 is a schematic diagram of the actual imaging result of an under-screen camera;
Fig. 6 is a schematic diagram of the first window division of the light source imaging area in the embodiment of the present application;
Fig. 7 is a second flowchart of the under-screen shooting control method in the embodiment of the present application;
Fig. 8 is a schematic diagram of the second window division of the light source imaging area in the embodiment of the present application;
Fig. 9 is a schematic diagram of the third window division of the light source imaging area in the embodiment of the present application;
Fig. 10 is a third flowchart of the under-screen shooting control method in the embodiment of the present application;
Fig. 11 is a schematic diagram of the shooting effect of a conventional under-screen shooting device;
Fig. 12 is a schematic diagram of the shooting effect of the under-screen shooting device optimized according to the present application;
Fig. 13 is a schematic diagram of a first composition structure of the terminal device in the embodiment of the present application;
Fig. 14 is a schematic diagram of a second composition structure of the terminal device in the embodiment of the present application;
Fig. 15 is a schematic structural diagram of a chip in the embodiment of the present application.
Detailed Description
So that the manner in which the features and aspects of the embodiments of the present invention can be understood in detail, a more particular description of the embodiments of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings.
An embodiment of the invention provides an under-screen shooting control method applied to a terminal device. The terminal device comprises a full screen and a camera module disposed below the full screen, and the full screen comprises a first display area and a second display area. The camera module is disposed below the first display area; the second display area is the main display area of the full screen; the first and second display areas together form the whole display area of the full screen, and the screen structures corresponding to the two areas may be the same. For example, the positional relationship between the first and second display areas may be as shown in fig. 1, and the position of the first display area may also be set flexibly according to actual requirements.
The terminal device may be any device having an under-screen shooting function, and may include, for example, a mobile phone, a tablet computer, a notebook computer, a handheld computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a camera, and the like.
As shown in fig. 3, the under-screen shooting control method implemented in the terminal device may specifically include:
step 101: when a light source is detected to be contained in a target shooting scene, determining a light source imaging area and a non-light source imaging area;
the light source in the actual shooting scene may include: daylight, tungsten filament lamps, led lamps, fluorescent lamps, incandescent lamps, etc. The target shooting scene can comprise one or more light sources, and the processing method of the application is adopted for each light source to inhibit the diffraction phenomenon of the light source in the imaging area.
In some embodiments, the method of detecting whether the target shooting scene contains a light source may include: acquiring a preview image of a target shooting scene, and performing light source detection on the preview image by adopting a light source detection algorithm; when the light source is contained in the target shooting scene, determining a light source imaging area; and when the target shooting scene is determined not to contain the light source, performing conventional shooting operation on the target shooting scene.
For example, the conventional shooting operation may determine the shooting parameters of the target shooting scene according to the existing full-automatic shooting mode, or the user manually sets the shooting parameters according to the target shooting scene and then shoots to obtain the target image.
Illustratively, the light source detection algorithm may be a detection algorithm based on image gray levels, for example: acquiring statistical information for each pixel in the preview image; determining the white points in the preview image according to the statistical information; and clustering the white points using a preset clustering algorithm to determine the light source imaging area, the remaining area being the non-light source imaging area. In practical application, pixels in a light source imaging area generally have larger gray values; the gray-value ranges of light source imaging areas of different colors can be obtained from the imaging characteristics of light sources of those colors, and whether a light source is present, and where its imaging area lies, can be determined from these pixel gray-value characteristics.
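As a rough illustration of the gray-level approach above, a light source imaging area could be located as follows. The threshold value, the 4-connected flood fill standing in for the clustering algorithm, and all names are assumptions of this sketch, not taken from the application:

```python
import numpy as np

def detect_light_source_regions(gray, white_thresh=240):
    """Classify pixels as 'white points' by gray value, then group
    adjacent white points into labeled regions with a 4-connected
    flood fill (an illustrative stand-in for the clustering step)."""
    white = gray >= white_thresh            # candidate white points
    labels = np.zeros(gray.shape, dtype=int)
    next_label = 0
    h, w = gray.shape
    for sy in range(h):
        for sx in range(w):
            if white[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and white[y, x] and labels[y, x] == 0:
                        labels[y, x] = next_label
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, next_label               # label 0 marks the non-light-source area

# toy preview image: dark background with one bright blob
img = np.zeros((8, 8), dtype=np.uint8)
img[2:4, 2:4] = 250
labels, n = detect_light_source_regions(img)
```

With this toy input, one region is found; its pixels form the light source imaging area, and all label-0 pixels form the non-light source imaging area.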
For example, the light source detection algorithm may also be an artificial intelligence detection algorithm, for example, a detection model is constructed by using the artificial intelligence detection algorithm, then the detection model is trained by using image samples containing different light sources, the detection model is configured in the terminal device, and the light source imaging area in the preview image is detected by using the detection model. In practical application, the detection model can be continuously optimized along with the increase of the light source image samples, so that the accuracy of the detection model is improved.
Step 102: acquiring a first shooting parameter matched with the light source imaging area and a second shooting parameter matched with the non-light source imaging area;
Fig. 4 is a schematic diagram of the ideal imaging result of a camera. As shown in fig. 4, an ideal camera optical system images a point object in object space as a point in image space, i.e., each actual object in the target shooting scene corresponds one-to-one to its image in the imaging area.
In reality, however, due to the limited optical aperture of the camera and the influence of the full-screen structure, a point object in object space is imaged as a spot on the image plane, i.e., it falls on one or several pixels of the image sensor. Fig. 5 is a schematic diagram of the actual imaging result of an under-screen camera. As shown in fig. 5, compared with an ordinary camera, the under-screen camera merely adds a display screen in front of the lens; but the display screen is essentially a mesh-like micro-nano periodic structure in which transparent and opaque areas alternate periodically. Placing such a screen in front of the camera changes the pupil function of the camera's optical system and severely degrades its modulation transfer function and imaging quality. The net effect is that a point object in object space is imaged onto a large number of pixels of the image sensor (i.e., the imaging area). Moreover, the stronger the light from the point object, the more pronounced the diffraction spots become; once the intensity reaches a certain level, the secondary diffraction spots the point forms on the image plane completely drown out the effective signal of neighboring pixels.
Because a light source is bright, diffraction spots easily appear in its imaging area; the embodiments of the present application therefore eliminate the influence of the diffraction spots by setting different shooting parameters for the light source imaging area and the non-light source imaging area, improving the shooting quality of the under-screen camera.
In practical application, the light source imaging area contains the effective light source and its diffraction spots, which lie on both sides of, or around, the effective light source depending on its shape. As shown in fig. 5, for a point light source, because the screen structure is mainly a grid-shaped periodic structure, the diffraction spots of the under-screen camera image are mainly distributed in a cross shape, i.e., secondary diffraction spots are generated along the mutually perpendicular horizontal and vertical directions; therefore only the few secondary diffraction spots with the greatest influence need to be dealt with.
In some embodiments, the acquiring a first photographing parameter matched with the light source imaging region includes: determining a light source distribution characteristic within the light source imaging area; dividing the light source imaging area into N sub-windows; wherein N is a positive integer; and setting first shooting parameters corresponding to the N sub-windows respectively based on the light source distribution characteristics.
In practical application, because the light source imaging area contains both the effective light source and diffraction spots, the light source imaging area needs to be divided into N sub-windows, with matched first shooting parameters set for the different sub-windows, so that the effective light source in the light source imaging area is retained while its diffraction spots are suppressed.
Specifically, setting the first shooting parameters corresponding to the N sub-windows based on the light source distribution characteristics includes: determining, among the N sub-windows and based on the light source distribution characteristics, a first type of sub-window containing the effective light source and a second type of sub-window containing diffraction spots; and setting the first shooting parameter corresponding to the first type of sub-window and the first shooting parameter corresponding to the second type of sub-window based on a diffraction spot suppression strategy.
Here, the diffraction spot suppression strategy is used to suppress the brightness of the diffraction spots; for example, it reduces the exposure time to a preset exposure time, or reduces the ADC gain to a preset gain, according to the diffraction spot brightness. The brighter the diffraction spot, the more the exposure time or ADC gain is reduced; the darker the spot, the less they are reduced.
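A minimal sketch of such a suppression rule, assuming a simple linear relation between spot brightness and exposure reduction (the function name, the scale limits, and the linear model are illustrative, not specified by the application):

```python
def suppressed_exposure(base_exposure_us, spot_brightness,
                        max_brightness=255.0, min_scale=0.1):
    """Scale the exposure time down as diffraction-spot brightness rises:
    the brighter the spot, the shorter the exposure. A brightness of 0
    keeps the base exposure; the maximum brightness reduces it to
    min_scale of the base value (linear interpolation in between)."""
    scale = 1.0 - (1.0 - min_scale) * (spot_brightness / max_brightness)
    return base_exposure_us * scale
```

The same mapping could be applied to an ADC gain instead of an exposure time; only the quantity being scaled changes.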
Fig. 6 is a schematic diagram of the first window division of the light source imaging area in the embodiment of the present application. As shown in fig. 6, the light source imaging area contains the effective light source together with its first-order and second-order diffraction spots, and the light source distribution is cross-shaped. Taking fig. 6 as an example: the middle four sub-windows contain the effective light source and form the first type of sub-window; the 8 sub-windows sharing an edge with the middle four contain diffraction spots and form the second type; and the four corner sub-windows, containing neither the effective light source nor diffraction spots, can be defined as a third type.
That is to say, setting the first shooting parameters corresponding to the N sub-windows based on the light source distribution characteristics further includes: determining, based on the light source distribution characteristics, a third type of sub-window that contains neither the effective light source nor diffraction spots, and setting the first shooting parameter corresponding to this third type.
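The three-way window labeling of the Fig. 6 example can be sketched as follows; the 4 × 4 grid size and the numeric type labels are assumptions for illustration:

```python
def classify_subwindows(n=4):
    """Label the sub-windows of an n x n grid over the light source
    imaging area for a cross-shaped distribution (n=4 matches the
    Fig. 6 example): 1 = effective light source (centre block),
    2 = diffraction spots (cross arms), 3 = neither (corners)."""
    mid = {n // 2 - 1, n // 2}        # the two centre rows/columns
    grid = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            if r in mid and c in mid:
                grid[r][c] = 1        # first type: effective light source
            elif r in mid or c in mid:
                grid[r][c] = 2        # second type: diffraction spots
            else:
                grid[r][c] = 3        # third type: background
    return grid

grid = classify_subwindows()
```

For n = 4 this reproduces the counts described above: 4 first-type windows in the middle, 8 second-type windows in the cross arms, and 4 third-type windows in the corners.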
In practical application, under the influence of light source intensity, the structural characteristics of the screen, and other factors, a light source may produce multiple orders of diffraction spots whose brightness differs from order to order, so different first shooting parameters can be set for secondary spots of different brightness.
Specifically, the second type of sub-window is subdivided into at least two sub-types based on the light source distribution characteristics, and the first shooting parameters corresponding to the different sub-types are set based on the diffraction spot suppression strategy.
Step 103: acquiring first image information of the light source imaging area acquired by the camera module through the full screen based on the first shooting parameter;
in some embodiments, the photographing parameters (i.e., the first photographing parameter and the second photographing parameter) include an exposure parameter and/or a readout circuit gain. That is, the diffraction spots of the light source can be suppressed by adjusting the exposure parameters, or the electrical signals of the diffraction spots can be suppressed by reducing the gain of the readout circuit, or the diffraction spots can be suppressed by adjusting the exposure parameters and reducing the gain of the readout circuit together.
In practical application, when the first shooting parameter is an exposure parameter, a time-shared exposure strategy is applied to the different sub-windows of the light source imaging area to obtain their first image information. When the first shooting parameter is a readout circuit gain, the gains of the readout circuits of the different sub-windows are adjusted and shooting uses a global exposure; for the sub-windows where diffraction spots are located, the readout circuit gain is reduced, so that the readout circuit reads out first image information in which the diffraction spots are suppressed.
Step 104: acquiring second image information of the non-light source imaging area acquired by the camera module through the full screen based on the second shooting parameter;
in practical application, when the second shooting parameter is an exposure parameter, the second shooting parameter may be an exposure parameter determined by the terminal device in an automatic shooting mode for the target shooting scene, and the second shooting parameter is used to expose the non-light source imaging area to obtain second image information of the non-light source imaging area. Here, the method of acquiring the second image information may be to control only the pixels in the non-light source imaging region to operate and acquire the second image information, or to control the pixels in the entire imaging region to operate and acquire the second image information.
When the second photographing parameter is a readout circuit gain, the readout circuit gain may be a default gain of a readout circuit in the image sensor. That is, the gain of the corresponding readout circuit in the non-light source imaging region is not adjusted, and only the gain of the corresponding readout circuit in the light source imaging region is adjusted.
Step 105: and obtaining a target image of the target shooting scene based on the first image information and the second image information.
In practical application, the first image information is pixel information in a light source imaging area, and the second image information is pixel information in a non-light source imaging area. All pixel information can be combined into a full pixel image using a pixel-level combining algorithm.
In some embodiments, the method for obtaining the target image may specifically include: performing image synthesis by using the first image information and the second image information to obtain a synthesized image; and optimizing the synthesized image by adopting a preset image optimization algorithm to obtain the target image. Here, after the composite image is obtained, the image is optimized by using an image optimization algorithm, and the shooting effect can be further improved.
In practical applications, the image optimization algorithm can be used to optimize white balance, glare, saturation, contrast, sharpness, and the like.
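As a sketch of the pixel-level combining step, assuming the two captures are full-frame arrays and a boolean mask marks the light source imaging area (the mask-based merge is an illustrative stand-in for the combining algorithm, not the patent's own implementation):

```python
import numpy as np

def synthesize(first_info, second_info, light_mask):
    """Merge the per-area captures into one full-pixel image: take the
    light source imaging area's pixels from the first capture and all
    remaining pixels from the second capture."""
    return np.where(light_mask, first_info, second_info)

# toy example: a 4x4 frame whose centre 2x2 block is the light source area
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
first = np.full((4, 4), 120, dtype=np.uint8)    # suppressed light-source capture
second = np.full((4, 4), 60, dtype=np.uint8)    # auto-exposed background capture
target = synthesize(first, second, mask)
```

A subsequent optimization pass (white balance, glare, saturation, contrast, sharpness) would then operate on `target`.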
Here, the execution subject of steps 101 to 105 may be a processor of the terminal device.
With the above technical solution, when the target shooting scene contains a light source, matched shooting parameters can be set separately for the light source imaging area and the non-light source imaging area to acquire image information, and the acquired first and second image information are used for image synthesis to obtain the final target image. This eliminates the light source diffraction caused by the full-screen structure during under-screen shooting and improves imaging quality without changing the display quality of the full screen.
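The overall flow of steps 101 to 105 can be summarized in a hedged sketch; the `FakeScene` stub and every helper name on it are hypothetical stand-ins for the detection, parameter-matching, capture, and synthesis parts described above:

```python
def under_screen_capture(scene):
    """High-level flow of steps 101-105 (names are illustrative)."""
    if not scene.contains_light_source():
        return scene.capture_auto()                  # conventional shot
    light_area, other_area = scene.split_areas()     # step 101
    p1 = scene.match_params(light_area)              # step 102
    p2 = scene.match_params(other_area)
    first = scene.capture(light_area, p1)            # step 103
    second = scene.capture(other_area, p2)           # step 104
    return scene.synthesize(first, second)           # step 105

class FakeScene:
    """Minimal stub so the flow above can run end to end."""
    def __init__(self, has_light): self.has_light = has_light
    def contains_light_source(self): return self.has_light
    def capture_auto(self): return "auto"
    def split_areas(self): return "light", "other"
    def match_params(self, area): return f"params[{area}]"
    def capture(self, area, params): return f"img[{area}]"
    def synthesize(self, a, b): return (a, b)
```

On real hardware, each stub method would correspond to the detection part, acquisition part, control part, and processing part of the terminal device described in the second aspect.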
Based on the foregoing embodiment, adjustment of the exposure parameter is taken as an example for further illustration. Fig. 7 is the second flowchart of the under-screen shooting control method in the embodiment of the present application; as shown in fig. 7, the method specifically includes:
step 201: starting an off-screen shooting function and collecting a preview image;
in practical application, the camera module under the screen can be a front-facing camera module of the mobile phone, and after the front-facing camera is opened, the front-facing camera collects preview images and displays the preview images on a preview interface.
Step 202: judging whether the preview image contains a light source, if so, executing step 203; if not, go to step 209;
adopting a light source detection algorithm to carry out light source detection on the preview image; when the light source is contained in the target shooting scene, determining a light source imaging area; and when the target shooting scene is determined not to contain the light source, performing conventional shooting operation on the target shooting scene.
For example, the conventional shooting operation may determine the shooting parameters of the target shooting scene according to the existing full-automatic shooting mode, or the user manually sets the shooting parameters according to the target shooting scene and then shoots to obtain the target image.
Step 203: determining a light source imaging area and a non-light source imaging area;
step 204: acquiring a first shooting parameter matched with a light source imaging area and a second shooting parameter matched with a non-light source imaging area;
Because the light source is bright, diffraction spots easily appear in the imaging area; different exposure parameters are therefore set for the light source imaging area and the non-light source imaging area, and a time-shared exposure method is used to expose the target shooting scene containing the light source several times at different moments, thereby eliminating the influence of the diffraction spots and improving the shooting quality of the under-screen camera.
Specifically, determining a light source distribution characteristic in the light source imaging area; dividing the light source imaging area into N sub-windows; wherein N is a positive integer; and setting first shooting parameters corresponding to the N sub-windows respectively based on the light source distribution characteristics.
In practical application, because the light source imaging area contains both the effective light source and diffraction spots, the light source imaging area needs to be divided into N sub-windows, with matched exposure parameters set for the different sub-windows, so that the effective light source is retained while its diffraction spots are eliminated.
Step 205: controlling the camera module to perform time-sharing exposure on the N sub-windows by using the first shooting parameters to obtain first image information corresponding to the N sub-windows respectively;
specifically, M windows with the same first shooting parameters in the N sub-windows are determined; and controlling the M windows to carry out exposure for M times based on the first shooting parameters corresponding to the M windows respectively.
In practical applications, the light source imaging area may be divided into a plurality of pixel sub-windows, such as 3 × 3,5 × 5,7 × 7, etc., and then each sub-window starts a time-division exposure mode according to different exposure parameters, that is, sub-windows with the same exposure parameters are exposed at the same time, and sub-windows with different exposure parameters are exposed at different times.
Fig. 8 is a schematic diagram illustrating a second window division of the light source imaging area in the embodiment of the present application, as shown in fig. 8, for a point light source, exposure is performed on a first frame, only the pixel shutters of the sub-windows labeled "1" are opened, and the pixel shutters of the other sub-windows are closed; similarly, the second frame is exposed, only the pixel shutter of the sub-window marked as '2' is opened, and the other pixel shutters are closed; similarly, the third frame is exposed, only the pixel shutter of the sub-window marked as '3' is opened, and the other pixel shutters are closed; and repeating the pixel shutter control process for M times until all the pixel shutters of the whole sub-window are opened once in a time-sharing manner, thereby completing the acquisition work of the image information of the light source imaging area.
Fig. 9 is a schematic diagram illustrating a third window division of the light source imaging area in the embodiment of the present application, as shown in fig. 9, for a line light source, exposure is performed for a first frame, only the pixel shutters of the sub-windows labeled "1" are opened, and the pixel shutters of the other sub-windows are closed; similarly, the second frame is exposed, only the pixel shutter of the sub-window marked as '2' is opened, and the other pixel shutters are closed; similarly, the third frame is exposed, only the pixel shutter of the sub-window marked as '3' is opened, and the other pixel shutters are closed; and repeating the pixel shutter control process for M times until all the pixel shutters of the whole sub-window are opened once in a time-sharing manner, thereby completing the acquisition work of the image information of the light source imaging area.
In practical applications, the exposure time of the effective light source should be longer than the exposure time of the diffraction spots around the effective light source, for example, the exposure time of the sub-window labeled "1" is 1 microsecond, the exposure time of the sub-window labeled "2" is 0.5 microsecond, and the exposure time of the sub-window labeled "3" is 0.2 microsecond.
In practical application, the number of time-shared exposures a pixel shutter undergoes depends mainly on the brightness of the bright-light area: the higher the brightness, the more obvious the diffraction effect in an image shot under the screen, and the more severe the diffraction interference with neighboring pixels. To weaken the influence of diffraction spots, the more time-shared exposures the better; however, to improve shooting efficiency, only the few secondary diffraction spots with the largest influence need to be avoided, so the time-shared exposure process between adjacent pixels can be compressed and optimized to a few exposures.
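The time-sharing exposure of Figs. 8-9 can be sketched as a small simulation (an illustrative sketch, not the embodiment's implementation; the function name and the array representation of the scene are assumptions, and the per-label exposure times follow the 1 / 0.5 / 0.2 microsecond example above):

```python
import numpy as np

def time_shared_exposure(scene, labels, exposure_times):
    """Simulate the time-sharing exposure of the light source imaging area.

    scene          -- 2-D array of scene radiance over the light source imaging area
    labels         -- 2-D integer array: label k marks the sub-windows whose pixel
                      shutters open in frame k (1-based, as in Figs. 8-9)
    exposure_times -- mapping label -> exposure time (hypothetical units)
    """
    result = np.zeros_like(scene, dtype=float)
    # One frame per label: only the pixel shutters of that label's
    # sub-windows are open; all others stay closed.
    for label, t in exposure_times.items():
        mask = labels == label
        result[mask] = scene[mask] * t
    return result
```

After all frames, every pixel has been exposed exactly once, with the effective light source (label "1") given the longest exposure and the surrounding diffraction spots progressively shorter ones.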
Step 206: controlling the camera module to carry out global exposure by using the second shooting parameter to obtain second image information;
here, the second shooting parameter may be an exposure parameter determined for the entire target shooting scene according to an existing full-automatic shooting mode, and the entire imaging area is exposed using this parameter to obtain the second image information.
It should be noted that the exposure order of step 205 and step 206 may also be adjusted, that is, the global exposure may be performed first and the time-sharing exposure of the light source imaging area second; the exposure order may be set flexibly in practical applications and is not limited by the embodiment of the present application.
Step 207: carrying out image synthesis by using the first image information and the second image information to obtain a synthesized image;
in practical application, when the second image information is only the image information of the non-light source imaging area, combining the first image information and the second image information to obtain a composite image;
and when the second image information is the image information of the whole imaging area, replacing the information of the same pixel position of the second image information with the first image information to obtain a composite image.
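The second synthesis case above (second image information covering the whole imaging area) can be sketched as follows (an illustrative NumPy sketch; the function name and the boolean-mask representation of the light source imaging area are assumptions):

```python
import numpy as np

def compose_image(first, second, light_mask):
    """Merge the two exposures of step 207.

    second     -- global exposure of the whole imaging area
    first      -- exposure of the light source imaging area (only pixels
                  inside light_mask are meaningful)
    light_mask -- boolean array, True where the light source imaging area is
    """
    out = second.copy()
    # Replace the pixels of the light source imaging area in the global
    # exposure with the corresponding pixels of the first image information.
    out[light_mask] = first[light_mask]
    return out
```

In the first case (second image information covering only the non-light-source area), the same masked assignment simply fills the two disjoint regions from their respective exposures.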
Step 208: optimizing the composite image by adopting a preset image optimization algorithm to obtain a target image;
in practical applications, the image optimization algorithm can be used to optimize white balance, glare, saturation, contrast, sharpness, and the like.
Step 209: and controlling the camera module to carry out global exposure by utilizing the third shooting parameter, and carrying out optimization processing on the acquired image by adopting a preset image optimization algorithm to obtain a target image.
Here, the third shooting parameter may be an exposure parameter determined for the entire target shooting scene according to an existing full-automatic shooting mode; the entire imaging area is globally exposed using this parameter to obtain a whole image, and the whole image is optimized to obtain the target image.
In practical applications, the second shooting parameter and the third shooting parameter may be the same, because both are exposure parameters determined according to the full-automatic shooting mode.
By adopting the time-sharing exposure scheme, different exposure parameters are set for the light source imaging area and the non-light source imaging area, and the target shooting scene containing a light source undergoes time-shared multiple exposures; the influence of diffraction spots can thus be eliminated, and the shooting quality of the under-screen camera is improved.
Based on the foregoing embodiment, taking adjustment of the gain of the readout circuit as an example for further illustration, fig. 10 is a third flowchart of the off-screen shooting control method in the embodiment of the present application, and as shown in fig. 10, the method specifically includes:
step 301: starting an off-screen shooting function and collecting a preview image;
in practical application, the camera module under the screen can be a front camera module of a mobile phone, and after the front camera is opened, the front camera collects preview images and displays the preview images on a preview interface.
Step 302: judging whether the preview image contains a light source, if so, executing step 303; if not, go to step 309;
adopting a light source detection algorithm to carry out light source detection on the preview image; when the light source is contained in the target shooting scene, determining a light source imaging area; and when the light source is determined not to be contained in the target shooting scene, carrying out conventional shooting operation on the target shooting scene.
For example, the conventional shooting operation may determine the shooting parameters of the target shooting scene according to the existing full-automatic shooting mode, or the user manually sets the shooting parameters according to the target shooting scene and then shoots the target image.
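The light source detection of step 302 can be sketched under the assumption that a simple brightness threshold is enough to locate the light source imaging area (the function name, threshold value, and bounding-box output are hypothetical, not from the embodiment, which leaves the detection algorithm open):

```python
import numpy as np

def detect_light_source(preview, threshold=240):
    """Return the bounding box (top, left, bottom, right) of the light source
    imaging area in a grayscale preview image, or None when the preview
    contains no light source (step 309 branch)."""
    bright = preview >= threshold
    if not bright.any():
        return None
    # Rows/columns that contain at least one saturated pixel bound the area.
    rows = np.where(bright.any(axis=1))[0]
    cols = np.where(bright.any(axis=0))[0]
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1
```

A production detector would likely also use connected-component analysis and handle several light sources (as in the five boxed sources of Fig. 11), but the threshold test captures the branch decision of step 302.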
Step 303: determining a light source imaging area and a non-light source imaging area;
step 304: acquiring first shooting parameters matched with the light source imaging area and second shooting parameters matched with the non-light source imaging area;
because the light intensity of the light source is high, diffraction spots easily appear in the imaging area. Different readout circuit gains are therefore set for the light source imaging area and the non-light source imaging area: the readout circuit gain of the sub-window where a diffraction spot is located is reduced, which suppresses the image information of the diffraction spot, achieves the purpose of eliminating the diffraction spot, and improves the shooting quality of the under-screen camera.
Specifically, determining a light source distribution characteristic in the light source imaging area; dividing the light source imaging area into N sub-windows; wherein N is a positive integer; and setting first shooting parameters corresponding to the N sub-windows respectively based on the light source distribution characteristics.
In practical application, because the light source imaging area contains both the effective light source and the diffraction spots, the light source imaging area needs to be divided into N sub-windows, and a matched readout circuit gain is set for each sub-window, so that the effective light source in the light source imaging area is retained while the diffraction spots are eliminated.
In practical application, if the light source imaging area can be detected in the picture preview stage of under-screen shooting, the image sensor can be driven to reduce the readout circuit gain of the pixel electrical signals of the light source imaging area, specifically the gain of an Analog-to-Digital Converter (ADC), so as to reduce the influence of the diffraction effect of a bright light source on the under-screen image.
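The per-sub-window gain reduction described above can be sketched as follows (illustrative only: a real implementation would program the sensor's ADC gain registers rather than scale pixel values in software, and every name here is an assumption):

```python
import numpy as np

def apply_readout_gain(raw, labels, gains):
    """Scale pixel electrical signals by per-sub-window readout (ADC) gain.

    raw    -- raw pixel values from one global exposure
    labels -- integer array: 0 marks the non-light-source area (gain left
              at 1.0); 1..N mark the sub-windows of the light source area
    gains  -- mapping sub-window label -> reduced readout gain (< 1.0 for
              sub-windows containing diffraction spots)
    """
    out = raw.astype(float)
    for label, gain in gains.items():
        out[labels == label] *= gain
    # Clip to the sensor's output range (8-bit assumed for illustration).
    return np.clip(out, 0.0, 255.0)
```

Because the suppression happens in the readout path, a single global exposure suffices, which is the efficiency advantage this embodiment claims over the time-sharing scheme.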
Step 305: adjusting the gain of a reading circuit of the N sub-windows by utilizing the first shooting parameters corresponding to the N sub-windows respectively; adjusting the gain of a reading circuit of a non-light source imaging area by using the second shooting parameter;
step 306: the method comprises the steps that a camera module is controlled to expose a light source imaging area by using preset exposure parameters, and N pieces of first image information read by a reading circuit of N sub-windows and second image information read by a reading circuit of a non-light source imaging area are obtained;
here, the preset exposure parameter may be an exposure parameter that determines a target photographing scene according to an existing full-automatic photographing mode, or an exposure parameter that a user manually sets according to the target photographing scene.
Step 307: carrying out image synthesis by using the first image information and the second image information to obtain a synthesized image;
in practical application, when the second image information is only the image information of the non-light source imaging area, combining the first image information and the second image information to obtain a composite image;
and when the second image information is the image information of the whole imaging area, replacing the information of the same pixel position of the second image information with the first image information to obtain a composite image.
Step 308: optimizing the composite image by adopting a preset image optimization algorithm to obtain a target image;
in practical applications, the image optimization algorithm can be used to optimize white balance, glare, saturation, contrast, sharpness, and the like.
Step 309: no gain adjustment is performed on the readout circuit; the camera module is controlled to perform a global exposure using preset exposure parameters, and the acquired image is optimized with a preset image optimization algorithm to obtain the target image.
Fig. 11 is a schematic diagram of the shooting effect of a conventional under-screen shooting device. In the night scene of fig. 11, light sources are unavoidable, and the light sources framed by the five square boxes in the diagram show obvious diffraction spots.
Fig. 12 is a schematic diagram of a shooting effect of the optimized off-screen shooting device in the present application, and fig. 12 shows that, for the same target shooting scene, diffraction spots of five light sources are significantly suppressed by using the off-screen shooting control method provided in the present application, so that the off-screen shooting effect is greatly improved.
By adopting the scheme of adjusting the readout circuit gain, different readout circuit gains are set for the light source imaging area and the non-light source imaging area, that is, the readout circuit gain of the sub-window where a diffraction spot is located is reduced; the diffraction spots can thus be eliminated with only a single global exposure, and the shooting quality of the under-screen camera is improved.
In order to implement the method according to the embodiment of the present application, based on the same inventive concept, an embodiment of the present application further provides a terminal device, where the terminal device includes a full-screen and a camera module disposed below the full-screen, as shown in fig. 13, the terminal device further includes:
a detection part 1301 configured to determine a light source imaging region and a non-light source imaging region when detecting that a light source is included in a target shooting scene;
an acquisition section 1302 configured to acquire a first photographing parameter matching the light source imaging area and a second photographing parameter matching the non-light source imaging area;
the control part 1303 is configured to acquire first image information of the light source imaging area acquired by the camera module through the full screen based on the first shooting parameter; and acquire second image information of the non-light source imaging area acquired by the camera module through the full screen based on the second shooting parameter;
a processing section 1304 configured to obtain a target image of the target photographic scene based on the first image information and the second image information.
In some embodiments, the control portion 1303 is specifically configured to determine a light source distribution characteristic within the light source imaging area; dividing the light source imaging area into N sub-windows; wherein N is a positive integer; and setting first shooting parameters corresponding to the N sub-windows respectively based on the light source distribution characteristics.
In some embodiments, the control part 1303 is specifically configured to determine, based on the light source distribution feature, a first type of sub-window containing an effective light source and a second type of sub-window containing diffraction spots among the N sub-windows; and set a first shooting parameter corresponding to the first type of sub-window and a first shooting parameter corresponding to the second type of sub-window based on a diffraction spot suppression strategy.
In some embodiments, when the first shooting parameter is an exposure parameter, the control part 1303 is specifically configured to control the camera module to perform time-sharing exposure on the N sub-windows by using the first shooting parameter, so as to obtain first image information corresponding to each of the N sub-windows.
In some embodiments, the control section 1303 is specifically configured to determine M kinds of windows having the same first shooting parameter among the N sub-windows, and control the M kinds of windows to perform M exposures based on the first shooting parameters corresponding to the M kinds of windows respectively.
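The grouping of the N sub-windows into M kinds of windows, each kind sharing one of M exposure passes, can be sketched as (illustrative; the function name and parameter representation are assumptions):

```python
from collections import defaultdict

def group_windows(first_params):
    """Group sub-windows that share the same first shooting parameter.

    first_params -- mapping sub-window id -> first shooting parameter
                    (any hashable value, e.g. an exposure time)
    Returns a mapping parameter -> list of sub-window ids; its length M is
    the number of exposure passes needed, since each group of sub-windows
    is exposed simultaneously in a single frame.
    """
    groups = defaultdict(list)
    for window_id, param in first_params.items():
        groups[param].append(window_id)
    return dict(groups)
```

With the 3-label division of Figs. 8-9, N sub-windows collapse into M = 3 passes regardless of N, which is how the time-sharing process is "compressed and optimized to a few exposures."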
In some embodiments, when the first shooting parameter is a readout circuit gain, the control part 1303 is specifically configured to adjust the readout circuit gains of the N sub-windows by using the first shooting parameters corresponding to the N sub-windows respectively; and controlling the camera module to expose the light source imaging area by using preset exposure parameters, and acquiring N pieces of first image information read by the reading circuits of the N sub-windows.
In some embodiments, the processing part 1304 is specifically configured to perform image synthesis by using the first image information and the second image information to obtain a synthesized image; and optimizing the synthesized image by adopting a preset image optimization algorithm to obtain the target image.
By adopting the terminal equipment, when a target shooting scene contains a light source, matched shooting parameters can be respectively set for a light source imaging area and a non-light source imaging area for image information acquisition, and the acquired first image information and second image information are utilized for image synthesis to obtain a final target image, so that the phenomenon of light source diffraction caused by the overall screen structure during screen shooting is eliminated, and the imaging quality is improved on the basis of not changing the overall display quality of the screen.
Based on the hardware implementation of each part of the above terminal device, an embodiment of the present application further provides another terminal device. This terminal device includes a full screen and a camera module disposed below the full screen; as shown in fig. 14, the terminal device further includes: a processor 1401 and a memory 1402 configured to store a computer program capable of running on the processor;
wherein the processor 1401 is configured to execute the method steps in the previous embodiments when running the computer program.
In practice, of course, the various components in the terminal device are coupled together by a bus system 1403, as shown in fig. 14. It is understood that bus system 1403 is used to enable connection communication between these components. The bus system 1403 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 1403 in FIG. 14.
In an embodiment of the present invention, a computer storage medium is provided, where computer-executable instructions are stored, and when executed, the computer-executable instructions implement the method steps of the first or second embodiment.
The device according to the embodiment of the present invention may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as an independent product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
Fig. 15 is a schematic structural diagram of a chip in the embodiment of the present application. The chip 1500 shown in fig. 15 includes a processor 1510, and the processor 1510 can call and run a computer program from a memory to implement the method in the embodiment of the present application.
Optionally, as shown in fig. 15, the chip 1500 may further include a memory 1520. From the memory 1520, the processor 1510 can call and execute a computer program to implement the method in the embodiment of the present application.
The memory 1520 may be a separate device from the processor 1510 or may be integrated into the processor 1510.
Optionally, the chip 1500 may also include an input interface 1530. The processor 1510 can control the input interface 1530 to communicate with other devices or chips, and in particular, can obtain information or data transmitted by other devices or chips.
Optionally, the chip 1500 may also include an output interface 1540. The processor 1510 may control the output interface 1540 to communicate with other devices or chips, and in particular, may output information or data to the other devices or chips.
Optionally, the chip may be applied to the network device in the embodiment of the present application, and the chip may implement a corresponding process implemented by the network device in each method in the embodiment of the present application, and for brevity, details are not described here again.
Optionally, the chip may be applied to the terminal device in the embodiment of the present application, and the chip may implement the corresponding process implemented by the terminal device in each method in the embodiment of the present application, and for brevity, details are not described here again.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system-on-chip or a system-on-chip, etc.
Correspondingly, the embodiment of the present invention further provides a computer storage medium, in which a computer program is stored, and the computer program is configured to execute the data scheduling method of the embodiment of the present invention.
It should be noted that: "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to arrive at new method embodiments. Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict. The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Industrial applicability
The application provides an under-screen shooting control method, a terminal device, and a storage medium. The method is applied to a terminal device that includes a full screen and a camera module disposed below the full screen. When a target shooting scene contains a light source, matched shooting parameters can be set separately for the light source imaging area and the non-light source imaging area to acquire image information, and the acquired first image information and second image information are used for image synthesis to obtain the final target image, so that the light source diffraction phenomenon caused by the full-screen structure during under-screen shooting is eliminated, and the imaging quality is improved without changing the display quality of the full screen.

Claims (10)

  1. An under-screen shooting control method, applied to a terminal device, wherein the terminal device comprises a full screen and a camera module disposed below the full screen; the method comprises the following steps:
    when a light source is detected to be contained in a target shooting scene, determining a light source imaging area and a non-light source imaging area;
    acquiring a first shooting parameter matched with the light source imaging area and a second shooting parameter matched with the non-light source imaging area;
    acquiring first image information of the light source imaging area acquired by the camera module through the full screen based on the first shooting parameter;
    acquiring second image information of the non-light source imaging area acquired by the camera module through the full screen based on the second shooting parameter;
    and obtaining a target image of the target shooting scene based on the first image information and the second image information.
  2. The method of claim 1, wherein the acquiring first photographing parameters matching the light source imaging region comprises:
    determining a light source distribution characteristic within the light source imaging area;
    dividing the light source imaging area into N sub-windows; wherein N is a positive integer;
    and setting first shooting parameters corresponding to the N sub-windows respectively based on the light source distribution characteristics.
  3. The method according to claim 2, wherein the setting of the first shooting parameters corresponding to the N sub-windows respectively based on the light source distribution characteristics comprises:
    determining a first type of sub-window containing an effective light source and a second type of sub-window containing diffraction light spots in the N sub-windows based on the light source distribution characteristics;
    and setting a first shooting parameter corresponding to the first type of sub-window and a first shooting parameter corresponding to the second type of sub-window based on a diffraction light spot suppression strategy.
  4. The method of claim 2, wherein when the first shooting parameter is an exposure parameter, the acquiring first image information of the light source imaging area acquired by the camera module through the full-screen based on the first shooting parameter comprises:
    and controlling the camera module to perform time-sharing exposure on the N sub-windows by using the first shooting parameter to obtain first image information corresponding to the N sub-windows respectively.
  5. The method of claim 4, wherein the controlling the camera module to time-division expose the N sub-windows using the first shooting parameters comprises:
    determining M types of windows with the same first shooting parameters in the N sub-windows;
    and controlling the M windows to carry out exposure for M times based on the first shooting parameters corresponding to the M windows respectively.
  6. The method of claim 2, wherein when the first photographing parameter is a gain of a readout circuit, the acquiring first image information of the light source imaging area acquired by the camera module through the full-screen based on the first photographing parameter comprises:
    adjusting the gain of a reading circuit of the N sub-windows by utilizing the first shooting parameters corresponding to the N sub-windows respectively;
    and controlling the camera module to expose the light source imaging area by using preset exposure parameters, and acquiring N pieces of first image information read by the reading circuits of the N sub-windows.
  7. The method of claim 1, wherein the deriving the target image of the target shooting scene based on the first image information and the second image information comprises:
    performing image synthesis by using the first image information and the second image information to obtain a synthesized image;
    and optimizing the synthesized image by adopting a preset image optimization algorithm to obtain the target image.
  8. A terminal device, comprising a full screen and a camera module disposed below the full screen; the terminal device further comprises:
    a detection section configured to determine a light source imaging area and a non-light source imaging area when detecting that a light source is included in a target photographic scene;
    an acquisition section configured to acquire a first photographing parameter matching the light source imaging region and a second photographing parameter matching the non-light source imaging region;
    the control part is configured to acquire first image information of the light source imaging area acquired by the camera module through the full screen based on the first shooting parameter; acquiring second image information of the non-light source imaging area acquired by the camera module through the full screen based on the second shooting parameter;
    and the processing part is configured to obtain a target image of the target shooting scene based on the first image information and the second image information.
  9. A terminal device, comprising a full screen and a camera module disposed below the full screen; the terminal device further comprises: a processor and a memory configured to store a computer program operable on the processor;
    wherein the processor is configured to perform the steps of the method according to any of the claims 1-7 when running the computer program.
  10. A computer-readable storage medium for storing a computer program which causes a computer to perform the steps of the method according to any one of claims 1-7.
CN202080101595.3A 2020-06-23 2020-06-23 Screen shooting control method, terminal device and storage medium Pending CN115699785A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/097776 WO2021258300A1 (en) 2020-06-23 2020-06-23 In-screen photography control method, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN115699785A true CN115699785A (en) 2023-02-03

Family

ID=79282701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080101595.3A Pending CN115699785A (en) 2020-06-23 2020-06-23 Screen shooting control method, terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN115699785A (en)
WO (1) WO2021258300A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115580690B (en) * 2022-01-24 2023-10-20 荣耀终端有限公司 Image processing method and electronic equipment
CN115565213B (en) * 2022-01-28 2023-10-27 荣耀终端有限公司 Image processing method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11019326B2 (en) * 2018-02-09 2021-05-25 Jenoptik Optical Systems, Llc Light-source characterizer and associated methods
CN108366186B (en) * 2018-02-09 2020-07-24 Oppo广东移动通信有限公司 Electronic device, display screen and photographing control method
CN108322651B (en) * 2018-02-11 2020-07-31 Oppo广东移动通信有限公司 Photographing method and device, electronic equipment and computer readable storage medium
CN108989496A (en) * 2018-07-18 2018-12-11 苏州天为幕烟花科技有限公司 The mobile phone of the complementary realization of dot matrix shields technology comprehensively
CN110248004A (en) * 2019-06-25 2019-09-17 Oppo广东移动通信有限公司 Terminal, the control method of terminal and image acquiring method
CN110430375B (en) * 2019-07-23 2021-07-20 Oppo广东移动通信有限公司 Imaging method, imaging device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2021258300A1 (en) 2021-12-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination