WO2023029715A1 - Image processing method, device, system, and storage medium based on under-screen camera - Google Patents

Image processing method, device, system, and storage medium based on under-screen camera Download PDF

Info

Publication number
WO2023029715A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
camera
image
under
display screen
Prior art date
Application number
PCT/CN2022/102973
Other languages
English (en)
French (fr)
Inventor
吴宏超
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2023029715A1

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems

Definitions

  • The present disclosure relates to the field of images, and in particular to an image processing method, device, system, and storage medium based on an under-screen camera.
  • Under-screen camera technology has increased the screen-to-body ratio of mobile terminals (mobile phones, tablets, notebook computers, etc.) by placing the camera under the display screen and shooting through it.
  • Under-screen camera technology increases the transmittance of the display screen in the area overlapping the camera by changing the display circuit in that area, so that this part of the screen can be used for shooting while displaying normally.
  • In existing under-screen cameras, the area where the display screen overlaps the camera contains light-shielding structures, such as display pixels and the lines that drive them, which prevent light from passing through normally. The quality of images captured with existing under-screen camera technology is therefore low.
  • The present disclosure provides an image processing method, device, system, and storage medium based on an under-screen camera.
  • In the image processing method, two images are acquired before and after the relative position of the display screen and the under-screen camera changes, that is, images whose areas affected by the shading structure differ; image fusion is then performed to eliminate the influence of the shading structure on the captured image and to improve the clarity and quality of images captured with under-screen camera technology.
  • The present disclosure provides an image processing method based on an under-screen camera. The method includes: controlling the under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in the display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes; and performing image fusion processing on the first image and the second image to generate a target image.
  • The present disclosure also provides an image processing device based on an under-screen camera, including a memory, a processor, a program stored in the memory and operable on the processor, and a data bus for connection and communication between the processor and the memory. When the program is executed by the processor, the above image processing method based on an under-screen camera is implemented.
  • The present disclosure also provides an image processing system based on an under-screen camera, including a display screen, an under-screen camera, a motor, and the above image processing device based on an under-screen camera.
  • The present disclosure also provides a storage medium for readable storage. The storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the above image processing method based on an under-screen camera.
  • FIG. 1 is a schematic structural diagram of an image processing device based on an under-screen camera provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of an image processing method based on an under-screen camera provided by an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of an area where an under-screen camera and a display screen overlap, provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic cross-sectional view of an under-screen camera area provided by an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of moving a display screen provided by an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of the light-receiving surface of an under-screen camera after movement, provided by an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide an image processing method, device, system, and storage medium based on an under-screen camera. Two images are acquired before and after the relative position of the display screen and the under-screen camera changes, that is, images whose areas affected by the light-shielding structure differ; image fusion is then performed to eliminate the influence of the light-shielding structure on the captured image, improving the clarity and quality of images captured with under-screen camera technology.
  • FIG. 1 is a schematic structural diagram of an image processing device based on an under-screen camera provided by an embodiment of the present disclosure.
  • The image processing device 100 based on an under-screen camera may include a processor 101 and a memory 102, where the processor 101 and the memory 102 may be connected by a bus, for example any suitable bus such as an I2C (Inter-Integrated Circuit) bus.
  • the memory 102 may include a non-volatile storage medium and an internal memory.
  • Non-volatile storage media can store operating systems and computer programs.
  • The computer program includes program instructions which, when executed, cause the processor 101 to perform any image processing method based on an under-screen camera.
  • The processor 101 is used to provide computing and control capabilities and to support the operation of the entire image processing device 100 based on an under-screen camera.
  • the processor 101 can be a central processing unit (Central Processing Unit, CPU), and the processor can also be other general processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (application specific integrated circuits, ASIC), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • The processor 101 is configured to run a computer program stored in the memory 102 and, when executing the computer program, to implement the following steps: controlling the under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in the display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes; and performing image fusion processing on the first image and the second image to generate a target image.
  • When executing the computer program to implement the step of controlling the under-screen camera to capture the first image and the second image through the first photographing window and the second photographing window in the display screen, respectively, the processor 101 is configured to: determine the first photographing window based on the photographing window in the display screen before the movement, and control the under-screen camera to capture the first image through the first photographing window; control a motor to drive the display screen and/or the under-screen camera to move, so that the display screen moves relative to the light-receiving surface of the under-screen camera; and determine the second photographing window based on the photographing window in the display screen after the movement, and control the under-screen camera to capture the second image through the second photographing window.
  • When executing the computer program to implement controlling the motor to drive the display screen and/or the under-screen camera to move, the processor 101 is configured to: control a display screen motor to drive the display screen, or the photographing window in the display screen, to move horizontally relative to the light-receiving surface.
  • When executing the computer program to implement controlling the motor to drive the display screen and/or the under-screen camera to move, the processor 101 is configured to: control a camera motor to drive the light-receiving surface, the lens of the under-screen camera, or the module to which the under-screen camera belongs to move horizontally relative to the display screen.
  • When executing the computer program to implement controlling the motor to drive the display screen and/or the under-screen camera to move, the processor 101 is configured to: control a camera motor to drive the under-screen camera, or the module to which the under-screen camera belongs, to flip relative to the display screen.
  • When executing the computer program to implement the image fusion processing of the first image and the second image, the processor 101 is configured to: fuse the first image and the second image according to a preset image fusion rule, wherein the image fusion rule includes any one of pixel-level image fusion, feature-level image fusion, and decision-level image fusion, and combinations thereof.
  • FIG. 2 is a schematic flowchart of an image processing method based on an under-screen camera provided by an embodiment of the present disclosure.
  • the image processing method based on the under-screen camera is applied to an image processing system based on the under-screen camera, and the image processing system based on the under-screen camera includes a display screen, an under-screen camera, a motor, and an image processing device based on the under-screen camera.
  • the under-screen camera includes one or more cameras.
  • the motors include but are not limited to display screen motors and camera motors.
  • The image processing device based on an under-screen camera includes a memory, a processor, a program stored in the memory and operable on the processor, and a data bus for connection and communication between the processor and the memory.
  • The memory stores the pictures or images captured by the camera or to be displayed on the display screen.
  • the processor controls the display screen, the camera under the screen, the motor and the memory through a communicable data bus.
  • Step S100: controlling the under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in the display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes.
  • In under-screen camera technology, which realizes display at the position of the front camera, display pixels (an RGB pixel array) and the lines that drive the pixels must be provided in order to ensure the shooting effect of the front camera and the normal display of the corresponding area.
  • The area where the under-screen camera and the display screen overlap is similar to a window screen.
  • The black parts are opaque or low-transmittance areas (where pixels and lines are placed), and the white parts are light-transmitting areas.
  • 1 represents the display screen
  • 2 represents the light-receiving surface of the camera. Only the part without pixels and circuits can normally illuminate the light-receiving surface of the camera, and the pixels and circuits block part of the light from passing through.
  • In addition, when light passes through the small-hole array, diffraction is also likely to occur, resulting in bright spots or bright lines in the image received by the camera.
  • These problems affect the imaging quality of under-screen camera technology.
  • To solve the above problems, a motor (or another driving device) is added in this embodiment. The motor is controlled to drive the display screen and/or the camera to move so as to change the relative position of the display screen and the camera, and at least one image is captured before and after the relative position changes. The images captured before and after the change are fused so that their image information complements each other, which improves the imaging quality of the under-screen camera and the clarity of the images it captures.
  • The area of the display screen corresponding to the under-screen camera (lens) is the photographing window; the photographing window before the relative position of the display screen and the under-screen camera changes is taken as the first photographing window.
  • The photographing window after the relative position changes is taken as the second photographing window.
  • The under-screen camera is controlled to capture at least one image through the first photographing window in the display screen as the first image, and to capture at least one image through the second photographing window in the display screen as the second image.
  • Step S200 performing image fusion processing on the first image and the second image to generate a target image.
  • the two groups of images of the first image and the second image are fused according to a preset fusion rule to generate a target image.
  • the fused target image has more image details, thereby improving the image quality of the image captured by the camera under the screen.
  • the preset fusion rules can be any one of pixel-level image fusion, feature-level image fusion, and decision-level image fusion and their combination, or any of pixel-based, block-based, and region-based image fusion methods. One and its combination.
  • Before the relative position of the display screen and the under-screen camera changes, the under-screen camera is at one horizontal position on the camera imaging plane.
  • After the relative horizontal position of the display screen and the camera changes, the under-screen camera has moved horizontally on the imaging plane and captures a new group of images.
  • The two groups of images captured by the under-screen camera before and after the relative position change are then fused according to the set rule, for example cross-fused to form an image with more pixels, thereby improving the quality of the under-screen camera image.
  • performing image fusion processing on the first image and the second image includes: performing image fusion on the first image and the second image according to a preset image fusion rule, wherein , the image fusion rules include any one of pixel-level image fusion, feature-level image fusion, and decision-level image fusion, and combinations thereof.
  • pixel-level fusion is the process of directly fusing the pixel-based features of the source image, and finally generating a fused image.
  • This rule retains the most original information of the source image and has the highest fusion accuracy, but this rule has the disadvantages of the largest amount of information, high requirements for hardware equipment and registration, long calculation time and poor real-time processing.
  • Feature-level image fusion first performs simple preprocessing on the source images, and then extracts, selects, and fuses feature information such as corners, edges, and shapes of the source images to generate a fused image.
  • The objects fused under this rule are the feature information of the source images, and the requirements on image registration are not as strict as those of pixel-level fusion.
  • In decision-level image fusion, before fusion each source image has independently completed its own decision tasks such as classification and recognition.
  • The fusion process comprehensively analyzes each of these independent decision results to generate a globally optimal decision, and the fused image is formed accordingly.
  • This rule has the advantages of high flexibility, small communication volume, the best real-time performance, strong fault tolerance, and strong anti-interference ability.
  • However, it requires a separate decision judgment for each image, so too many tasks must be processed before fusion, which increases the preprocessing cost in the early stage.
  • the user can set the image fusion rules according to actual needs.
  • In this embodiment, a display screen motor is added for the display screen, or a motor is added for the camera, and the motor then drives the display screen or the camera to move, thereby changing the relative position of the photographing window in the display screen and the camera.
  • The change of the relative position changes the light intensity distribution, on the light-receiving surface of the camera, of the light passing through the display screen (structures such as pixels and circuits). Therefore, after the relative position changes, the affected area of the captured image changes.
  • By fusing captured images whose affected areas differ, the affected image area in one captured image is supplemented with the unaffected image areas of one (or more) other captured images, thereby eliminating the impact of light occlusion or light diffraction on the captured image and improving the image quality of the under-screen camera.
  • The under-screen camera is controlled to capture a group of images (one or more) through the first photographing window in the display screen as the first image. A driving instruction is then sent to the display screen motor or the camera motor to drive the display screen or the camera to move relatively and change the relative position of the photographing window of the display screen and the camera.
  • The movement includes relative horizontal movement, telescopic movement of a flexible screen, or flipping of the camera relative to the display screen. The under-screen camera is then controlled to capture another group of images, as the second image, through the changed second photographing window in the display screen.
  • the display screen and the camera can also be driven to move relative to each other at the same time.
  • the motor can also be used to complete the camera anti-shake mechanism while completing the above driving.
  • Controlling the motor to drive the display screen and/or the under-screen camera to move includes: controlling the display screen motor to drive the display screen, or the photographing window in the display screen, to move horizontally relative to the light-receiving surface.
  • 1 represents the display screen
  • 2 represents the light receiving surface of the camera
  • the arrow represents one direction of moving the display screen. Control the motor of the display screen to drive the entire display screen to move horizontally relative to the light-receiving surface of the camera under the screen in the direction of the arrow (or the opposite direction of the arrow), so that the photographing window in the display screen moves horizontally relative to the light-receiving surface.
  • the motor of the display screen can be controlled to directly drive the photographing window in the display screen (that is, the photographable area of the display screen) to move horizontally relative to the light receiving surface.
  • the motor of the display screen can be directly connected to the photographing window in the display screen, so that the motor of the display screen directly drives the photographing window to move.
  • the shooting window area in the display screen can be driven to expand and contract by the display screen motor, so that the display screen can move horizontally relative to the light receiving surface of the camera.
  • controlling the motor to drive the display screen and/or the camera under the screen to move includes:
  • the camera motor is controlled to drive the light receiving surface, the lens of the under-display camera or the module to which the under-display camera belongs to move horizontally relative to the display screen.
  • the light receiving surface of the camera under the screen may also be driven to move relative to the display screen.
  • the camera motor is controlled to drive the light-receiving surface or the components where the light-receiving surface is located, such as the lens of the under-screen camera, the camera sensor, or the module (camera module) to which the under-screen camera belongs, to move horizontally.
  • 1 represents the display screen
  • 2 represents the light-receiving surface of the camera. The light-receiving surface, the lens of the under-screen camera, or the module to which the under-screen camera belongs is driven to move horizontally relative to the display screen;
  • the position after the movement is shown in the figure.
  • the camera module (or the camera lens or the camera sensor) moves horizontally relative to the photographing window of the display screen.
  • Controlling the motor to drive the display screen and/or the under-screen camera to move includes: controlling the camera motor to drive the under-screen camera, or the module to which the under-screen camera belongs, to flip relative to the display screen.
  • the camera motor is controlled to drive the under-display camera to turn over, thereby changing the relative position of the display screen and the light receiving surface.
  • the under-screen camera can be driven to flip in two opposite directions, such as flipping 30 degrees, 45 degrees or 90 degrees, etc., and take images at each flip angle in each direction as the second image.
  • Embodiments of the present disclosure also provide a storage medium for readable storage. The storage medium stores a program, the program includes program instructions, and a processor executes the program instructions to implement any image processing method based on an under-screen camera provided by the embodiments of the present disclosure.
  • When loaded by the processor, the program may perform the following steps: controlling the under-screen camera to capture a first image and a second image through the first photographing window and the second photographing window in the display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes; and performing image fusion processing on the first image and the second image to generate a target image.
  • The storage medium may be an internal storage unit of the image processing device based on an under-screen camera described in the foregoing embodiments, for example a hard disk or a memory of the device.
  • The storage medium may also be an external storage device of the image processing device based on an under-screen camera, for example a plug-in hard disk provided on the device, a Smart Media Card (SMC), a Secure Digital Card (SD Card), a Flash Card, or the like.
  • Embodiments of the present disclosure also provide an image processing system based on an under-screen camera, which includes a display screen, an under-screen camera, a motor, and the above-mentioned image processing device based on an under-screen camera.
  • This embodiment discloses an image processing method, device, system, and storage medium based on an under-screen camera.
  • The under-screen camera is controlled to capture a first image and a second image through a first photographing window and a second photographing window in the display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position changes; image fusion processing is then performed on the first image and the second image to generate a target image.
  • the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be composed of several physical components. Components cooperate to execute.
  • Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit .
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • As is well known to those of ordinary skill in the art, the term storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • In addition, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Abstract

The present disclosure relates to the field of image processing, and in particular to an image processing method, device, system, and storage medium based on an under-screen camera. An under-screen camera is controlled to capture a first image and a second image through a first photographing window and a second photographing window in a display screen, respectively (S100), wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes; and image fusion processing is performed on the first image and the second image to generate a target image (S200).

Description

Image processing method, device, system, and storage medium based on under-screen camera
Cross-Reference to Related Application
This application claims priority to Chinese patent application CN202111016597.5, entitled "Image processing method, device, system, and storage medium based on under-screen camera" and filed on August 31, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of images, and in particular to an image processing method, device, system, and storage medium based on an under-screen camera.
Background
At present, under-screen camera technology has increased the screen-to-body ratio of mobile terminals (mobile phones, tablets, notebook computers, etc.): the camera is placed under the display screen, and shooting is performed through the camera under the display screen. Under-screen camera technology increases the transmittance of the display screen in the area overlapping the camera by changing the display circuit in that area, so that this part of the screen can be used for shooting while displaying normally. In existing under-screen cameras, the area where the display screen overlaps the camera contains light-shielding structures such as display pixels and the lines that drive them, so that light cannot pass through normally. As a result, the quality of images captured with existing under-screen camera technology is low.
Summary
The present disclosure provides an image processing method, device, system, and storage medium based on an under-screen camera. Two images, captured before and after the relative position of the display screen and the under-screen camera changes, are acquired, that is, images whose areas affected by the light-shielding structure differ; image fusion is then performed to eliminate the influence of the light-shielding structure on the captured image, improving the clarity and quality of images captured with under-screen camera technology.
In a first aspect, the present disclosure provides an image processing method based on an under-screen camera. The method includes: controlling an under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in a display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes; and performing image fusion processing on the first image and the second image to generate a target image.
In a second aspect, the present disclosure further provides an image processing device based on an under-screen camera, including a memory, a processor, a program stored in the memory and operable on the processor, and a data bus for connection and communication between the processor and the memory. When the program is executed by the processor, the image processing method based on an under-screen camera described above is implemented.
In a third aspect, the present disclosure further provides an image processing system based on an under-screen camera, including a display screen, an under-screen camera, a motor, and the image processing device based on an under-screen camera described above.
In a fourth aspect, the present disclosure further provides a storage medium for readable storage. The storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the image processing method based on an under-screen camera described above.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of an image processing device based on an under-screen camera provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an image processing method based on an under-screen camera provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an area where an under-screen camera and a display screen overlap, provided by an embodiment of the present disclosure;
FIG. 4 is a schematic cross-sectional view of an under-screen camera area provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of moving a display screen provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of the light-receiving surface of an under-screen camera after movement, provided by an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
The flowcharts shown in the drawings are merely illustrative; they need not include all of the contents and operations/steps, and need not be executed in the described order. For example, some operations/steps may be decomposed, combined, or partially merged, so the actual order of execution may change according to the actual situation.
It should be understood that the terms used in the specification of the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. As used in the specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" used in the specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that the specific embodiments described herein are only used to explain the present disclosure and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the description of the present disclosure and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.
Embodiments of the present disclosure provide an image processing method, device, system, and storage medium based on an under-screen camera. Two images, captured before and after the relative position of the display screen and the under-screen camera changes, are acquired, that is, images whose areas affected by the light-shielding structure differ; image fusion is then performed to eliminate the influence of the light-shielding structure on the captured image, improving the clarity and quality of images captured with under-screen camera technology.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of an image processing device based on an under-screen camera provided by an embodiment of the present disclosure. The image processing device 100 based on an under-screen camera may include a processor 101 and a memory 102, where the processor 101 and the memory 102 may be connected by a bus, for example any suitable bus such as an I2C (Inter-Integrated Circuit) bus.
The memory 102 may include a non-volatile storage medium and an internal memory. The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions which, when executed, cause the processor 101 to perform any image processing method based on an under-screen camera.
The processor 101 is used to provide computing and control capabilities and to support the operation of the entire image processing device 100 based on an under-screen camera.
The processor 101 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In an embodiment, the processor 101 is configured to run a computer program stored in the memory 102 and, when executing the computer program, to implement the following steps: controlling an under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in a display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes; and performing image fusion processing on the first image and the second image to generate a target image.
In an embodiment, when executing the computer program to implement the step of controlling the under-screen camera to capture the first image and the second image through the first photographing window and the second photographing window in the display screen, respectively, the processor 101 is configured to: determine the first photographing window based on the photographing window in the display screen before the movement, and control the under-screen camera to capture the first image through the first photographing window; control a motor to drive the display screen and/or the under-screen camera to move, so that the display screen moves relative to the light-receiving surface of the under-screen camera; and determine the second photographing window based on the photographing window in the display screen after the movement, and control the under-screen camera to capture the second image through the second photographing window.
In an embodiment, when executing the computer program to implement controlling the motor to drive the display screen and/or the under-screen camera to move, the processor 101 is configured to: control a display screen motor to drive the display screen, or the photographing window in the display screen, to move horizontally relative to the light-receiving surface.
In an embodiment, when executing the computer program to implement controlling the motor to drive the display screen and/or the under-screen camera to move, the processor 101 is configured to: control a camera motor to drive the light-receiving surface, the lens of the under-screen camera, or the module to which the under-screen camera belongs to move horizontally relative to the display screen.
In an embodiment, when executing the computer program to implement controlling the motor to drive the display screen and/or the under-screen camera to move, the processor 101 is configured to: control a camera motor to drive the under-screen camera, or the module to which the under-screen camera belongs, to flip relative to the display screen.
In an embodiment, when executing the computer program to implement performing image fusion processing on the first image and the second image, the processor 101 is configured to: fuse the first image and the second image according to a preset image fusion rule, where the image fusion rule includes any one of pixel-level image fusion, feature-level image fusion, and decision-level image fusion, and combinations thereof.
Some embodiments of the present disclosure are described in detail below with reference to the drawings. The following embodiments, and the features in the embodiments, may be combined with each other where no conflict arises.
As shown in FIG. 2, FIG. 2 is a schematic flowchart of an image processing method based on an under-screen camera provided by an embodiment of the present disclosure. The image processing method based on an under-screen camera is applied to an image processing system based on an under-screen camera, which includes a display screen, an under-screen camera, a motor, and an image processing device based on an under-screen camera. The under-screen camera includes one or more cameras. The motor includes, but is not limited to, a display screen motor and a camera motor. The image processing device based on an under-screen camera includes a memory, a processor, a program stored in the memory and operable on the processor, and a data bus for connection and communication between the processor and the memory. The memory stores the pictures or images captured by the camera or to be displayed on the display screen. The processor controls the display screen, the under-screen camera, the motor, and the memory through the communicable data bus.
Step S100: controlling the under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in the display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes.
In under-screen camera technology, which realizes display at the position of the front camera, display pixels (an RGB pixel array) and the lines that drive the pixels must be provided in order to ensure the shooting effect of the front camera and the normal display of the corresponding area. As shown in FIG. 3, the area where the under-screen camera and the display screen overlap is similar to a window screen: the black parts are opaque or low-transmittance areas (where pixels and lines are placed), and the white parts are light-transmitting areas. In FIG. 4, 1 denotes the display screen and 2 denotes the light-receiving surface of the camera; only light passing through the parts without pixels and circuits can reach the light-receiving surface normally, while the pixels and circuits block part of the light. In addition, diffraction is likely to occur when light passes through the small-hole array, producing bright spots or bright lines in the image received by the camera. All of these problems affect the imaging quality of under-screen camera technology.
It should be noted that the positions of the screen and the camera of current terminals are fixed; therefore, capturing images several times with the camera of a current terminal and fusing them cannot improve the imaging quality.
To solve the above problems, a motor (or another driving device) is added in this embodiment. The motor is controlled to drive the display screen and/or the camera to move so as to change the relative position of the display screen and the camera, and at least one image is captured before and after the relative position changes. The images captured before and after the change are fused so that their image information complements each other, which improves the imaging quality of the under-screen camera and the clarity of the images it captures.
Specifically, the area of the display screen corresponding to the under-screen camera (lens) is the photographing window. The photographing window before the relative position of the display screen and the under-screen camera changes is taken as the first photographing window, and the photographing window after the relative position changes is taken as the second photographing window. The under-screen camera is controlled to capture at least one image through the first photographing window in the display screen as the first image, and to capture at least one image through the second photographing window in the display screen as the second image.
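As an informal illustration of the control flow just described (not part of the patent disclosure), the following Python sketch assumes hypothetical driver objects for the under-screen camera and the display screen motor; the method names camera.capture() and display_motor.move_horizontal() are placeholders chosen for this example.

    # Illustrative sketch only: `camera` and `display_motor` stand for hypothetical
    # driver objects for the under-screen camera and the display screen motor;
    # their method names are assumptions made for this example.
    import numpy as np

    def capture_target_image(camera, display_motor, shift_px=1, frames_per_window=1,
                             fuse=lambda a, b: (a + b) / 2.0):
        """Capture one group of frames per photographing window and fuse them."""
        # First photographing window: display screen and camera in the initial pose.
        first = np.mean([camera.capture() for _ in range(frames_per_window)], axis=0)

        # Change the relative position, e.g. shift the display screen horizontally.
        display_motor.move_horizontal(shift_px)

        # Second photographing window: same scene, differently occluded regions.
        second = np.mean([camera.capture() for _ in range(frames_per_window)], axis=0)

        # Restore the original pose so normal display is unaffected afterwards.
        display_motor.move_horizontal(-shift_px)

        # Any preset fusion rule can be plugged in here; a plain average is the default.
        return fuse(first, second)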
Step S200: performing image fusion processing on the first image and the second image to generate a target image.
Specifically, the two groups of images, the first image and the second image, are fused according to a preset fusion rule to generate a target image. The fused target image contains more image detail, which improves the quality of the images captured by the under-screen camera. The preset fusion rule may be any one of pixel-level image fusion, feature-level image fusion, and decision-level image fusion, and combinations thereof, or any one of pixel-based, block-based, and region-based image fusion methods, and combinations thereof.
In this way, before the relative position of the display screen and the under-screen camera changes, the under-screen camera is at one horizontal position on the camera imaging plane. After the relative horizontal position of the display screen and the camera changes, the under-screen camera has moved horizontally on the imaging plane and captures a new group of images. The two groups of images captured by the under-screen camera before and after the relative position change are then fused according to the set rule, for example cross-fused to form an image with more pixels, thereby improving the quality of the under-screen camera image.
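A minimal sketch of the "cross fusion" idea mentioned above is given below, assuming (purely for illustration) that the second group of images is captured after a horizontal shift of roughly half a pixel pitch, so that the two captures can be interleaved column by column into an image with more pixels; the disclosure itself does not fix the shift amount or the fusion rule.

    # Minimal sketch of "cross fusion": two captures taken before and after a
    # horizontal shift of roughly half a pixel pitch (an assumption made for this
    # illustration) are interleaved column by column into an image with more pixels.
    import numpy as np

    def cross_fuse_columns(first, second):
        """Interleave the columns of two equally sized captures (H x W [x C])."""
        assert first.shape == second.shape
        h, w = first.shape[:2]
        fused = np.empty((h, 2 * w) + first.shape[2:], dtype=first.dtype)
        fused[:, 0::2] = first   # samples from the first photographing window
        fused[:, 1::2] = second  # samples taken after the horizontal shift
        return fused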
In some embodiments, performing image fusion processing on the first image and the second image includes: fusing the first image and the second image according to a preset image fusion rule, where the image fusion rule includes any one of pixel-level image fusion, feature-level image fusion, and decision-level image fusion, and combinations thereof.
In this embodiment, pixel-level fusion is the process of directly fusing the pixel-based features of the source images and finally generating a fused image. This rule retains the most original information of the source images and has the highest fusion accuracy, but it also has the disadvantages of the largest amount of information, high requirements on hardware and registration, long computation time, and poor real-time processing. Feature-level image fusion first performs simple preprocessing on the source images, and then extracts, selects, and fuses feature information such as corners, edges, and shapes of the source images to generate a fused image. The objects fused under this rule are the feature information of the source images, and the requirements on image registration are not as strict as those of pixel-level fusion; the detail information of the images is compressed, which enhances real-time processing capability and provides, as far as possible, the feature information needed for decision analysis. Compared with the pixel-level image fusion rule, the accuracy of the feature-level image fusion rule is moderate. In decision-level image fusion, before fusion each source image has independently completed its own decision tasks such as classification and recognition; the fusion process comprehensively analyzes each independent decision result to generate a globally optimal decision, and the fused image is formed accordingly. This rule has the advantages of high flexibility, small communication volume, the best real-time performance, strong fault tolerance, and strong anti-interference ability, but it requires a separate decision judgment for each image, so too many tasks must be processed before fusion, which increases the preprocessing cost in the early stage. Users can therefore set the image fusion rule according to actual needs.
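Because pixel-level fusion depends on accurate registration of the two captures, the following sketch shows one conventional way to align the second capture to the first using feature matching. OpenCV is an illustrative library choice and is not prescribed by the disclosure; the feature count and RANSAC threshold are arbitrary example values.

    # Feature-based registration of the second capture to the first, prior to fusion.
    import cv2
    import numpy as np

    def register_second_to_first(first_gray, second_gray):
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(first_gray, None)
        kp2, des2 = orb.detectAndCompute(second_gray, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

        src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        h_mat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        height, width = first_gray.shape[:2]
        # Warp the second capture into the coordinate frame of the first one.
        return cv2.warpPerspective(second_gray, h_mat, (width, height))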
In some embodiments, controlling the under-screen camera to capture the first image and the second image through the first photographing window and the second photographing window in the display screen, respectively, includes: determining the first photographing window based on the photographing window in the display screen before the movement, and controlling the under-screen camera to capture the first image through the first photographing window; controlling a motor to drive the display screen and/or the under-screen camera to move, so that the display screen moves relative to the light-receiving surface of the under-screen camera; and determining the second photographing window based on the photographing window in the display screen after the movement, and controlling the under-screen camera to capture the second image through the second photographing window.
In this embodiment, a display screen motor is added for the display screen, or a motor is added for the camera, and the motor then drives the display screen or the camera to move, thereby changing the relative position of the photographing window in the display screen and the camera. The change of the relative position changes the light intensity distribution, on the light-receiving surface of the camera, of the light passing through the display screen (structures such as pixels and circuits). Therefore, after the relative position changes, the area of the captured image that is affected changes. By fusing captured images whose affected areas differ, the affected image area in one captured image is supplemented with the unaffected image areas of one (or more) other captured images, thereby eliminating the influence of problems such as light occlusion and light diffraction on the captured image and improving the image quality of the under-screen camera.
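A simple sketch of the supplementation described above is given below, assuming the affected pixels of the first capture are known in advance, for example from a calibration image of the pixel/line pattern; the mask and function names are illustrative, not taken from the disclosure.

    # Sketch of mask-guided pixel-level supplementation, assuming the affected
    # pixels of the first capture are known in advance (e.g. from a calibration
    # image of the pixel/line pattern). Names are illustrative only.
    import numpy as np

    def fill_affected(first, second_registered, affected_mask):
        """Replace affected pixels of `first` with pixels from the registered second capture."""
        target = first.astype(np.float32).copy()
        second = second_registered.astype(np.float32)
        # Where the first capture is affected, take the second capture instead;
        # elsewhere keep the first capture unchanged.
        target[affected_mask] = second[affected_mask]
        return np.clip(target, 0, 255).astype(first.dtype)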
In some embodiments, the under-screen camera is controlled to capture a group of images (one or more) through the first photographing window in the display screen as the first image. A driving instruction is then sent to the display screen motor or the camera motor to drive the display screen or the camera to move relatively and change the relative position of the photographing window of the display screen and the camera, where the movement includes relative horizontal movement, telescopic movement of a flexible screen, or flipping of the camera relative to the display screen. The under-screen camera is then controlled to capture another group of images, as the second image, through the changed second photographing window in the display screen.
In specific embodiments, the display screen and the camera may also be driven to move relatively at the same time.
In further embodiments, the motor may also be used to implement a camera anti-shake mechanism in addition to the above driving.
In some embodiments, controlling the motor to drive the display screen and/or the under-screen camera to move includes: controlling a display screen motor to drive the display screen, or the photographing window in the display screen, to move horizontally relative to the light-receiving surface.
In this embodiment, as shown in FIG. 5, 1 denotes the display screen, 2 denotes the light-receiving surface of the camera, and the arrow indicates one direction in which the display screen may move. The display screen motor is controlled to drive the entire display screen to move horizontally, in the direction of the arrow (or the opposite direction), relative to the light-receiving surface of the under-screen camera, so that the photographing window in the display screen moves horizontally relative to the light-receiving surface.
In specific embodiments, in order to avoid affecting the display effect, the display screen motor may be controlled to directly drive the photographing window in the display screen (that is, the photographable area of the display screen) to move horizontally relative to the light-receiving surface. The display screen motor may be directly connected to the photographing window in the display screen, so that the motor directly drives the photographing window to move.
In further embodiments, if the display screen is a flexible screen, the display screen motor may drive the photographing window area in the display screen to expand and contract, so that the display screen moves horizontally relative to the light-receiving surface of the camera.
In some embodiments, controlling the motor to drive the display screen and/or the under-screen camera to move includes:
controlling a camera motor to drive the light-receiving surface, the lens of the under-screen camera, or the module to which the under-screen camera belongs to move horizontally relative to the display screen.
In this embodiment, besides driving the photographing window in the display screen to move, the light-receiving surface of the under-screen camera may also be driven to move relative to the display screen. Specifically, the camera motor is controlled to drive the light-receiving surface, or the component where the light-receiving surface is located, such as the lens of the under-screen camera, the camera sensor, or the module to which the under-screen camera belongs (the camera module), to move horizontally. As shown in FIG. 6, 1 denotes the display screen and 2 denotes the light-receiving surface of the camera; the position after the light-receiving surface, the lens of the under-screen camera, or the module to which the under-screen camera belongs is driven to move horizontally relative to the display screen is shown in the figure. In this way, by changing the relative position of the light-receiving surface and the display screen, the light intensity distribution, on the light-receiving surface of the camera, of the light passing through the display screen (structures such as pixels and circuits) is changed.
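As a simplified first-order model of why the movement changes the light intensity distribution (introduced here only for explanation and not taken from the disclosure): if T(x, y) denotes the transmittance of the shading structure over the photographing window and S(x, y) the irradiance the scene would produce on the light-receiving surface without the screen, the two captures can be approximated as I1(x, y) ≈ T(x, y) · S(x, y) and I2(x, y) ≈ T(x − dx, y − dy) · S(x, y), where (dx, dy) is the relative horizontal displacement. Because the low-transmittance regions of T fall on different scene points in the two captures, a pixel that is strongly attenuated in one capture is generally better exposed in the other, which is what the fusion step exploits.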
The camera module (or the camera lens, or the camera sensor) moves horizontally relative to the photographing window of the display screen.
In some embodiments, controlling the motor to drive the display screen and/or the under-screen camera to move includes: controlling a camera motor to drive the under-screen camera, or the module to which the under-screen camera belongs, to flip relative to the display screen.
In this embodiment, the camera motor is controlled to drive the under-screen camera to flip, thereby changing the relative position of the display screen and the light-receiving surface. Specifically, the under-screen camera may be driven to flip in two opposite directions, for example by 30 degrees, 45 degrees, or 90 degrees, and images are captured at each flip angle in each direction as the second image.
Embodiments of the present disclosure further provide a storage medium for readable storage. The storage medium stores a program, the program includes program instructions, and a processor executes the program instructions to implement any image processing method based on an under-screen camera provided by the embodiments of the present disclosure.
For example, when loaded by the processor, the program may perform the following steps: controlling an under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in a display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position of the display screen and the under-screen camera changes; and performing image fusion processing on the first image and the second image to generate a target image.
The storage medium may be an internal storage unit of the image processing device based on an under-screen camera described in the foregoing embodiments, for example a hard disk or a memory of the device. The storage medium may also be an external storage device of the image processing device based on an under-screen camera, for example a plug-in hard disk provided on the device, a Smart Media Card (SMC), a Secure Digital Card (SD Card), a Flash Card, or the like.
Embodiments of the present disclosure further provide an image processing system based on an under-screen camera, which includes a display screen, an under-screen camera, a motor, and the image processing device based on an under-screen camera described above.
This embodiment discloses an image processing method, device, system, and storage medium based on an under-screen camera. An under-screen camera is controlled to capture a first image and a second image through a first photographing window and a second photographing window in a display screen, respectively, wherein the first photographing window is the photographing window before the relative position of the display screen and the under-screen camera changes, and the second photographing window is the photographing window after the relative position changes; image fusion processing is then performed on the first image and the second image to generate a target image. In this way, two images are acquired before and after the relative position of the display screen and the under-screen camera changes, that is, images whose areas affected by the light-shielding structure differ, and image fusion is then performed to eliminate the influence of the light-shielding structure on the captured image, improving the clarity and quality of images captured with under-screen camera technology.
Those of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and devices, may be implemented as software, firmware, hardware, and appropriate combinations thereof.
In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The preferred embodiments of the present disclosure have been described above with reference to the accompanying drawings, which does not thereby limit the scope of rights of the present disclosure. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and essence of the present disclosure shall fall within the scope of rights of the present disclosure.

Claims (10)

  1. An image processing method based on an under-screen camera, wherein the method comprises:
    controlling an under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in a display screen, respectively, wherein the first photographing window is a photographing window before a relative position of the display screen and the under-screen camera changes, and the second photographing window is a photographing window after the relative position of the display screen and the under-screen camera changes;
    performing image fusion processing on the first image and the second image to generate a target image.
  2. The image processing method based on an under-screen camera according to claim 1, wherein the controlling an under-screen camera to capture a first image and a second image through a first photographing window and a second photographing window in a display screen, respectively, comprises:
    determining the first photographing window based on the photographing window in the display screen before movement, and controlling the under-screen camera to capture the first image through the first photographing window;
    controlling a motor to drive at least one of the display screen and the under-screen camera to move, so that the display screen moves relative to a light-receiving surface of the under-screen camera;
    determining the second photographing window based on the photographing window in the display screen after the movement, and controlling the under-screen camera to capture the second image through the second photographing window.
  3. The image processing method based on an under-screen camera according to claim 2, wherein the controlling a motor to drive at least one of the display screen and the under-screen camera to move comprises:
    controlling a display screen motor to drive the display screen, or the photographing window in the display screen, to move horizontally relative to the light-receiving surface.
  4. The image processing method based on an under-screen camera according to claim 2, wherein the controlling a motor to drive at least one of the display screen and the under-screen camera to move comprises:
    controlling a camera motor to drive the light-receiving surface, a lens of the under-screen camera, or a module to which the under-screen camera belongs to move horizontally relative to the display screen.
  5. The image processing method based on an under-screen camera according to claim 2, wherein the controlling a motor to drive at least one of the display screen and the under-screen camera to move comprises:
    controlling a camera motor to drive the under-screen camera, or a module to which the under-screen camera belongs, to flip relative to the display screen.
  6. The image processing method based on an under-screen camera according to claim 1, wherein the performing image fusion processing on the first image and the second image comprises:
    fusing the first image and the second image according to a preset image fusion rule, wherein the image fusion rule comprises any one of pixel-level image fusion, feature-level image fusion, and decision-level image fusion, and combinations thereof.
  7. The image processing method based on an under-screen camera according to any one of claims 1 to 6, wherein the under-screen camera comprises one or more cameras.
  8. An image processing device based on an under-screen camera, wherein the device comprises a memory, a processor, a program stored in the memory and operable on the processor, and a data bus for connection and communication between the processor and the memory, and when the program is executed by the processor, the steps of the image processing method based on an under-screen camera according to any one of claims 1 to 7 are implemented.
  9. An image processing system based on an under-screen camera, wherein the system comprises a display screen, an under-screen camera, a motor, and the image processing device based on an under-screen camera according to claim 8.
  10. A storage medium for readable storage, wherein the storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the image processing method based on an under-screen camera according to any one of claims 1 to 7.
PCT/CN2022/102973 2021-08-31 2022-06-30 Image processing method, device, system, and storage medium based on under-screen camera WO2023029715A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111016597.5 2021-08-31
CN202111016597.5A CN115734086A (zh) 2021-08-31 Image processing method, device, system, and storage medium based on under-screen camera

Publications (1)

Publication Number Publication Date
WO2023029715A1 true WO2023029715A1 (zh) 2023-03-09

Family

ID=85291738

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/102973 WO2023029715A1 (zh) 2021-08-31 2022-06-30 Image processing method, device, system, and storage medium based on under-screen camera

Country Status (2)

Country Link
CN (1) CN115734086A (zh)
WO (1) WO2023029715A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117354409A (zh) * 2023-12-04 2024-01-05 深圳市华维诺电子有限公司 Under-screen camera assembly and mobile phone

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005051620A (ja) * 2003-07-30 2005-02-24 Aisin Seiki Co Ltd Vehicle periphery confirmation device
CN110047380A (zh) * 2019-03-26 2019-07-23 武汉华星光电半导体显示技术有限公司 Display panel and display device
CN111031253A (zh) * 2020-01-13 2020-04-17 维沃移动通信有限公司 Photographing method and electronic device
CN111314610A (zh) * 2020-02-26 2020-06-19 维沃移动通信有限公司 Control method and electronic device
CN111405117A (zh) * 2020-03-24 2020-07-10 维沃移动通信有限公司 Control method and electronic device
CN111953879A (zh) * 2020-08-31 2020-11-17 京东方科技集团股份有限公司 Under-screen camera module, electronic device, and photo shooting method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005051620A (ja) * 2003-07-30 2005-02-24 Aisin Seiki Co Ltd Vehicle periphery confirmation device
CN110047380A (zh) * 2019-03-26 2019-07-23 武汉华星光电半导体显示技术有限公司 Display panel and display device
CN111031253A (zh) * 2020-01-13 2020-04-17 维沃移动通信有限公司 Photographing method and electronic device
CN111314610A (zh) * 2020-02-26 2020-06-19 维沃移动通信有限公司 Control method and electronic device
CN111405117A (zh) * 2020-03-24 2020-07-10 维沃移动通信有限公司 Control method and electronic device
CN111953879A (zh) * 2020-08-31 2020-11-17 京东方科技集团股份有限公司 Under-screen camera module, electronic device, and photo shooting method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117354409A (zh) * 2023-12-04 2024-01-05 深圳市华维诺电子有限公司 Under-screen camera assembly and mobile phone
CN117354409B (zh) * 2023-12-04 2024-02-02 深圳市华维诺电子有限公司 Under-screen camera assembly and mobile phone

Also Published As

Publication number Publication date
CN115734086A (zh) 2023-03-03

Similar Documents

Publication Publication Date Title
WO2018201809A1 (zh) Image processing apparatus and method based on dual cameras
EP3627821B1 (en) Focusing method and apparatus for realizing clear human face, and computer device
US9549126B2 (en) Digital photographing apparatus and control method thereof
WO2019056527A1 (zh) Photographing method and apparatus
US11516391B2 (en) Multiple camera system for wide angle imaging
WO2018058934A1 (zh) Photographing method, photographing apparatus, and storage medium
US20220222830A1 (en) Subject detecting method and device, electronic device, and non-transitory computer-readable storage medium
JP2008092548A (ja) Imaging device, photographing method, and program
CN104823219A (zh) Annular view of panoramic images
US20170111574A1 (en) Imaging apparatus and imaging method
US20150124131A1 (en) Camera and method of controlling operation of same
US20130120635A1 (en) Subject detecting method and apparatus, and digital photographing apparatus
CN110290299B (zh) Imaging method and apparatus, storage medium, and electronic device
CN109690568A (zh) Processing method and mobile device
WO2023029715A1 (zh) Image processing method, device, system, and storage medium based on under-screen camera
WO2022022633A1 (zh) Image display method and apparatus, and electronic device
US9088720B2 (en) Apparatus and method of displaying camera view area in portable terminal
CN115866394A (zh) Method for switching between a first lens and a second lens, and electronic device
CN103108115A (zh) Photographing device and photographing method
CN105210362B (zh) Image adjustment device, image adjustment method, and image capture device
US11006041B1 (en) Multiple camera system for wide angle imaging
US9900503B1 (en) Methods to automatically fix flash reflection at capture time
US20230033956A1 (en) Estimating depth based on iris size
CN112272267A (zh) Photographing control method, photographing control apparatus, and electronic device
US20230025380A1 (en) Multiple camera system