WO2018161759A1 - Backlight image processing method, backlight image processing device, and electronic device - Google Patents

Backlight image processing method, backlight image processing device, and electronic device Download PDF

Info

Publication number
WO2018161759A1
WO2018161759A1 (PCT/CN2018/075492, CN2018075492W)
Authority
WO
WIPO (PCT)
Prior art keywords
scene
processing
image
main image
depth
Prior art date
Application number
PCT/CN2018/075492
Other languages
English (en)
French (fr)
Inventor
孙剑波
Original Assignee
广东欧珀移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司 filed Critical 广东欧珀移动通信有限公司
Priority to EP18763396.1A priority Critical patent/EP3591615B1/en
Publication of WO2018161759A1 publication Critical patent/WO2018161759A1/zh
Priority to US16/563,370 priority patent/US11295421B2/en

Links

Images

Classifications

    • G06T5/94
    • G06T5/92
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing

Definitions

  • the present invention relates to imaging technology, and in particular to a backlight image processing method, a backlight image processing device, and an electronic device.
  • a backlight-effect image is produced by image processing so that the image appears to have been shot against the light.
  • when the subject of the image has not been, or cannot be, determined, the subject of the resulting backlight-effect image tends not to stand out, and the visual effect is poor.
  • the present invention aims to solve at least one of the technical problems existing in the prior art.
  • embodiments of the present invention provide a backlight image processing method, a backlight image processing device, and an electronic device.
  • the scene main image is processed according to the subject to render the effect of the subject being backlit.
  • a depth-based backlight image processing apparatus is provided for processing scene data collected by an imaging device, the scene data comprising a scene main image, the backlight image processing apparatus comprising a first processing module and a second processing module.
  • the first processing module is configured to process the scene data to obtain a subject of the scene main image.
  • the second processing module is configured to process the scene main image according to the subject to render the effect of the subject being backlit.
  • An electronic device includes an imaging device and the backlight image processing device.
  • the backlight image processing method, the backlight image processing device, and the electronic device of the present invention process the scene main image according to depth information to obtain a backlight-effect image with a better visual effect.
  • FIG. 1 is a flow chart showing a method of processing a backlight image according to an embodiment of the present invention.
  • FIG. 2 is a schematic plan view of an electronic device according to an embodiment of the present invention.
  • FIG. 3 is another schematic flowchart of a backlight image processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of functional modules of a first processing module according to an embodiment of the present invention.
  • FIG. 5 is a schematic flow chart of still another method of processing a backlight image according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of functional modules of a first processing sub-module according to an embodiment of the present invention.
  • FIG. 7 is still another schematic flowchart of a method for processing a backlight image according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of functional modules of a processing unit according to an embodiment of the present invention.
  • FIG. 9 is a schematic flow chart of still another method of processing a backlight image according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another functional module of a processing unit according to an embodiment of the present invention.
  • FIG. 11 is a schematic flow chart of still another method of processing a backlight image according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of functional modules of an acquisition unit according to an embodiment of the present invention.
  • FIG. 13 is still another flow chart of the backlight image processing method according to an embodiment of the present invention.
  • FIG. 14 is a schematic flow chart of still another method of processing a backlight image according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of the functional modules of the backlight image processing apparatus according to an embodiment of the present invention.
  • FIG. 16 is a schematic flowchart of still another backlight image processing method according to an embodiment of the present invention.
  • FIG. 17 is a schematic diagram of another set of functional modules of the backlight image processing apparatus according to an embodiment of the present invention.
  • Reference numerals of main elements: electronic device 1000, backlight image processing apparatus 100, first processing module 10, first processing sub-module 12, processing unit 122, first processing sub-unit 1222, second processing sub-unit 1224, third processing sub-unit 1226, fourth processing sub-unit 1228, acquisition unit 124, fifth processing sub-unit 1242, finding sub-unit 1244, judging sub-module 14, determining sub-module 16, second processing module 20, second processing sub-module 22, third processing sub-module 24, fourth processing sub-module 26, output module 30, determining module 40, imaging device 500.
  • the depth-based backlight image processing method of the embodiment of the present invention may be used to process scene data collected by the imaging device 500.
  • the scene data includes the scene main image.
  • the backlight image processing method includes the following steps:
  • S10: Process the scene data to obtain a subject of the scene main image; and
  • S20: Process the scene main image according to the subject to render the effect of the subject being backlit.
  • the depth-based backlit image processing apparatus 100 of the embodiment of the present invention may be used to process scene data collected by the imaging apparatus 500.
  • the scene data includes the scene main image.
  • the backlight image processing apparatus 100 includes a first processing module 10 and a second processing module 20.
  • the first processing module 10 is configured to process the scene data to obtain a subject of the scene main image.
  • the second processing module 20 is configured to process the scene main image according to the main body to draw an effect of backlighting the main body.
  • the backlight image processing method of the embodiment of the present invention can be implemented by the backlight image processing apparatus 100 of the embodiment of the present invention, wherein step S10 can be implemented by the first processing module 10 and step S20 by the second processing module 20.
  • the backlight image processing apparatus 100 of the embodiment of the present invention may be applied to the electronic device 1000 of the embodiment of the present invention; in other words, the electronic device 1000 may include the backlight image processing apparatus 100. Furthermore, the electronic device 1000 further includes an imaging device 500 electrically connected to the backlight image processing apparatus 100.
  • the backlight image processing method, the backlight image processing apparatus 100, and the electronic device 1000 of the embodiment of the present invention process the scene main image according to depth information to obtain a backlight-effect image with a better visual effect.
  • the electronic device 1000 includes a mobile phone, a tablet, a smart watch, a laptop, a smart bracelet, smart glasses, or a smart helmet. In an embodiment of the invention, the electronic device 1000 is a mobile phone.
  • imaging device 500 includes a front camera and/or a rear camera, without limitation. In an embodiment of the invention, the imaging device 500 is a front camera.
  • step S10 includes the following steps: S12, processing the scene data to acquire the foreground portion of the scene main image; S14, judging whether the area ratio of the foreground portion to the scene main image falls within a predetermined range; and S16, determining the foreground portion to be the subject when the area ratio falls within the predetermined range.
  • the first processing module 10 includes a first processing sub-module 12, a determining sub-module 14, and a determining sub-module 16.
  • the first processing sub-module 12 is configured to process the scene data to obtain a foreground portion of the scene main image.
  • the judging sub-module 14 is configured to judge whether the area ratio of the foreground portion to the scene main image falls within a predetermined range.
  • the determination sub-module 16 is configured to determine that the foreground portion is the main body when the area ratio falls within a predetermined range.
  • step S12 can be implemented by the first processing sub-module 12
  • step S14 can be implemented by the judging sub-module 14
  • step S16 can be implemented by the determining sub-module 16.
  • it can be understood that when the foreground portion is too small or too large, the backlight-effect image produced by the backlight image processing method is unsatisfactory; for example, when the foreground portion is relatively small, the subject of the backlight-effect image is not prominent enough. Therefore, the scene main image is judged to have a subject only when the foreground portion is of a suitable size.
  • the predetermined range is 15-60.
  • step S12 includes the following steps:
  • S122 Process the scene data to obtain depth information of the scene main image
  • S124 Acquire a foreground part of the main image of the scene according to the depth information.
  • the first processing sub-module 12 includes a processing unit 122 and an acquisition unit 124.
  • the processing unit 122 is configured to process the scene data to obtain depth information of the scene main image.
  • the obtaining unit 124 is configured to acquire a foreground portion of the scene main image according to the depth information.
  • step S122 can be implemented by the processing unit 122
  • step S124 can be implemented by the obtaining unit 124.
  • the foreground portion of the scene main image can be acquired from the depth information.
  • the scene data includes a depth image corresponding to the scene main image
  • step S122 includes the following steps:
  • S1222: Process the depth image to acquire depth data of the scene main image; and
  • S1224: Process the depth data to obtain the depth information.
  • processing unit 122 includes a first processing sub-unit 1222 and a second processing sub-unit 1224.
  • the first processing sub-unit 1222 is configured to process the depth image to obtain depth data of the scene main image.
  • the second processing sub-unit 1224 is configured to process the depth data to obtain depth information.
  • step S1222 can be implemented by the first processing sub-unit 1222
  • step S1224 can be implemented by the second processing sub-unit 1224.
  • the depth information of the scene main image can be quickly obtained by using the depth image.
  • the scene main image is an RGB color image
  • the depth image contains depth information of each person or object in the scene. Since the color information of the scene main image and the depth information of the depth image have a one-to-one correspondence, the depth information of the scene main image can be obtained.
  • the manner of acquiring the depth image corresponding to the main image of the scene includes acquiring the depth image by using structured light depth ranging and acquiring the depth image by using a time of flight (TOF) depth camera.
  • When the depth image is acquired using structured-light depth ranging, the imaging device 500 includes a camera and a projector.
  • structured-light depth ranging uses the projector to project a light structure of a certain pattern onto the surface of the object, forming on the surface a three-dimensional image of light stripes modulated by the shape of the measured object.
  • the three-dimensional stripe image is detected by the camera to obtain a two-dimensional distorted stripe image.
  • the degree of distortion of the stripes depends on the relative position between the projector and the camera and on the surface profile or height of the object.
  • the displacement along a stripe is proportional to the height of the object surface, a kink indicates a change of plane, and a discontinuity shows a physical gap in the surface.
  • when the relative position between the projector and the camera is fixed, the three-dimensional contour of the object surface can be reproduced from the coordinates of the distorted two-dimensional stripe image, so that the depth information can be acquired.
  • Structured-light depth ranging has high resolution and measurement accuracy.
  • When the depth image is acquired using a time-of-flight (TOF) depth camera, the imaging device 500 includes a TOF depth camera.
  • the TOF depth camera uses a sensor to record the phase change of modulated infrared light emitted from a light-emitting unit to the object and reflected back from the object; within one wavelength, the depth distance of the entire scene can be obtained in real time according to the speed of light.
  • the TOF depth camera is unaffected by the grayscale and surface features of the photographed object when computing depth information, can compute depth information quickly, and offers high real-time performance.
  • the scene data includes a scene sub-image corresponding to the scene main image
  • step S122 includes the following steps:
  • S1226: Process the scene main image and the scene sub-image to obtain depth data of the scene main image; and
  • S1228 Process the depth data to obtain depth information.
  • processing unit 122 includes a third processing sub-unit 1226 and a fourth processing sub-unit 1228.
  • the third processing sub-unit 1226 is configured to process the scene main image and the scene sub-image to obtain depth data of the scene main image.
  • the fourth processing sub-unit 1228 is configured to process the depth data to obtain depth information.
  • step S1226 can be implemented by the third processing sub-unit 1226
  • step S1228 can be implemented by the fourth processing sub-unit 1228.
  • the depth information of the scene main image can be acquired by processing the scene main image and the scene sub-image.
  • imaging device 500 includes a primary camera and a secondary camera.
  • the depth information can be acquired by the binocular stereo vision ranging method, and the scene data includes the scene main image and the scene sub-image.
  • the main image of the scene is captured by the main camera, and the sub-image of the scene is captured by the sub-camera.
  • Binocular stereo vision ranging uses two identical cameras to image the same subject from different positions to obtain a stereo image pair of the subject, and then matches the corresponding image points of the stereo image pair by an algorithm to compute the parallax.
  • finally, depth information is recovered by triangulation. In this way, the depth information of the scene main image can be obtained by matching the stereo image pair formed by the scene main image and the scene sub-image.
  • step S124 includes the following steps:
  • S1242: Obtain the foremost point of the scene main image according to the depth information; and
  • S1244: Find the region adjacent to the foremost point and continuously varying in depth as the foreground portion.
  • the obtaining unit 124 includes a fifth processing sub-unit 1242 and a finding sub-unit 1244.
  • the fifth processing sub-unit 1242 is configured to obtain a foremost point of the scene main image according to the depth information.
  • the finding sub-unit 1244 is used to find the region adjacent to the foremost point and continuously varying in depth as the foreground portion.
  • step S1242 can be implemented by the fifth processing sub-unit 1242
  • step S1244 can be implemented by the finding sub-unit 1244.
  • in this way, the physically connected foreground portion of the scene main image can be obtained.
  • in a real scene, the foreground is usually connected together. Taking the physically connected foreground portion as the subject, the relationship of the foreground portion can be grasped intuitively.
  • the foremost point of the scene main image is first obtained according to the depth information. The foremost point is equivalent to the start of the foreground portion; diffusing outward from the foremost point, the regions adjacent to the foremost point and continuously varying in depth are acquired, and these regions are merged with the foremost point into the foreground region.
  • the foremost point refers to the pixel corresponding to the object with the smallest depth, that is, the pixel corresponding to the object with the smallest object distance, the one closest to the imaging device 500.
  • Adjacency means that two pixels are connected together.
  • continuous depth variation means that the depth difference between two adjacent pixels is smaller than a predetermined difference; in other words, the depth varies continuously across adjacent pixels whose depth difference is smaller than the predetermined difference.
  • step S124 may include the following steps:
  • S1246: Obtain the foremost point of the scene main image according to the depth information; and
  • S1248: Find the regions whose depth differs from that of the foremost point by less than a predetermined threshold as the foreground portion.
  • in this way, the logically connected foreground portion of the scene main image can be obtained.
  • in a real scene, the foreground may not be physically connected but may follow some logical relationship, such as a scene in which an eagle swoops down to catch a chick: the eagle and the chick may not be physically connected, but logically they can be judged to be linked.
  • the foremost point of the scene main image is first obtained according to the depth information. The foremost point is equivalent to the start of the foreground portion; diffusing outward from the foremost point, the regions whose depth differs from that of the foremost point by less than the predetermined threshold are acquired, and these regions are merged with the foremost point into the foreground region.
  • the predetermined threshold can be a value set by the user. In this way, the user can determine the range of the foreground part according to his own needs, thereby obtaining an ideal composition suggestion and achieving an ideal composition.
  • the predetermined threshold may be a value determined by the backlighting image processing apparatus 100, without any limitation.
  • the predetermined threshold determined by the backlight image processing apparatus 100 may be a fixed value stored internally, or may be a value calculated according to a different situation, such as the depth of the foremost point.
  • step S124 may include the following step: finding a region whose depth lies within a predetermined interval as the foreground portion.
  • in some shooting situations, the foreground portion is not the frontmost part but a part slightly behind it.
  • for example, when a person sits behind a computer, the computer is nearer to the camera, but the person is the subject. Taking the region whose depth lies within the predetermined interval as the foreground portion can therefore effectively avoid selecting the wrong subject.
  • step S10 includes the following step: S18, judging that the scene main image has no subject when the area ratio falls outside the predetermined range.
  • the backlight image processing method includes the following step: S30, directly outputting the scene main image when the scene main image has no subject.
  • the determining sub-module 16 is further configured to judge that the scene main image has no subject when the area ratio falls outside the predetermined range.
  • the backlight image processing apparatus 100 includes an output module 30.
  • the output module 30 is configured to directly output the scene main image when the scene main image has no subject.
  • step S18 can be implemented by the determining sub-module 16, and step S30 can be implemented by the output module 30.
  • the backlight image processing method includes the following step: S40, determining the region of the scene main image other than the subject to be the background portion.
  • step S20 includes the following steps:
  • S22: Process the scene main image so that the background portion is overexposed; S24: Process the scene main image so that the brightness of the subject is increased; and S26: Process the scene main image so that a strong light-scattering effect appears at the contour of the subject.
  • the backlighting image processing apparatus 100 includes a determination module 40.
  • the determining module 40 is configured to determine that the area of the scene main image other than the subject is a background portion.
  • the second processing module 20 includes a second processing sub-module 22, a third processing sub-module 24, and a fourth processing sub-module 26.
  • the second processing sub-module 22 is configured to process the scene main image to overexpose the background portion.
  • the third processing sub-module 24 is for processing the scene main image to increase the brightness of the subject.
  • the fourth processing sub-module 26 is configured to process the scene main image such that the contour of the subject exhibits a strong light scattering effect.
  • step S40 can be implemented by the determining module 40
  • step S22 can be implemented by the second processing sub-module 22
  • step S24 can be implemented by the third processing sub-module 24
  • step S26 can be implemented by the fourth processing sub-module 26.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • features defined with "first" or "second" may explicitly or implicitly include one or more of the described features.
  • the meaning of "a plurality" is two or more, unless specifically defined otherwise.
  • the terms "installation", "connected", and "connection" should be understood broadly: the connection may be fixed, detachable, or integral; it may be mechanical, electrical, or communicative; it may be direct, or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements.
  • a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device.
  • more specific examples of computer-readable media include the following: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the embodiments of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, they can be implemented by any one of the following techniques well known in the art, or a combination thereof: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Abstract

The present invention discloses a depth-based backlight image processing method. The backlight image processing method includes: (S10) processing scene data to obtain a subject of a scene main image; and (S20) processing the scene main image according to the subject to render the effect of the subject being backlit. The present invention also discloses a backlight image processing device (100) and an electronic device (1000).

Description

Backlight image processing method, backlight image processing device, and electronic device
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 201710138846.5, filed with the State Intellectual Property Office of China on March 9, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to imaging technology, and in particular to a backlight image processing method, a backlight image processing device, and an electronic device.
Background
A backlight-effect image is produced by image processing so that the image appears to have been shot against the light. When the subject of the image has not been, or cannot be, determined, the subject of the resulting backlight-effect image tends not to stand out, and the visual effect is poor.
Summary
The present invention aims to solve at least one of the technical problems existing in the prior art. To this end, embodiments of the present invention provide a backlight image processing method, a backlight image processing device, and an electronic device.
A depth-based backlight image processing method is provided for processing scene data collected by an imaging device, the scene data including a scene main image. The backlight image processing method includes the following steps:
processing the scene data to obtain a subject of the scene main image; and
processing the scene main image according to the subject to render the effect of the subject being backlit.
A depth-based backlight image processing device is provided for processing scene data collected by an imaging device, the scene data including a scene main image. The backlight image processing device includes a first processing module and a second processing module.
The first processing module is configured to process the scene data to obtain a subject of the scene main image.
The second processing module is configured to process the scene main image according to the subject to render the effect of the subject being backlit.
An electronic device includes an imaging device and the backlight image processing device described above.
The backlight image processing method, the backlight image processing device, and the electronic device of the present invention process the scene main image according to depth information, thereby obtaining a backlight-effect image with a better visual effect.
Additional aspects and advantages of the present invention will be given in part in the following description, and in part will become apparent from the following description or be learned through practice of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a backlight image processing method according to an embodiment of the present invention.
FIG. 2 is a schematic plan view of an electronic device according to an embodiment of the present invention.
FIG. 3 is another schematic flowchart of the backlight image processing method according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of the functional modules of a first processing module according to an embodiment of the present invention.
FIG. 5 is still another schematic flowchart of the backlight image processing method according to an embodiment of the present invention.
FIG. 6 is a schematic diagram of the functional modules of a first processing sub-module according to an embodiment of the present invention.
FIG. 7 is yet another schematic flowchart of the backlight image processing method according to an embodiment of the present invention.
FIG. 8 is a schematic diagram of the functional modules of a processing unit according to an embodiment of the present invention.
FIG. 9 is yet another schematic flowchart of the backlight image processing method according to an embodiment of the present invention.
FIG. 10 is a schematic diagram of another set of functional modules of the processing unit according to an embodiment of the present invention.
FIG. 11 is yet another schematic flowchart of the backlight image processing method according to an embodiment of the present invention.
FIG. 12 is a schematic diagram of the functional modules of an acquisition unit according to an embodiment of the present invention.
FIG. 13 is yet another schematic flowchart of the backlight image processing method according to an embodiment of the present invention.
FIG. 14 is yet another schematic flowchart of the backlight image processing method according to an embodiment of the present invention.
FIG. 15 is a schematic diagram of the functional modules of a backlight image processing device according to an embodiment of the present invention.
FIG. 16 is yet another schematic flowchart of the backlight image processing method according to an embodiment of the present invention.
FIG. 17 is a schematic diagram of another set of functional modules of the backlight image processing device according to an embodiment of the present invention.
Reference numerals of main elements:
electronic device 1000, backlight image processing device 100, first processing module 10, first processing sub-module 12, processing unit 122, first processing sub-unit 1222, second processing sub-unit 1224, third processing sub-unit 1226, fourth processing sub-unit 1228, acquisition unit 124, fifth processing sub-unit 1242, finding sub-unit 1244, judging sub-module 14, determining sub-module 16, second processing module 20, second processing sub-module 22, third processing sub-module 24, fourth processing sub-module 26, output module 30, determining module 40, imaging device 500.
具体实施方式
下面详细描述本发明的实施方式,所述实施方式的实施方式在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本发明,而不能理解为对本发明的限制。
请一并参阅图1和图2,本发明实施方式的基于深度的逆光图像处理方法可以用于处理成像装置500采集的场景数据。场景数据包括场景主图像。逆光图像处理方法包括以下步骤:
S10:处理场景数据以获得场景主图像的主体;和
S20:根据主体处理场景主图像以绘制逆光照射主体的效果。
请再次参阅图2,本发明实施方式的基于深度的逆光图像处理装置100可以用于处理成像装置500采集的场景数据。场景数据包括场景主图像。逆光图像处理装置100包括第一处理模块10和第二处理模块20。第一处理模块10用于处理场景数据以获得场景主图像的主体。第二处理模块20用于根据主体处理场景主图像以绘制逆光照射主体的效果。
也即是说,本发明实施方式的逆光图像处理方法可以由本发明实施方式的逆光图像处理装置100实现,其中,步骤S10可以由第一处理模块10实现,步骤S20可以由第二处理模块20实现。
在某些实施方式中,本发明实施方式的逆光图像处理装置100可以应用于本发明实施方式的电子装置1000,或者说本发明实施方式的电子装置1000可以包括本发明实施方式的逆光图像处理装置100。此外,本发明实施方式的电子装置1000还包括成像装置500,成像装置500和逆光图像处理装置100电连接。
本发明实施方式的逆光图像处理方法、逆光图像处理装置100及电子装置1000利用根据深度信息处理场景主图像,从而获得视觉效果更好的逆光特效图像。
在某些实施方式中,电子装置1000包括手机、平板电脑、智能手表、笔记本电脑、智能手环、智能眼镜或智能头盔。在本发明实施方式中,电子装置1000是手机。
在某些实施方式中,成像装置500包括前置相机和/或后置相机,在此不做任何限制。在本发明实施方式中,成像装置500是前置相机。
Referring to FIG. 3, in some embodiments, step S10 includes the following steps:
S12: processing the scene data to acquire the foreground portion of the scene main image;
S14: judging whether the area ratio of the foreground portion to the scene main image falls within a predetermined range; and
S16: determining the foreground portion to be the subject when the area ratio falls within the predetermined range.
Referring to FIG. 4, in some embodiments, the first processing module 10 includes a first processing sub-module 12, a judging sub-module 14, and a determining sub-module 16. The first processing sub-module 12 is configured to process the scene data to acquire the foreground portion of the scene main image. The judging sub-module 14 is configured to judge whether the area ratio of the foreground portion to the scene main image falls within a predetermined range. The determining sub-module 16 is configured to determine the foreground portion to be the subject when the area ratio falls within the predetermined range.
That is to say, step S12 can be implemented by the first processing sub-module 12, step S14 by the judging sub-module 14, and step S16 by the determining sub-module 16.
In this way, the subject in the scene main image can be obtained accurately.
It can be understood that when the foreground portion is too small or too large, the backlight-effect image produced by the backlight image processing method is unsatisfactory; for example, when the foreground portion is relatively small, the subject of the backlight-effect image is not prominent enough. Therefore, the scene main image is judged to have a subject only when the foreground portion is of a suitable size.
In some embodiments, the predetermined range is 15-60.
In this way, a backlight-effect image with a relatively good visual effect can be obtained.
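For illustration only (this is not part of the patent text), the subject decision of steps S12 through S18 reduces to an area-ratio test on a foreground mask. The Python/NumPy sketch below rests on two assumptions of ours: the foreground has already been extracted as a boolean mask, and the predetermined range of 15-60 is read as 15% to 60% of the image area.

```python
import numpy as np

def select_subject(foreground_mask: np.ndarray,
                   low: float = 0.15, high: float = 0.60):
    """S14/S16/S18: return the subject mask, or None when there is no subject.

    The bounds mirror the patent's predetermined range of 15-60, read here
    as a fraction of the scene main image's area (our interpretation).
    """
    area_ratio = float(foreground_mask.mean())  # fraction of pixels marked foreground
    if low <= area_ratio <= high:               # S14: ratio falls within the range
        return foreground_mask                  # S16: the foreground is the subject
    return None                                 # S18: no subject; output the image as-is
```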
Referring to FIG. 5, in some embodiments, step S12 includes the following steps:
S122: processing the scene data to acquire depth information of the scene main image; and
S124: acquiring the foreground portion of the scene main image according to the depth information.
Referring to FIG. 6, in some embodiments, the first processing sub-module 12 includes a processing unit 122 and an acquisition unit 124. The processing unit 122 is configured to process the scene data to acquire the depth information of the scene main image. The acquisition unit 124 is configured to acquire the foreground portion of the scene main image according to the depth information.
That is to say, step S122 can be implemented by the processing unit 122, and step S124 by the acquisition unit 124.
In this way, the foreground portion of the scene main image can be acquired from the depth information.
Referring to FIG. 7, in some embodiments, the scene data includes a depth image corresponding to the scene main image, and step S122 includes the following steps:
S1222: processing the depth image to acquire depth data of the scene main image; and
S1224: processing the depth data to obtain the depth information.
Referring to FIG. 8, in some embodiments, the processing unit 122 includes a first processing sub-unit 1222 and a second processing sub-unit 1224. The first processing sub-unit 1222 is configured to process the depth image to acquire the depth data of the scene main image. The second processing sub-unit 1224 is configured to process the depth data to obtain the depth information.
That is to say, step S1222 can be implemented by the first processing sub-unit 1222, and step S1224 by the second processing sub-unit 1224.
In this way, the depth information of the scene main image can be obtained quickly using the depth image.
It can be understood that the scene main image is an RGB color image, and the depth image contains the depth information of each person or object in the scene. Since the color information of the scene main image is in one-to-one correspondence with the depth information of the depth image, the depth information of the scene main image can be obtained.
In some embodiments, the depth image corresponding to the scene main image may be acquired in two ways: by structured-light depth ranging, or with a time-of-flight (TOF) depth camera.
When the depth image is acquired by structured-light depth ranging, the imaging device 500 includes a camera and a projector.
It can be understood that structured-light depth ranging uses the projector to project a light structure of a certain pattern onto the surface of the object, forming on the surface a three-dimensional image of light stripes modulated by the shape of the measured object. The camera detects the three-dimensional stripe image to obtain a two-dimensional distorted stripe image. The degree of distortion of the stripes depends on the relative position between the projector and the camera and on the surface profile or height of the object. The displacement along a stripe is proportional to the height of the object surface, a kink indicates a change of plane, and a discontinuity shows a physical gap in the surface. When the relative position between the projector and the camera is fixed, the three-dimensional contour of the object surface can be reproduced from the coordinates of the distorted two-dimensional stripe image, so that the depth information can be acquired. Structured-light depth ranging has high resolution and measurement accuracy.
When the depth image is acquired with a TOF depth camera, the imaging device 500 includes a TOF depth camera.
It can be understood that the TOF depth camera uses a sensor to record the phase change of modulated infrared light emitted from a light-emitting unit to the object and reflected back from the object; within one wavelength, the depth distance of the entire scene can be obtained in real time according to the speed of light. The TOF depth camera is unaffected by the grayscale and surface features of the photographed object when computing depth information, can compute depth information quickly, and offers high real-time performance.
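The patent does not state how the recorded phase maps to depth. As background only, a minimal sketch of the standard continuous-wave TOF relation d = c * delta_phi / (4 * pi * f_mod) (our addition, not a formula from the patent) could look like this:

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(phase_shift: np.ndarray, modulation_hz: float) -> np.ndarray:
    """Depth from the phase shift of continuous-wave modulated infrared light.

    The light covers the camera-object distance twice, hence 4*pi rather
    than 2*pi. The result is unambiguous only within one modulation
    wavelength, i.e. for depths below c / (2 * modulation_hz).
    """
    return SPEED_OF_LIGHT * phase_shift / (4.0 * np.pi * modulation_hz)

# A phase shift of pi/2 at 20 MHz modulation corresponds to about 1.87 m.
print(tof_depth(np.array([np.pi / 2]), 20e6))
```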
Referring to FIG. 9, in some embodiments, the scene data includes a scene sub-image corresponding to the scene main image, and step S122 includes the following steps:
S1226: processing the scene main image and the scene sub-image to obtain depth data of the scene main image; and
S1228: processing the depth data to obtain the depth information.
Referring to FIG. 10, in some embodiments, the processing unit 122 includes a third processing sub-unit 1226 and a fourth processing sub-unit 1228. The third processing sub-unit 1226 is configured to process the scene main image and the scene sub-image to obtain the depth data of the scene main image. The fourth processing sub-unit 1228 is configured to process the depth data to obtain the depth information.
That is to say, step S1226 can be implemented by the third processing sub-unit 1226, and step S1228 by the fourth processing sub-unit 1228.
In this way, the depth information of the scene main image can be acquired by processing the scene main image and the scene sub-image.
In some embodiments, the imaging device 500 includes a primary camera and a secondary camera.
It can be understood that the depth information can be acquired by binocular stereo vision ranging, in which case the scene data includes the scene main image and the scene sub-image, the scene main image being captured by the primary camera and the scene sub-image by the secondary camera. Binocular stereo vision ranging uses two identical cameras to image the same subject from different positions to obtain a stereo image pair of the subject, matches the corresponding image points of the stereo image pair by an algorithm to compute the parallax, and finally recovers the depth information by triangulation. In this way, the depth information of the scene main image can be obtained by matching the stereo image pair formed by the scene main image and the scene sub-image.
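A hypothetical sketch of this binocular ranging step, using OpenCV's stock block matcher, follows; the patent names no particular matching algorithm, and focal_px and baseline_m are made-up calibration values:

```python
import cv2
import numpy as np

def stereo_depth(main_gray: np.ndarray, sub_gray: np.ndarray,
                 focal_px: float = 1000.0, baseline_m: float = 0.02) -> np.ndarray:
    """Match the scene main image and scene sub-image, then triangulate.

    main_gray / sub_gray: rectified 8-bit grayscale views from the primary
    and secondary cameras. Returns per-pixel depth in metres (NaN where no
    match was found).
    """
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(main_gray, sub_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan        # unmatched or invalid pixels
    return focal_px * baseline_m / disparity  # depth = f * B / disparity
```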
Referring to FIG. 11, in some embodiments, step S124 includes the following steps:
S1242: obtaining the foremost point of the scene main image according to the depth information; and
S1244: finding the region adjacent to the foremost point and continuously varying in depth as the foreground portion.
Referring to FIG. 12, in some embodiments, the acquisition unit 124 includes a fifth processing sub-unit 1242 and a finding sub-unit 1244. The fifth processing sub-unit 1242 is configured to obtain the foremost point of the scene main image according to the depth information. The finding sub-unit 1244 is configured to find the region adjacent to the foremost point and continuously varying in depth as the foreground portion.
That is to say, step S1242 can be implemented by the fifth processing sub-unit 1242, and step S1244 by the finding sub-unit 1244.
In this way, the physically connected foreground portion of the scene main image can be obtained. In a real scene, the foreground is usually connected together. Taking the physically connected foreground portion as the subject, the relationship of the foreground portion can be grasped intuitively.
Specifically, the foremost point of the scene main image is first obtained according to the depth information. The foremost point is equivalent to the start of the foreground portion; diffusing outward from the foremost point, the regions adjacent to the foremost point and continuously varying in depth are acquired, and these regions are merged with the foremost point into the foreground region.
It should be noted that the foremost point refers to the pixel corresponding to the object with the smallest depth, that is, the pixel corresponding to the object with the smallest object distance, the one closest to the imaging device 500. Adjacency means that two pixels are connected together. Continuous depth variation means that the depth difference between two adjacent pixels is smaller than a predetermined difference; in other words, the depth varies continuously across adjacent pixels whose depth difference is smaller than the predetermined difference.
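As a sketch of steps S1242 and S1244 (ours, not code from the patent), a breadth-first flood fill can start at the minimum-depth pixel and accept a 4-adjacent pixel whenever the depth step across the shared edge stays below the predetermined difference, whose value the patent leaves open:

```python
from collections import deque
import numpy as np

def foreground_by_growth(depth: np.ndarray, max_step: float) -> np.ndarray:
    """Grow the physically connected foreground from the foremost point."""
    h, w = depth.shape
    seed = np.unravel_index(np.nanargmin(depth), depth.shape)  # foremost point
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            # accept a 4-adjacent pixel when the depth varies "continuously",
            # i.e. the step across the edge is below the predetermined difference
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(depth[ny, nx] - depth[y, x]) < max_step):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```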
Referring to FIG. 13, in some embodiments, step S124 may include the following steps:
S1246: obtaining the foremost point of the scene main image according to the depth information; and
S1248: finding the regions whose depth differs from that of the foremost point by less than a predetermined threshold as the foreground portion.
In this way, the logically connected foreground portion of the scene main image can be obtained. In a real scene, the foreground may not be physically connected but may follow some logical relationship, such as a scene in which an eagle swoops down to catch a chick: the eagle and the chick may not be physically connected, but logically they can be judged to be linked.
Specifically, the foremost point of the scene main image is first obtained according to the depth information. The foremost point is equivalent to the start of the foreground portion; diffusing outward from the foremost point, the regions whose depth differs from that of the foremost point by less than the predetermined threshold are acquired, and these regions are merged with the foremost point into the foreground region.
In some embodiments, the predetermined threshold may be a value set by the user. In this way, the user can determine the range of the foreground portion according to his or her own needs, thereby obtaining an ideal composition suggestion and achieving an ideal composition.
In some embodiments, the predetermined threshold may be a value determined by the backlight image processing device 100, with no limitation here. The predetermined threshold determined by the backlight image processing device 100 may be a fixed value stored internally, or may be a value computed for the situation at hand, for example from the depth of the foremost point.
In some embodiments, step S124 may include the following step:
finding a region whose depth lies within a predetermined interval as the foreground portion.
In this way, a foreground portion whose depth lies within a suitable range can be obtained.
It can be understood that in some shooting situations the foreground portion is not the frontmost part but a part slightly behind it. For example, when a person sits behind a computer, the computer is nearer to the camera, but the person is the subject. Taking the region whose depth lies within the predetermined interval as the foreground portion can therefore effectively avoid selecting the wrong subject.
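Both alternative foreground selectors described above reduce to simple depth masking. The sketch below is our reading of them; the threshold and the interval bounds are left open by the patent and must be supplied by the user or the device:

```python
import numpy as np

def foreground_by_threshold(depth: np.ndarray, threshold: float) -> np.ndarray:
    """S1246/S1248: pixels whose depth differs from the foremost point's by
    less than the predetermined threshold (the logically connected foreground,
    e.g. both the eagle and the chick)."""
    return (depth - np.nanmin(depth)) < threshold

def foreground_by_interval(depth: np.ndarray, near: float, far: float) -> np.ndarray:
    """The depth-interval variant: keeps the person sitting behind a nearer
    computer by bracketing the subject's expected depth range."""
    return (depth >= near) & (depth <= far)
```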
Referring to FIG. 14, in some embodiments, step S10 includes the following step:
S18: judging that the scene main image has no subject when the area ratio falls outside the predetermined range;
and the backlight image processing method includes the following step:
S30: directly outputting the scene main image when the scene main image has no subject.
Referring to FIG. 15, in some embodiments, the determining sub-module 16 is further configured to judge that the scene main image has no subject when the area ratio falls outside the predetermined range. The backlight image processing device 100 includes an output module 30, which is configured to directly output the scene main image when the scene main image has no subject.
That is to say, step S18 can be implemented by the determining sub-module 16, and step S30 by the output module 30.
In this way, when the foreground portion is of an unsuitable size, the scene main image is judged to have no subject and is output directly, which reduces the image processing time.
Referring to FIG. 16, in some embodiments, the backlight image processing method includes the following step:
S40: determining the region of the scene main image other than the subject to be the background portion;
and step S20 includes the following steps:
S22: processing the scene main image so that the background portion is overexposed;
S24: processing the scene main image so that the brightness of the subject is increased; and
S26: processing the scene main image so that a strong light-scattering effect appears at the contour of the subject.
Referring to FIG. 17, in some embodiments, the backlight image processing device 100 includes a determining module 40 configured to determine the region of the scene main image other than the subject to be the background portion. The second processing module 20 includes a second processing sub-module 22, a third processing sub-module 24, and a fourth processing sub-module 26. The second processing sub-module 22 is configured to process the scene main image so that the background portion is overexposed. The third processing sub-module 24 is configured to process the scene main image so that the brightness of the subject is increased. The fourth processing sub-module 26 is configured to process the scene main image so that a strong light-scattering effect appears at the contour of the subject.
That is to say, step S40 can be implemented by the determining module 40, step S22 by the second processing sub-module 22, step S24 by the third processing sub-module 24, and step S26 by the fourth processing sub-module 26.
In this way, different image processing is applied to the background portion, the subject, and the subject contour, thereby obtaining a backlight-effect image with a better visual effect.
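One possible realization of steps S40 and S22/S24/S26 is sketched below for illustration; the gain factors, kernel sizes, and the screen-blended glow are our choices, since the patent specifies the three effects but not how to produce them:

```python
import cv2
import numpy as np

def render_backlight(image: np.ndarray, subject: np.ndarray) -> np.ndarray:
    """image: HxWx3 uint8 scene main image; subject: HxW bool subject mask.

    Everything outside the subject mask is the background portion (S40).
    """
    img = image.astype(np.float32) / 255.0
    mask = subject.astype(np.float32)[..., None]

    out = img * (1.0 - mask) * 2.5   # S22: drive the background toward overexposure
    out += img * mask * 1.3          # S24: increase the subject's brightness

    # S26: light scattering along the contour - take the mask's morphological
    # gradient (a thin band at the subject edge), blur it, and screen-blend
    # a white glow where the blurred band remains.
    edge = cv2.morphologyEx(subject.astype(np.uint8), cv2.MORPH_GRADIENT,
                            np.ones((5, 5), np.uint8)).astype(np.float32)
    glow = cv2.GaussianBlur(edge, (31, 31), 0)[..., None]
    out = 1.0 - (1.0 - np.clip(out, 0.0, 1.0)) * (1.0 - 0.8 * glow)

    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```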
In the description of the embodiments of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Accordingly, features defined with "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present invention, "a plurality" means two or more, unless specifically defined otherwise.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise expressly specified and limited, the terms "installation", "connected", and "connection" should be understood broadly: the connection may be fixed, detachable, or integral; it may be mechanical, electrical, or communicative; it may be direct, or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the embodiments of the present invention can be understood according to the specific situation.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "schematic embodiment", "example", "specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, which should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processing module, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by, or in connection with, the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the embodiments of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented by any one of the following techniques well known in the art, or a combination thereof: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art can understand that all or part of the steps carried by the method of the above embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as an independent product, may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (22)

  1. A depth-based backlight image processing method for processing scene data collected by an imaging device, the scene data comprising a scene main image, wherein the backlight image processing method comprises the following steps:
    processing the scene data to obtain a subject of the scene main image; and
    processing the scene main image according to the subject to render an effect of the subject being backlit.
  2. The backlight image processing method according to claim 1, wherein the step of processing the scene data to obtain the subject of the scene main image comprises the following steps:
    processing the scene data to acquire a foreground portion of the scene main image;
    judging whether an area ratio of the foreground portion to the scene main image falls within a predetermined range; and
    determining the foreground portion to be the subject when the area ratio falls within the predetermined range.
  3. The backlight image processing method according to claim 2, wherein the step of processing the scene data to acquire the foreground portion of the scene main image comprises the following steps:
    processing the scene data to acquire depth information of the scene main image; and
    acquiring the foreground portion of the scene main image according to the depth information.
  4. The backlight image processing method according to claim 3, wherein the scene data comprises a depth image corresponding to the scene main image, and the step of processing the scene data to acquire the depth information of the scene main image comprises the following steps:
    processing the depth image to acquire depth data of the scene main image; and
    processing the depth data to obtain the depth information.
  5. The backlight image processing method according to claim 3, wherein the scene data comprises a scene sub-image corresponding to the scene main image, and the step of processing the scene data to acquire the depth information of the scene main image comprises the following steps:
    processing the scene main image and the scene sub-image to obtain depth data of the scene main image; and
    processing the depth data to obtain the depth information.
  6. The backlight image processing method according to claim 3, wherein the step of acquiring the foreground portion of the scene main image according to the depth information comprises the following steps:
    obtaining a foremost point of the scene main image according to the depth information; and
    finding a region adjacent to the foremost point and continuously varying in depth as the foreground portion.
  7. The backlight image processing method according to claim 2, wherein the predetermined range is 15-60.
  8. The backlight image processing method according to claim 2, wherein the step of processing the scene data to obtain the subject of the scene main image comprises the following step:
    judging that the scene main image has no subject when the area ratio falls outside the predetermined range;
    and the backlight image processing method comprises the following step:
    directly outputting the scene main image when the scene main image has no subject.
  9. The backlight image processing method according to claim 1, wherein the step of processing the scene data to obtain the subject of the scene main image comprises the following step:
    determining a region of the scene main image other than the subject to be a background portion;
    and the step of processing the scene main image according to the subject to render the effect of the subject being backlit comprises the following steps:
    processing the scene main image so that the background portion is overexposed;
    processing the scene main image so that brightness of the subject is increased; and
    processing the scene main image so that a strong light-scattering effect appears at a contour of the subject.
  10. A depth-based backlight image processing device for processing scene data collected by an imaging device, the scene data comprising a scene main image, wherein the backlight image processing device comprises:
    a first processing module configured to process the scene data to obtain a subject of the scene main image; and
    a second processing module configured to process the scene main image according to the subject to render an effect of the subject being backlit.
  11. The backlight image processing device according to claim 10, wherein the first processing module comprises:
    a first processing sub-module configured to process the scene data to acquire a foreground portion of the scene main image;
    a judging sub-module configured to judge whether an area ratio of the foreground portion to the scene main image falls within a predetermined range; and
    a determining sub-module configured to determine the foreground portion to be the subject when the area ratio falls within the predetermined range.
  12. The backlight image processing device according to claim 11, wherein the first processing sub-module comprises:
    a processing unit configured to process the scene data to acquire depth information of the scene main image; and
    an acquisition unit configured to acquire the foreground portion of the scene main image according to the depth information.
  13. The backlight image processing device according to claim 12, wherein the scene data comprises a depth image corresponding to the scene main image, and the processing unit comprises:
    a first processing sub-unit configured to process the depth image to acquire depth data of the scene main image; and
    a second processing sub-unit configured to process the depth data to obtain the depth information.
  14. The backlight image processing device according to claim 12, wherein the scene data comprises a scene sub-image corresponding to the scene main image, and the processing unit comprises:
    a third processing sub-unit configured to process the scene main image and the scene sub-image to obtain depth data of the scene main image; and
    a fourth processing sub-unit configured to process the depth data to obtain the depth information.
  15. The backlight image processing device according to claim 12, wherein the acquisition unit comprises:
    a fifth processing sub-unit configured to obtain a foremost point of the scene main image according to the depth information; and
    a finding sub-unit configured to find a region adjacent to the foremost point and continuously varying in depth as the foreground portion.
  16. The backlight image processing device according to claim 11, wherein the predetermined range is 15-60.
  17. The backlight image processing device according to claim 11, wherein the determining sub-module is further configured to judge that the scene main image has no subject when the area ratio falls outside the predetermined range;
    and the backlight image processing device comprises:
    an output module configured to directly output the scene main image when the scene main image has no subject.
  18. The backlight image processing device according to claim 10, wherein the backlight image processing device comprises:
    a determining module configured to determine a region of the scene main image other than the subject to be a background portion;
    and the second processing module comprises:
    a second processing sub-module configured to process the scene main image so that the background portion is overexposed;
    a third processing sub-module configured to process the scene main image so that brightness of the subject is increased; and
    a fourth processing sub-module configured to process the scene main image so that a strong light-scattering effect appears at a contour of the subject.
  19. An electronic device, comprising:
    an imaging device; and
    the backlight image processing device according to any one of claims 10 to 18.
  20. The electronic device according to claim 19, wherein the imaging device comprises a primary camera and a secondary camera.
  21. The electronic device according to claim 19, wherein the imaging device comprises a camera and a projector.
  22. The electronic device according to claim 19, wherein the imaging device comprises a TOF depth camera.
PCT/CN2018/075492 2017-03-09 2018-02-06 Backlight image processing method, backlight image processing device, and electronic device WO2018161759A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18763396.1A EP3591615B1 (en) 2017-03-09 2018-02-06 Backlight image processing method, backlight image processing device and electronic device
US16/563,370 US11295421B2 (en) 2017-03-09 2019-09-06 Image processing method, image processing device and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710138846.5 2017-03-09
CN201710138846.5A CN106991696B (zh) 2017-03-09 2017-03-09 Backlight image processing method, backlight image processing device, and electronic device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/563,370 Continuation US11295421B2 (en) 2017-03-09 2019-09-06 Image processing method, image processing device and electronic device

Publications (1)

Publication Number Publication Date
WO2018161759A1 true WO2018161759A1 (zh) 2018-09-13

Family

ID=59413176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/075492 WO2018161759A1 (zh) 2017-03-09 2018-02-06 逆光图像处理方法、逆光图像处理装置及电子装置

Country Status (4)

Country Link
US (1) US11295421B2 (zh)
EP (1) EP3591615B1 (zh)
CN (1) CN106991696B (zh)
WO (1) WO2018161759A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991696B (zh) 2017-03-09 2020-01-24 Oppo广东移动通信有限公司 Backlight image processing method, backlight image processing device, and electronic device


Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100467610B1 (ko) * 2002-09-06 2005-01-24 삼성전자주식회사 Method and apparatus for improving digital image quality
US7840066B1 (en) * 2005-11-15 2010-11-23 University Of Tennessee Research Foundation Method of enhancing a digital image by gray-level grouping
US20130169760A1 (en) * 2012-01-04 2013-07-04 Lloyd Watts Image Enhancement Methods And Systems
CN103024165B (zh) * 2012-12-04 2015-01-28 华为终端有限公司 Method and device for automatically setting a shooting mode
WO2014184417A1 (en) * 2013-05-13 2014-11-20 Nokia Corporation Method, apparatus and computer program product to represent motion in composite images
JP2014238731A (ja) * 2013-06-07 2014-12-18 株式会社ソニー・コンピュータエンタテインメント Image processing device, image processing system, and image processing method
JP2015192238A (ja) * 2014-03-27 2015-11-02 キヤノン株式会社 Image data generation device and image data generation method
CN104333710A (zh) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Camera exposure method, apparatus, and device
CN104363377B (zh) * 2014-11-28 2017-08-29 广东欧珀移动通信有限公司 Method, apparatus, and terminal for displaying the focus frame
CN104333748A (zh) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Method, apparatus, and terminal for acquiring the subject object of an image
US10171745B2 (en) * 2014-12-31 2019-01-01 Dell Products, Lp Exposure computation via depth-based computational photography
CN104994363B (zh) * 2015-07-02 2017-10-20 广东欧珀移动通信有限公司 Clothing-based beautification method, apparatus, and smart terminal
CN106469309B (zh) * 2015-08-14 2019-11-12 杭州海康威视数字技术股份有限公司 Vehicle monitoring method and apparatus, processor, and image acquisition device
US10306203B1 (en) * 2016-06-23 2019-05-28 Amazon Technologies, Inc. Adaptive depth sensing of scenes by targeted light projections
CN106101547A (zh) * 2016-07-06 2016-11-09 北京奇虎科技有限公司 Image data processing method, apparatus, and mobile terminal
CN106303250A (zh) 2016-08-26 2017-01-04 维沃移动通信有限公司 Image processing method and mobile terminal
US10269098B2 (en) * 2016-11-01 2019-04-23 Chun Ming Tsang Systems and methods for removing haze in digital photos
WO2018161323A1 (zh) * 2017-03-09 2018-09-13 广东欧珀移动通信有限公司 Depth-based control method, control device, and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1127577A (ja) * 1997-06-30 1999-01-29 Hitachi Ltd Virtual viewpoint image system
CN104424624A (zh) * 2013-08-28 2015-03-18 中兴通讯股份有限公司 Image synthesis optimization method and device
CN104202524A (zh) * 2014-09-02 2014-12-10 三星电子(中国)研发中心 Backlight photographing method and device
CN105303543A (zh) * 2015-10-23 2016-02-03 努比亚技术有限公司 Image enhancement method and mobile terminal
CN105933532A (zh) * 2016-06-06 2016-09-07 广东欧珀移动通信有限公司 Image processing method and device, and mobile terminal
CN106991696A (zh) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Backlight image processing method, backlight image processing device, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3591615A4

Also Published As

Publication number Publication date
EP3591615B1 (en) 2020-12-30
US11295421B2 (en) 2022-04-05
CN106991696A (zh) 2017-07-28
EP3591615A1 (en) 2020-01-08
CN106991696B (zh) 2020-01-24
EP3591615A4 (en) 2020-02-19
US20190392561A1 (en) 2019-12-26

Similar Documents

Publication Publication Date Title
WO2018161758A1 (zh) Exposure control method, exposure control device, and electronic device
US10404969B2 (en) Method and apparatus for multiple technology depth map acquisition and fusion
WO2018161877A1 (zh) Processing method, processing device, electronic device, and computer-readable storage medium
US11978225B2 (en) Depth determination for images captured with a moving camera and representing moving features
US20160210754A1 (en) Surface normal information producing apparatus, image capturing apparatus, surface normal information producing method, and storage medium storing surface normal information producing program
US20190188860A1 (en) Detection system
US20140368615A1 (en) Sensor fusion for depth estimation
TW201415863A (zh) Techniques for generating robust stereo images
CN107750370B (zh) Method and apparatus for determining a depth map for an image
WO2019047985A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110378946B (zh) Depth map processing method and apparatus, and electronic device
CN107992187A (zh) Display method and display system
CN106851107A (zh) Control method, control device, and electronic device for camera-switching-assisted composition
WO2019011110A1 (zh) Method and apparatus for processing the face region in a backlit scene
CN106875433A (zh) Control method, control device, and electronic device for cropping composition
WO2016197494A1 (zh) Method and device for adjusting the focusing area
CN106973236B (zh) Shooting control method and device
WO2018161759A1 (zh) Backlight image processing method, backlight image processing device, and electronic device
JP2004133919A (ja) Pseudo three-dimensional image generation device and method, and program and recording medium therefor
CN107025636B (zh) Image dehazing method and device combining depth information, and electronic device
CN106973224B (zh) Control method, control device, and electronic device for assisted composition
CN106997595A (zh) Depth-of-field-based image color processing method, processing device, and electronic device
CN107018322B (zh) Control method, control device, and electronic device for rotating-camera-assisted composition
US11006094B2 (en) Depth sensing apparatus and operation method thereof
WO2018161322A1 (zh) Depth-based image processing method, processing device, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18763396

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018763396

Country of ref document: EP

Effective date: 20191009

ENP Entry into the national phase

Ref document number: 2018763396

Country of ref document: EP

Effective date: 20190930