WO2020146965A1 - Control method and system for image refocusing - Google Patents

Control method and system for image refocusing

Info

Publication number
WO2020146965A1
WO2020146965A1 (PCT/CN2019/071532, CN2019071532W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
points
point
feature point
feature
Prior art date
Application number
PCT/CN2019/071532
Other languages
English (en)
French (fr)
Inventor
吕键
曾贵
Original Assignee
广东省航空航天装备技术研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东省航空航天装备技术研究所
Priority to PCT/CN2019/071532
Publication of WO2020146965A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • This application relates to the field of image display technology, in particular to a method and system for controlling image refocusing.
  • An exemplary method of refocusing a single image is to use an image sensor integrated with a special microlens array having different sets of focal lengths.
  • when a snapshot is taken, the focusing of the microlens array on the photographed object is controlled so that the pixels of the image sensor are focused at different depths, thereby directly refocusing the scene of a single image or video.
  • however, an image sensor integrated with a microlens array is expensive and costly, and under multi-focal-length conditions it tends to reduce the overall definition.
  • a method and system for controlling image refocusing are provided.
  • a control method for image refocusing including:
  • at different capture angles and different focal lengths, capturing at least two images according to the feature point pattern;
  • the step of projecting the feature point pattern of the target object includes:
  • a feature point pattern is projected to the scene by using the target object.
  • the characteristic points are arranged according to a preset rule.
  • the preset regular arrangement includes a regular interval arrangement of characteristic points in rows and columns, a regular interval arrangement in rows and a regular interval arrangement in columns.
  • the step of obtaining image point groups corresponding to the same feature point in different images and obtaining the position offset between the image points in the image point group includes:
  • the step of obtaining image point groups corresponding to the same feature point in different images, and obtaining the position offset between the image points in the image point group further includes:
  • the step of establishing a second mapping relationship between feature points and image points is specifically:
  • mapping relationship table between feature points and image points is established, and a second mapping relationship between feature points and image points is fitted according to the mapping relationship table.
  • the step of selecting the target image and refocusing the target area on the image according to the depth map includes:
  • the target area on the image is refocused according to the depth map.
  • a control system for image refocusing including:
  • the pattern building device is set to project the characteristic point pattern of the target object
  • the image capturing device is set to capture at least two images according to the characteristic point pattern under different capturing angles and different focal lengths;
  • the offset obtaining device is configured to obtain image point groups corresponding to the same feature point in different images, and obtain the position offset between the image points in the image point group;
  • a depth map generating device configured to obtain the depth information of the feature points according to the image point group, the position offset, and the focal length to generate a depth map
  • the focusing device is set to select the target image and refocus the target area on the image according to the depth map.
  • the pattern creation device includes:
  • the first mapping component is configured to establish a first mapping relationship between the target object and the feature point pattern
  • the projection component is configured to use the target object to project a feature point pattern to the scene according to the first mapping relationship.
  • the projection component includes a projector.
  • the image capturing device includes multiple cameras with different apertures or one camera with multiple apertures.
  • the offset acquisition device includes:
  • the first selection component is configured to select at least one feature point, and obtain a corresponding image point group according to the feature point;
  • the first acquiring component is configured to obtain the position coordinates of the center of each image point in the image point group;
  • the second acquiring component is configured to acquire the position offset between the image points in the image point group according to the position coordinates.
  • the offset acquisition device further includes:
  • the second mapping component is configured to establish a second mapping relationship between feature points and image points.
  • the focusing device includes:
  • the second selection component is configured to compare the focal lengths corresponding to different images, and select the image corresponding to the maximum focal length as the target image;
  • the focusing component is set to refocus the target area on the image according to the depth map.
  • FIG. 1 is a method flowchart of an image refocusing control method in an embodiment
  • Figure 2 is a schematic diagram of a feature point pattern in an embodiment
  • Figure 3 is a schematic diagram of an image captured by a camera in an embodiment
  • FIG. 4 is a schematic diagram of a captured image point in an embodiment
  • FIG. 5 is a schematic diagram of the positions, in the XY plane, of the image point groups corresponding to the same feature point in an embodiment;
  • Fig. 6 is a system structure diagram of an image refocusing control system in an embodiment.
  • FIG. 1 is a method flowchart of an image refocusing control method in an embodiment.
  • control method includes steps S101, S102, S103, S104, and S105.
  • steps S101, S102, S103, S104, and S105 are as follows:
  • step S101 the feature point pattern of the target object is projected.
  • the target object refers to the target shooting object, that is, the image shooting subject that needs to be refocused.
  • the target object is an object with different depths, such as a three-dimensional object;
  • the feature point pattern refers to a projected point pattern corresponding to spatial points of the target object, and the projected points can correspond to different depth information.
  • the feature point patterns can be arranged according to a preset rule, and the preset regular arrangement includes, but is not limited to, feature points arranged at regular intervals in rows and columns, at regular intervals in rows, and at regular intervals in columns.
  • for example, it can be a pattern of sparse dots with regular intervals, including but not limited to matrix points with regular intervals in rows and columns, in rows, or in columns (see Figure 2, which takes matrix points with regular row and column intervals as an example; the small circle a represents a feature point).
  • the feature point pattern can also be irregularly arranged discrete points; the distance between two adjacent feature points can be the same or different.
  • the specific locations and definitions of the feature points are not further restricted here.
  • when the projected point pattern is a sparse point pattern with regular intervals, the projection cost can be reduced, and it is also convenient for subsequent steps to select the image point group corresponding to the same feature point. It should be noted that a given point on the target object occupies only one unique position in the feature point pattern at any time.
  • the target object can be used to project the feature point pattern to the scene through a projector, or the target object can be used to project the feature point pattern to the scene through other projection components.
  • step S101 includes step S1011 and step S1012.
  • step S1011: a first mapping relationship between the target object and the feature point pattern is established.
  • since a given point on the target object occupies only one unique position in the feature point pattern at any time, the first mapping relationship between the target object and the feature point pattern can be established, for example, through perspective projection transformation or orthogonal projection transformation.
  • step S1012 according to the first mapping relationship, a feature point pattern is projected onto the scene using the target object.
  • the projection device is controlled to project the feature point pattern on the target object according to the first mapping relationship.
  • step S102 at different capturing angles and different focal lengths, at least two images are captured according to the feature point pattern.
  • multiple cameras with different apertures located at different shooting angles can be used to capture multiple images of the same scene (i.e., of the feature point pattern) (see Figure 3, which takes two cameras with different apertures as an example, where the first camera 10 is set with a larger aperture value and the second camera 20 with a smaller aperture value); it is also possible to use one camera with multiple apertures to capture multiple images of the same scene at multiple shooting angles.
  • the camera can be applied to electronic devices, such as mobile phones, tablet computers, in-vehicle computers, wearable devices, digital cameras, and any other electronic devices that are capable of taking photos and videos.
  • the center points of different cameras are on the same plane; when one camera captures the same scene, the movement track of its center point is kept on the same plane while the camera moves.
  • the out-of-focus image corresponds to the large-aperture camera; the corresponding focal length is larger, the image resolution is higher, and the image points are blurrier
  • (see Figure 4: the image T1 is the image captured by the first camera 10, where the image points A1, B1, and P1 respectively correspond to the feature points A, B, and P in Figure 3, and the dot in the middle of each circle is the point center of the image point);
  • the focused image corresponds to the small-aperture camera, with a smaller focal length and clearer image points
  • (see Figure 4: the image T2 is the image captured by the second camera 20, where the image points A2, B2, and P2 respectively correspond to the feature points A, B, and P in Figure 3).
  • since the feature points A, B, and P correspond to different depth information, the image points A1, B1, and P1 can lie on different sides of the focal plane of the camera system containing the first camera 10, and the image points A2, B2, and P2 can lie on different sides of the focal plane of the camera system containing the second camera 20.
  • step S103 image point groups corresponding to the same feature point in different images are obtained, and the position offset between the image points in the image point group is obtained.
  • the position offset refers to the offset between the positions of the image point groups corresponding to the same feature point in different images.
  • in one embodiment, the position offset refers to the magnitude or direction of the offset between the centers of the image points in the image point group; the position offset is a vector offset.
  • since a feature point occupies only one unique position in space at any time, each image point on each image can correspond to only one feature point, so the position offset between the image point centers within the same image point group is unique; that is, the image point group and the feature point have a unique mapping relationship.
  • step S103 includes: step S1031, step S1032, and step S1033.
  • step S1031 at least one feature point is selected, and a corresponding image point group is obtained according to the feature point.
  • the image point group corresponding to the feature point can be obtained according to the mapping relationship.
  • step S1032 the position coordinates of the center of each image point in the image point group are acquired.
  • because detection of the image point center is independent of the degree of image blur and of ambient illumination, calculating the position offset from the coordinates of the image point centers can ensure reliable measurement accuracy and improve the accuracy of depth information acquisition.
  • the position offset can be obtained according to the coordinates of the center position of the image point.
  • the plane on which the captured image is located can be selected as the XY plane, and a two-dimensional coordinate system can be established on the XY plane.
  • the origin of the two-dimensional coordinate system is not further limited in this application.
  • the position offset can be obtained by overlapping the first image and the second image, mapping them onto the XY plane, and obtaining the vector distance between the center position coordinates of the image points corresponding to the same feature point in the two images.
  • for example, select a feature point A and, according to the mapping relationship, select the corresponding image point group: A1 (located in image T1) and A2 (located in image T2). The coordinates of the image point A1 in the XY plane are A1(X11, Y11), and the coordinates of the image point A2 are A2(X21, Y21); the position offset D1 can be obtained from the image points A1 and A2. Likewise, for B1 (located in image T1) and B2 (located in image T2), the coordinates of the image point B1 in the XY plane are B1(X12, Y12) and those of the image point B2 are B2(X22, Y22); the position offset D2 can be obtained from the image points B1 and B2.
  • thus, for each feature point, the vector distance between the position coordinates of the image point centers corresponding to the same feature point in the two images can be obtained; from multiple determined feature points, each image point corresponding to each feature point, and the position offset corresponding to each image point group, can be obtained.
  • step S103 further includes step S1034.
  • step S1034: a second mapping relationship between feature points and image points is established.
  • the second mapping relationship between the feature points and the image points can be preset through experiments, theoretical calculations, etc., or a combination of methods, and then the feature points and the corresponding image point groups are matched according to the second mapping relationship.
  • a mapping relationship table between feature points and image points may be established in advance, and the second mapping relationship between feature points and image points can be fitted according to the mapping relationship table.
  • the mapping relationship between feature points and image points can be fitted by setting a function model to determine the function satisfied by the position coordinates of the feature points and the position coordinates of the image points; using computer geometry techniques, a fitting curve can be drawn in a two-dimensional coordinate system to determine the function satisfied by the position coordinates of a feature point and those of the corresponding image point.
  • step S104 the depth information of the feature points is acquired according to the image point group, the position offset, and the focal length to generate a depth map.
  • the depth information of the feature point is obtained according to the position coordinates of the center of the image point group, the position offset of the center of the image point group, and the focal length of the used camera.
  • taking two cameras as an example, where the center point of the first camera and that of the second camera lie on the same plane, the shooting positions (shooting angles) of the two cameras, the distance between the camera center points, and the focal lengths of the first and second cameras can be set and determined.
  • based on the triangulation principle, the distance Z between a feature point and the plane containing the center points of the two cameras can be obtained, where the distance Z is the depth information of the feature point.
  • specifically, distance Z = (distance between the center points of the two cameras × focal length of the first or second camera) / position offset.
  • since the position offset is a vector, the reconstruction of depth information can be extended to both sides of the focal plane of the camera system containing the cameras.
  • this solution can also be applied to electronic devices that include three or more cameras. Taking three cameras as an example, pairwise combinations of cameras can be formed; the two cameras in each combination can obtain the depth information of the feature points, so three sets of depth information can be obtained, and the average of the three depths can be taken as the actual depth of the feature point. This improves the accuracy of depth information acquisition and thus enables precise focusing on the photographed object.
  • the depth map corresponding to the sparse dot pattern can be acquired alone. If the depth information of all feature points is needed, surface interpolation or approximation algorithms can be used to calculate the depth values of other, undetermined points between the determined feature point depths.
  • step S105 the target image is selected, and the target area on the image is refocused according to the depth map.
  • the image to be refocused is selected first, and then a target area is determined from the target image for refocusing.
  • the target area refers to an area of interest on the target image, such as a face area in a portrait, or other areas with special marks.
  • the target area can be selected according to actual needs.
  • the target image is an image with a better defocus effect, that is, an image captured by a camera with a larger aperture is selected; such an image has blurrier points and higher image resolution, so the contrast during refocusing is higher and the refocusing effect is more prominent.
  • step S105 may include: step S1051 and step S1052.
  • step S1051 the focal lengths corresponding to different images are compared, and the image corresponding to the maximum focal length is selected as the target image.
  • step S1052 the target area on the image is refocused according to the depth map.
  • multiple target areas can be selected, and different depth information is matched for each target area of the target image in turn; each target area is then refocused according to its matched depth information.
  • the control method provided in this embodiment projects the feature point pattern of the target object; captures at least two images according to the feature point pattern at different capture angles and different focal lengths; obtains the position offsets between the image points of the image point groups corresponding to the same feature point in different images; then obtains more accurate depth information according to the image point groups, the position offsets, and the focal lengths, and generates a depth map; thereby achieving precise focusing on the target area of the image according to the depth map, improving the overall definition of the image, and improving the user experience.
  • although the steps in the flowchart of FIG. 1 are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless clearly stated in this article, the execution of these steps is not strictly limited in order, and these steps can be executed in other orders. Moreover, at least part of the steps in FIG. 1 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same time but can be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
  • FIG. 6 is a system structure diagram of an image refocusing control system provided by an embodiment.
  • the control system of this embodiment includes: a pattern building device 101, an image capturing device 102, an offset acquiring device 103, a depth map generating device 104, and a focusing device 105. specifically:
  • the pattern building device 101 is configured to project the characteristic point pattern of the target object.
  • the image capturing device 102 is configured to capture at least two images according to the characteristic point pattern under different capturing angles and different focal lengths.
  • the offset obtaining device 103 is configured to obtain image point groups corresponding to the same feature point in different images, and obtain the position offset between the image points in the image point group.
  • the depth map generating device 104 is configured to obtain the depth information of the characteristic points according to the image point group, the position offset, and the focal length to generate a depth map.
  • the focusing device 105 is set to select the target image and refocus the target area on the image according to the depth map.
  • the image capturing device 102 cooperates with the pattern building device 101; the offset obtaining device 103 is connected to the image capturing device 102; the depth map generating device 104 is connected to the offset obtaining device 103 and the focusing device 105; and the focusing device 105 is also connected to the image capturing device 102 to select the target image;
  • the pattern creation device 101 includes but is not limited to projectors and other projection components;
  • the image capture device 102 includes, but is not limited to, multiple cameras with different apertures or one camera with multiple apertures;
  • the offset acquiring device 103 and the depth map generating device 104 include but are not limited to an image analysis device;
  • the focusing device 105 includes but is not limited to an image processing device.
  • the pattern building device 101 includes a first mapping component and a projection component.
  • the first mapping component is configured to establish a first mapping relationship between the target object and the feature point pattern.
  • the projection component is configured to use the target object to project a feature point pattern to the scene according to the first mapping relationship.
  • the offset acquisition device 103 includes a first selection component, a first acquisition component, and a second acquisition component.
  • the first selection component is configured to select at least one characteristic point, and obtain a corresponding image point group according to the characteristic point.
  • the first obtaining component is configured to obtain the position coordinates of the center of each image point in the image point group.
  • the second acquiring component is configured to acquire the position offset between the image points in the image point group according to the position coordinates.
  • the offset obtaining device 103 further includes a second mapping component.
  • the second mapping component is configured to establish a second mapping relationship between feature points and image points.
  • the focusing device 105 includes a second selection component and a focusing component.
  • the second selection component is configured to compare the focal lengths corresponding to different images, and select the image corresponding to the maximum focal length as the target image.
  • the second selection component includes but is not limited to an image processor.
  • the focus component is set to refocus the target area on the image according to the depth map.
  • Focusing components include but are not limited to image adjusters.
  • the control system includes a pattern building device, an image capturing device, an offset acquisition device, a depth map generating device, and a focusing device.
  • the pattern building device projects the feature point pattern of the target object; the image capturing device captures at least two images according to the feature point pattern at different capture angles and different focal lengths; the offset acquisition device then obtains the position offsets between the image points of the image point groups corresponding to the same feature point in different images; the depth map generating device obtains depth information of higher accuracy according to the image point groups, the position offsets, and the focal lengths to generate a depth map; the focusing device focuses on the target area according to the depth map.
  • the system can thus achieve precise focusing on the target area of the image, improve the overall definition of the image, and improve the user experience.
  • Each device in the above-mentioned control system can be implemented in whole or in part by software, hardware and a combination thereof.
  • Each of the above-mentioned devices may be embedded in or independent of the processor in the computer equipment in the form of hardware, or may be stored in the memory in the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned devices.
  • the embodiments of the present application also provide a computer-readable storage medium.
  • one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the steps of the control method in any of the above embodiments.
  • the embodiments of the present application also provide a terminal device, which includes a processor, and the processor is configured to execute a computer program stored in a memory to implement the steps of the control method provided in each of the foregoing embodiments.
  • the program can be stored in a non-volatile computer readable storage medium.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), etc.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

A control method and system for image refocusing. The method includes: projecting a feature point pattern of a target object (S101); capturing at least two images according to the feature point pattern at different capture angles and different focal lengths (S102); obtaining the position offsets between the image points of the image point groups corresponding to the same feature point in different images (S103); obtaining depth information of higher accuracy according to the image point groups, the position offsets, and the focal lengths, and generating a depth map (S104); and selecting a target image and refocusing a target area on the image according to the depth map (S105). The method can improve the user experience.

Description

Control Method and System for Image Refocusing
Technical Field
This application relates to the field of image display technology, and in particular to a control method and system for image refocusing.
Background
The statements here merely provide background information related to this application and do not necessarily constitute prior art.
Refocusing of an image or video is an important means in photography and videography. An exemplary method of refocusing a single image is to use an image sensor integrated with a special microlens array having different sets of focal lengths. When a snapshot is taken, the focusing of the microlens array on the photographed object is controlled so that the pixels of the image sensor are focused at different depths, thereby directly refocusing the scene of a single image or video.
However, an image sensor integrated with a microlens array is expensive and costly, and under multi-focal-length conditions it tends to reduce the overall definition.
Summary
According to various embodiments disclosed in this application, a control method and system for image refocusing are provided.
A control method for image refocusing, including:
projecting a feature point pattern of a target object;
capturing at least two images according to the feature point pattern at different capture angles and different focal lengths;
obtaining image point groups corresponding to the same feature point in different images, and obtaining the position offsets between the image points in the image point groups;
obtaining depth information of the feature points according to the image point groups, the position offsets, and the focal lengths, and generating a depth map;
selecting a target image, and refocusing a target area on the image according to the depth map.
In one embodiment, the step of projecting the feature point pattern of the target object includes:
establishing a first mapping relationship between the target object and the feature point pattern;
projecting, according to the first mapping relationship, the feature point pattern to the scene by using the target object.
In one embodiment, in the feature point pattern, the feature points are arranged according to a preset rule.
In one embodiment, the preset regular arrangement includes feature points arranged at regular intervals in rows and columns, at regular intervals in rows, and at regular intervals in columns.
In one embodiment, the step of obtaining image point groups corresponding to the same feature point in different images and obtaining the position offsets between the image points in the image point groups includes:
selecting at least one feature point, and obtaining the corresponding image point group according to the feature point;
obtaining the position coordinates of the center of each image point in the image point group;
obtaining the position offsets between the image points in the image point group according to the position coordinates.
In one embodiment, the step of obtaining image point groups corresponding to the same feature point in different images and obtaining the position offsets between the image points in the image point groups further includes:
establishing a second mapping relationship between feature points and image points.
In one embodiment, the step of establishing the second mapping relationship between feature points and image points is specifically:
establishing a mapping relationship table between feature points and image points, and fitting the second mapping relationship between feature points and image points according to the mapping relationship table.
In one embodiment, the step of selecting the target image and refocusing the target area on the image according to the depth map includes:
comparing the focal lengths corresponding to different images, and selecting the image corresponding to the maximum focal length as the target image;
refocusing the target area on the image according to the depth map.
A control system for image refocusing, including:
a pattern building device, configured to project a feature point pattern of a target object;
an image capturing device, configured to capture at least two images according to the feature point pattern at different capture angles and different focal lengths;
an offset acquiring device, configured to obtain image point groups corresponding to the same feature point in different images, and to obtain the position offsets between the image points in the image point groups;
a depth map generating device, configured to obtain depth information of the feature points according to the image point groups, the position offsets, and the focal lengths, and to generate a depth map;
a focusing device, configured to select a target image and refocus a target area on the image according to the depth map.
In one embodiment, the pattern building device includes:
a first mapping component, configured to establish a first mapping relationship between the target object and the feature point pattern;
a projection component, configured to project, according to the first mapping relationship, the feature point pattern to the scene by using the target object.
In one embodiment, the projection component includes a projector.
In one embodiment, the image capturing device includes multiple cameras with different apertures or one camera with multiple apertures.
In one embodiment, the offset acquiring device includes:
a first selection component, configured to select at least one feature point and obtain the corresponding image point group according to the feature point;
a first acquiring component, configured to obtain the position coordinates of the center of each image point in the image point group;
a second acquiring component, configured to obtain the position offsets between the image points in the image point group according to the position coordinates.
In one embodiment, the offset acquiring device further includes:
a second mapping component, configured to establish a second mapping relationship between feature points and image points.
In one embodiment, the focusing device includes:
a second selection component, configured to compare the focal lengths corresponding to different images and select the image corresponding to the maximum focal length as the target image;
a focusing component, configured to refocus the target area on the image according to the depth map.
The details of one or more embodiments of this application are set forth in the drawings and description below. Other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a method flowchart of a control method for image refocusing in an embodiment;
FIG. 2 is a schematic diagram of a feature point pattern in an embodiment;
FIG. 3 is a schematic diagram of images captured by cameras in an embodiment;
FIG. 4 is a schematic diagram of the image points of a captured image in an embodiment;
FIG. 5 is a schematic diagram of the positions, in the XY plane, of the image point groups corresponding to the same feature point in an embodiment;
FIG. 6 is a system structure diagram of a control system for image refocusing in an embodiment.
Detailed Description
To make the technical solutions and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
Referring to FIG. 1, FIG. 1 is a method flowchart of a control method for image refocusing in an embodiment.
In this embodiment, the control method includes steps S101, S102, S103, S104, and S105, detailed as follows:
In step S101, a feature point pattern of a target object is projected.
In this embodiment, the target object refers to the target shooting object, that is, the image shooting subject that ultimately needs to be refocused; usually the target object is an object with different depths, such as a three-dimensional object. The feature point pattern refers to a projected point pattern corresponding to spatial points of the target object, and can correspond to different depth information.
The feature point pattern can be arranged according to a preset rule, and the preset regular arrangement includes but is not limited to feature points arranged at regular intervals in rows and columns, in rows, or in columns. For example, it can be a sparse dot pattern with regular intervals, including but not limited to matrix points with regular intervals in rows and columns, in rows, or in columns (see FIG. 2, which takes matrix points with regular row and column intervals as an example, where the small circle a represents a feature point). Of course, the feature point pattern can also be irregularly arranged discrete points; the distance between two adjacent feature points can be the same or different, and the specific locations and definitions of the feature points are not further limited here. When the projected point pattern is a sparse point pattern with regular intervals, the projection cost can be reduced, and it is also convenient for subsequent steps to select the image point group corresponding to the same feature point. It should be noted that a given point on the target object occupies only one unique position in the feature point pattern at any time.
The feature point pattern can be projected to the scene by a projector using the target object, or by other projection components using the target object.
In one embodiment, step S101 includes steps S1011 and S1012.
In step S1011, a first mapping relationship between the target object and the feature point pattern is established.
Since a given point on the target object occupies only one unique position in the feature point pattern at any time, a first mapping relationship between the target object and the feature point pattern can be established, for example, through perspective projection transformation or orthogonal projection transformation.
In step S1012, according to the first mapping relationship, the feature point pattern is projected to the scene by using the target object.
After the first mapping relationship between the target object and the feature point pattern is established, the projection device is controlled according to the first mapping relationship to project the feature point pattern onto the target object.
In step S102, at different capture angles and different focal lengths, at least two images are captured according to the feature point pattern.
In this embodiment, multiple cameras with different apertures located at different shooting angles can be used to capture multiple images of the same scene (that is, of the feature point pattern) (see FIG. 3, which takes two cameras with different apertures as an example, where the first camera 10 is set with a larger aperture value and the second camera 20 with a smaller aperture value); alternatively, one camera with multiple apertures can capture multiple images of the same scene at multiple shooting angles. The cameras can be applied in electronic devices, for example in mobile phones, tablet computers, in-vehicle computers, wearable devices, digital cameras, and any other electronic devices capable of taking photos and videos.
In one embodiment, when different cameras capture the same scene, the center points of the different cameras lie on the same plane; when a single camera captures the same scene, the movement track of its center point is kept on the same plane while the camera moves.
At different capture angles and different focal lengths, different images of the same scene can be captured according to the feature point pattern. Because the capture angles differ, the positions of the image points of the same scene are offset between different images; because the focal lengths differ, among the different images of the same scene there are focused images and out-of-focus images, which also causes the image point positions to shift. The out-of-focus image corresponds to the large-aperture camera; its focal length is larger, its image resolution is higher, and its image points are blurrier (see FIG. 4: the image T1 is captured by the first camera 10, where the image points A1, B1, and P1 respectively correspond to the feature points A, B, and P in FIG. 3, and the dot in the middle of each circle is the point center of the image point). The focused image corresponds to the small-aperture camera; its focal length is smaller and its image points are clearer (see FIG. 4: the image T2 is captured by the second camera 20, where the image points A2, B2, and P2 respectively correspond to the feature points A, B, and P in FIG. 3). Since the feature points A, B, and P correspond to different depth information, the image points A1, B1, and P1 can lie on different sides of the focal plane of the camera system containing the first camera 10, and the image points A2, B2, and P2 can lie on different sides of the focal plane of the camera system containing the second camera 20.
In step S103, image point groups corresponding to the same feature point in different images are obtained, and the position offsets between the image points in the image point groups are obtained.
In this embodiment, the position offset refers to the offset between the positions of the image point groups corresponding to the same feature point in different images. In one embodiment, the position offset refers to the magnitude or direction of the offset between the centers of the image points in an image point group; the position offset is a vector offset. Since a feature point occupies only one unique position in space at any time, each image point on each image can correspond to only one feature point, so the position offset between the image point centers within the same image point group is unique.
In this embodiment, a feature point occupies only one unique position in space at any time, so each image point on each image can correspond to only one feature point; that is, the image point groups and the feature points have a unique mapping relationship.
In one embodiment, step S103 includes steps S1031, S1032, and S1033.
In step S1031, at least one feature point is selected, and the corresponding image point group is obtained according to the feature point.
When a feature point is selected, the corresponding image point group can be obtained according to the mapping relationship. Alternatively, an image point can be selected and the corresponding feature point obtained according to the mapping relationship, so that the other image points corresponding to the same feature point are obtained from that feature point.
In step S1032, the position coordinates of the center of each image point in the image point group are obtained.
After the image point group corresponding to the same feature point is determined, the position of each image point in the group is determined, and the position coordinates of each image point center are obtained. Because the images are captured at different focal lengths, different images have different degrees of blur; that is, the image point group corresponding to the same feature point has different degrees of image blur. The detection of the image point center, however, is independent of the degree of image blur and of ambient illumination, so under different imaging conditions, calculating the position offset from the coordinates of the image point centers ensures reliable measurement accuracy and improves the accuracy of depth information acquisition.
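How the image point centers are detected is left open above; one concrete possibility is to locate each projected dot as the intensity centroid of a connected component, since a centroid survives defocus blur. A minimal illustrative sketch (not part of the patent; OpenCV and NumPy are assumed, with bright dots on a dark background):

```python
import cv2

def detect_point_centers(image_gray, min_area=4):
    """Return sub-pixel centers (x, y) of bright projected dots.

    Centroids of connected components are used because a centroid is
    largely insensitive to defocus blur, matching the observation in
    step S1032 that center detection is independent of image blur.
    """
    # Otsu thresholding separates the projected dots from the background.
    _, binary = cv2.threshold(image_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [tuple(centroids[i]) for i in range(1, n)  # label 0 is background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```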
In step S1033, the position offsets between the image points in the image point group are obtained according to the position coordinates.
After the image point group corresponding to the same feature point is selected, the position offset can be obtained from the position coordinates of the image point centers.
In the above steps, the plane of the captured images can be chosen as the XY plane, and a two-dimensional coordinate system can be established on the XY plane; the position of the origin of this coordinate system is not further limited in this application. Taking two captured images as an example, the position offset can be obtained by overlapping the first image and the second image, mapping them onto the XY plane, and obtaining the vector distance between the center position coordinates of the image points corresponding to the same feature point in the two images. For example, select a feature point A and, according to the mapping relationship, select the corresponding image point group (see FIGS. 3-5): A1 (located in image T1) and A2 (located in image T2). The coordinates of the image point A1 in the XY plane are A1(X11, Y11), and the coordinates of the image point A2 are A2(X21, Y21); the position offset D1 can be obtained from the image points A1 and A2. Likewise, for B1 (located in image T1) and B2 (located in image T2), the coordinates of the image point B1 in the XY plane are B1(X12, Y12) and those of the image point B2 are B2(X22, Y22); the position offset D2 can be obtained from the image points B1 and B2.
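As a worked illustration of the example above, the offsets D1 and D2 are simply vector differences of matched center coordinates. A hypothetical sketch (the coordinate values are made up; the matching is assumed to come from the second mapping relationship):

```python
import numpy as np

# Matched image point centers in the common XY plane (hypothetical values):
# each entry pairs a center in image T1 with its counterpart in image T2.
matches = {
    "A": ((120.4, 88.2), (131.9, 88.5)),    # A1 in T1, A2 in T2
    "B": ((240.1, 150.7), (249.3, 151.0)),  # B1 in T1, B2 in T2
}

# The position offset is a vector: D = center(T2) - center(T1).
offsets = {k: np.subtract(p2, p1) for k, (p1, p2) in matches.items()}
for name, d in offsets.items():
    print(name, "offset vector:", d, "magnitude:", np.linalg.norm(d))
```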
Thus, for each feature point, the vector distance between the position coordinates of the image point centers corresponding to the same feature point in the two images can be obtained; from multiple determined feature points, each image point corresponding to each feature point, and the position offset corresponding to each image point group, can be obtained.
In one embodiment, to improve the matching between feature points and image point groups, step S103 further includes step S1034.
In step S1034, a second mapping relationship between feature points and image points is established.
The second mapping relationship between feature points and image points can be preset through experiments, theoretical calculation, or a combination of such methods, and the feature points are then matched with the corresponding image point groups according to the second mapping relationship. In one embodiment, a mapping relationship table between feature points and image points can be established in advance, and the second mapping relationship between feature points and image points can be fitted according to the mapping relationship table. The mapping relationship between feature points and image points can be fitted by setting a function model to determine the function satisfied by the position coordinates of the feature points and the position coordinates of the image points; using computer geometry techniques, a fitting curve can be drawn in a two-dimensional coordinate system to determine the function satisfied by the position coordinates of a feature point and those of the corresponding image point.
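The fitting step is not tied to a particular model; as one possibility, a least-squares polynomial fit over the mapping relationship table can serve as the function model. A sketch under the assumption that the table pairs one coordinate of each feature point with the same coordinate of its image point (all values hypothetical):

```python
import numpy as np

# Hypothetical mapping relationship table: x-coordinates of feature points
# (projection plane) and of their corresponding image points (image plane).
feature_x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
image_x = np.array([14.8, 24.1, 33.5, 42.7, 52.2])

# A degree-1 polynomial assumes an approximately affine relation between
# the two coordinate systems; higher degrees would allow for distortion.
model = np.poly1d(np.polyfit(feature_x, image_x, deg=1))
print("predicted image x for feature x = 35:", model(35.0))
```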
In step S104, depth information of the feature points is obtained according to the image point groups, the position offsets, and the focal lengths, and a depth map is generated.
In this embodiment, the depth information of a feature point is obtained from the position coordinates of the image point centers of the image point group, the position offset between those centers, and the focal length of the camera used. Taking image capture by two cameras as an example, where the center point of the first camera and that of the second camera lie on the same plane, the shooting positions (shooting angles) of the two cameras, the distance between the camera center points, and the focal lengths of the first and second cameras can be set and determined. Based on the triangulation principle, the distance Z between a feature point and the plane containing the center points of the two cameras can be obtained, where the distance Z is the depth information of the feature point. Specifically, distance Z = (distance between the center points of the two cameras × focal length of the first or second camera) / position offset. Since the position offset is a vector, the reconstruction of depth information can be extended to both sides of the focal plane of the camera system containing the cameras.
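The formula above is ordinary stereo triangulation. A minimal sketch with hypothetical numbers, assuming the baseline is in millimeters and the focal length and offset are in pixels:

```python
import numpy as np

def depth_from_offset(baseline_mm, focal_px, offset_px):
    """Z = baseline * focal length / position offset (triangulation).

    The magnitude of the vector offset is used here; its direction tells
    on which side of the focal plane the feature point lies.
    """
    disparity = np.linalg.norm(offset_px)
    if disparity == 0:
        return float("inf")  # no measurable offset: point at infinity
    return baseline_mm * focal_px / disparity

# Hypothetical values: 60 mm between camera centers, 1400 px focal length,
# and the offset vector D1 obtained in step S1033.
print(depth_from_offset(60.0, 1400.0, np.array([11.5, 0.3])))  # about 7302 mm
```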
Optionally, this solution can also be applied to electronic devices that include three or more cameras. Taking three cameras as an example, pairwise combinations of cameras can be formed; the two cameras in each combination can obtain the depth information of the feature points, so three sets of depth information can be obtained, and the average of the three depths can be taken as the actual depth of the feature point. This improves the accuracy of depth information acquisition and thus enables precise focusing on the photographed object.
In this embodiment, only the depth map corresponding to the sparse dot pattern may be acquired. If the depth information of all feature points is needed, surface interpolation or approximation algorithms can be used to calculate the depth values of other, undetermined points between the determined feature point depths.
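Surface interpolation is one of several options the text allows; a sketch using SciPy's griddata over the sparse depth samples (the sample positions and depths are hypothetical):

```python
import numpy as np
from scipy.interpolate import griddata

# Sparse depth samples at the determined feature points (hypothetical).
points = np.array([[10, 10], [10, 90], [90, 10], [90, 90], [50, 50]])  # (x, y)
depths = np.array([7300.0, 7250.0, 7400.0, 7350.0, 7100.0])            # Z, mm

# Interpolate onto a dense grid to obtain a full depth map.
xs, ys = np.meshgrid(np.arange(100), np.arange(100))
depth_map = griddata(points, depths, (xs, ys), method="cubic")
# Cubic interpolation leaves NaN outside the convex hull of the samples;
# fill those pixels with nearest-neighbor values.
nearest = griddata(points, depths, (xs, ys), method="nearest")
depth_map = np.where(np.isnan(depth_map), nearest, depth_map)
```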
In step S105, a target image is selected, and a target area on the image is refocused according to the depth map.
In this embodiment, the image to be refocused is selected first, and then a target area is determined from the target image for refocusing. The target area refers to an area of interest on the target image, for example the face area in a portrait, or another area with special marks. The size of the target area can be selected according to actual needs.
In one embodiment, an image with a better defocus effect is selected as the target image, that is, an image captured by a camera with a larger aperture; such an image has blurrier points and higher image resolution, so the contrast during refocusing is higher and the refocusing effect is more prominent.
For example, step S105 may include steps S1051 and S1052. In step S1051, the focal lengths corresponding to different images are compared, and the image corresponding to the maximum focal length is selected as the target image. In step S1052, the target area on the image is refocused according to the depth map.
In one embodiment, multiple target areas can be selected, and different depth information is matched for each target area of the target image in turn; each target area is refocused according to its matched depth information. Compared with focusing the whole target image at once, refocusing the target areas one by one greatly enhances focusing precision, gives better definition, and clearly improves later background-blurring effects.
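As an illustration of per-area refocusing, the sketch below keeps a target area sharp (its depth read from the depth map) and blurs the rest of the image in proportion to depth difference. This is a simplified composite for illustration only, not the implementation prescribed by the patent:

```python
import cv2
import numpy as np

def refocus(image, depth_map, target_rect, blur_ksize=21):
    """Keep target_rect sharp; blur other pixels by depth difference.

    target_rect is (x, y, w, h); the target depth is taken as the median
    depth inside the rectangle, looked up in the depth map.
    """
    x, y, w, h = target_rect
    target_depth = np.median(depth_map[y:y + h, x:x + w])
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    # Pixels near the target depth stay sharp; distant ones are blurred.
    diff = np.abs(depth_map - target_depth)
    weight = np.clip(diff / (diff.max() + 1e-6), 0.0, 1.0)[..., None]
    return (image * (1.0 - weight) + blurred * weight).astype(image.dtype)
```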
With the control method provided in this embodiment, the feature point pattern of the target object is projected; at least two images are captured according to the feature point pattern at different capture angles and different focal lengths; the position offsets between the image points of the image point groups corresponding to the same feature point in different images are obtained; more accurate depth information is then obtained according to the image point groups, the position offsets, and the focal lengths, and a depth map is generated. Precise focusing on the target area of the image is thereby achieved according to the depth map, improving the overall definition of the image and the user experience.
It should be understood that although the steps in the flowchart of FIG. 1 are displayed in sequence as indicated by the arrows, they are not necessarily executed in that order. Unless clearly stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Referring to FIG. 6, FIG. 6 is a system structure diagram of a control system for image refocusing provided by an embodiment.
The devices included in the control system of this embodiment are configured to execute the steps in the embodiment corresponding to FIG. 1; for details, please refer to FIG. 1 and the related description in the corresponding embodiment, which is not repeated here. The control system of this embodiment includes: a pattern building device 101, an image capturing device 102, an offset acquiring device 103, a depth map generating device 104, and a focusing device 105. Specifically:
The pattern building device 101 is configured to project a feature point pattern of a target object.
The image capturing device 102 is configured to capture at least two images according to the feature point pattern at different capture angles and different focal lengths.
The offset acquiring device 103 is configured to obtain image point groups corresponding to the same feature point in different images, and to obtain the position offsets between the image points in the image point groups.
The depth map generating device 104 is configured to obtain depth information of the feature points according to the image point groups, the position offsets, and the focal lengths, and to generate a depth map.
The focusing device 105 is configured to select a target image and refocus a target area on the image according to the depth map.
The image capturing device 102 cooperates with the pattern building device 101; the offset acquiring device 103 is connected to the image capturing device 102; the depth map generating device 104 is connected to the offset acquiring device 103 and the focusing device 105; and the focusing device 105 is also connected to the image capturing device 102 to select the target image. The pattern building device 101 includes but is not limited to a projector or other projection components; the image capturing device 102 includes but is not limited to multiple cameras with different apertures or one camera with multiple apertures; the offset acquiring device 103 and the depth map generating device 104 include but are not limited to an image analysis device; the focusing device 105 includes but is not limited to an image processing device.
In one embodiment, the pattern building device 101 includes a first mapping component and a projection component.
The first mapping component is configured to establish a first mapping relationship between the target object and the feature point pattern.
The projection component is configured to project, according to the first mapping relationship, the feature point pattern to the scene by using the target object.
In one embodiment, the offset acquiring device 103 includes a first selection component, a first acquiring component, and a second acquiring component.
The first selection component is configured to select at least one feature point and obtain the corresponding image point group according to the feature point.
The first acquiring component is configured to obtain the position coordinates of the center of each image point in the image point group.
The second acquiring component is configured to obtain the position offsets between the image points in the image point group according to the position coordinates.
In another embodiment, the offset acquiring device 103 further includes a second mapping component.
The second mapping component is configured to establish a second mapping relationship between feature points and image points.
In one embodiment, the focusing device 105 includes a second selection component and a focusing component.
The second selection component is configured to compare the focal lengths corresponding to different images and select the image corresponding to the maximum focal length as the target image. The second selection component includes but is not limited to an image processor.
The focusing component is configured to refocus the target area on the image according to the depth map. The focusing component includes but is not limited to an image adjuster.
The control system provided in this embodiment includes a pattern building device, an image capturing device, an offset acquiring device, a depth map generating device, and a focusing device. The pattern building device projects the feature point pattern of the target object; the image capturing device captures at least two images according to the feature point pattern at different capture angles and different focal lengths; the offset acquiring device then obtains the position offsets between the image points of the image point groups corresponding to the same feature point in different images; the depth map generating device obtains depth information of higher accuracy according to the image point groups, the position offsets, and the focal lengths, and generates a depth map; the focusing device focuses on the target area according to the depth map. The system can thus achieve precise focusing on the target area of the image, improve the overall definition of the image, and improve the user experience.
Each device in the above control system can be implemented in whole or in part by software, hardware, or a combination thereof. Each device can be embedded in or independent of a processor in computer equipment in hardware form, or stored in the memory of computer equipment in software form, so that the processor can invoke and execute the operations corresponding to each device.
Embodiments of this application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the steps of the control method in any of the above embodiments.
Embodiments of this application also provide a terminal device, which includes a processor configured to execute a computer program stored in a memory to implement the steps of the control method provided in each of the foregoing embodiments.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program; the program can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or the like.
Any reference to memory, storage, a database, or other media used in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which is used as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of this application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the patent scope of this application. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, and these all belong to the protection scope of this application. Therefore, the protection scope of this application patent shall be subject to the appended claims.

Claims (15)

  1. A control method for image refocusing, comprising:
    projecting a feature point pattern of a target object;
    capturing at least two images according to the feature point pattern at different capture angles and different focal lengths;
    obtaining image point groups corresponding to the same feature point in different images, and obtaining the position offsets between the image points in the image point groups;
    obtaining depth information of the feature points according to the image point groups, the position offsets, and the focal lengths, and generating a depth map;
    selecting a target image, and refocusing a target area on the image according to the depth map.
  2. The control method according to claim 1, wherein the step of projecting the feature point pattern of the target object comprises:
    establishing a first mapping relationship between the target object and the feature point pattern;
    projecting, according to the first mapping relationship, the feature point pattern to the scene by using the target object.
  3. The control method according to claim 2, wherein, in the feature point pattern, the feature points are arranged according to a preset rule.
  4. The control method according to claim 3, wherein the preset regular arrangement comprises feature points arranged at regular intervals in rows and columns, at regular intervals in rows, and at regular intervals in columns.
  5. The control method according to claim 1, wherein the step of obtaining image point groups corresponding to the same feature point in different images and obtaining the position offsets between the image points in the image point groups comprises:
    selecting at least one feature point, and obtaining the corresponding image point group according to the feature point;
    obtaining the position coordinates of the center of each image point in the image point group;
    obtaining the position offsets between the image points in the image point group according to the position coordinates.
  6. The control method according to claim 5, wherein the step of obtaining image point groups corresponding to the same feature point in different images and obtaining the position offsets between the image points in the image point groups further comprises:
    establishing a second mapping relationship between feature points and image points.
  7. The control method according to claim 6, wherein the step of establishing the second mapping relationship between feature points and image points is specifically:
    establishing a mapping relationship table between feature points and image points, and fitting the second mapping relationship between feature points and image points according to the mapping relationship table.
  8. The control method according to claim 1, wherein the step of selecting the target image and refocusing the target area on the image according to the depth map comprises:
    comparing the focal lengths corresponding to different images, and selecting the image corresponding to the maximum focal length as the target image;
    refocusing the target area on the image according to the depth map.
  9. A control system for image refocusing, comprising:
    a pattern building device, configured to project a feature point pattern of a target object;
    an image capturing device, configured to capture at least two images according to the feature point pattern at different capture angles and different focal lengths;
    an offset acquiring device, configured to obtain image point groups corresponding to the same feature point in different images, and to obtain the position offsets between the image points in the image point groups;
    a depth map generating device, configured to obtain depth information of the feature points according to the image point groups, the position offsets, and the focal lengths, and to generate a depth map;
    a focusing device, configured to select a target image and refocus a target area on the image according to the depth map.
  10. The control system according to claim 9, wherein the pattern building device comprises:
    a first mapping component, configured to establish a first mapping relationship between the target object and the feature point pattern;
    a projection component, configured to project, according to the first mapping relationship, the feature point pattern to the scene by using the target object.
  11. The control system according to claim 10, wherein the projection component comprises a projector.
  12. The control system according to claim 9, wherein the image capturing device comprises multiple cameras with different apertures or one camera with multiple apertures.
  13. The control system according to claim 9, wherein the offset acquiring device comprises:
    a first selection component, configured to select at least one feature point and obtain the corresponding image point group according to the feature point;
    a first acquiring component, configured to obtain the position coordinates of the center of each image point in the image point group;
    a second acquiring component, configured to obtain the position offsets between the image points in the image point group according to the position coordinates.
  14. The control system according to claim 13, wherein the offset acquiring device further comprises:
    a second mapping component, configured to establish a second mapping relationship between feature points and image points.
  15. The control system according to claim 9, wherein the focusing device comprises:
    a second selection component, configured to compare the focal lengths corresponding to different images and select the image corresponding to the maximum focal length as the target image;
    a focusing component, configured to refocus the target area on the image according to the depth map.
PCT/CN2019/071532 2019-01-14 2019-01-14 Control method and system for image refocusing WO2020146965A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/071532 WO2020146965A1 (zh) 2019-01-14 2019-01-14 Control method and system for image refocusing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/071532 WO2020146965A1 (zh) 2019-01-14 2019-01-14 Control method and system for image refocusing

Publications (1)

Publication Number Publication Date
WO2020146965A1 true WO2020146965A1 (zh) 2020-07-23

Family

ID=71613470

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/071532 WO2020146965A1 (zh) 2019-01-14 2019-01-14 Control method and system for image refocusing

Country Status (1)

Country Link
WO (1) WO2020146965A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160266467A1 (en) * 2015-03-10 2016-09-15 Qualcomm Incorporated Search range extension for depth assisted autofocus
CN107924104A (zh) * 2015-08-18 2018-04-17 英特尔公司 Depth sensing auto focus multiple camera system
CN107527336A (zh) * 2016-06-22 2017-12-29 北京疯景科技有限公司 Lens relative position calibration method and device
CN106412426A (zh) * 2016-09-24 2017-02-15 上海大学 All-focus photography apparatus and method
CN107133982A (zh) * 2017-04-28 2017-09-05 广东欧珀移动通信有限公司 Depth map construction method and device, photographing apparatus, and terminal device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053406A (zh) * 2020-08-25 2020-12-08 杭州零零科技有限公司 Imaging device parameter calibration method and apparatus, and electronic device
CN112053406B (zh) 2020-08-25 2024-05-10 杭州零零科技有限公司 Imaging device parameter calibration method and apparatus, and electronic device
CN117880630A (zh) * 2024-03-13 2024-04-12 杭州星犀科技有限公司 Focus depth acquisition method, focus depth acquisition system, and terminal
CN117880630B (zh) 2024-03-13 2024-06-07 杭州星犀科技有限公司 Focus depth acquisition method, focus depth acquisition system, and terminal

Similar Documents

Publication Publication Date Title
US11039121B2 (en) Calibration apparatus, chart for calibration, chart pattern generation apparatus, and calibration method
US9946955B2 (en) Image registration method
CN108432230B (zh) An imaging device and a method for displaying an image of a scene
US20160269620A1 (en) Phase detection autofocus using subaperture images
US11282232B2 (en) Camera calibration using depth data
WO2016155110A1 (zh) Method and system for correcting image perspective distortion
US10692262B2 (en) Apparatus and method for processing information of multiple cameras
CN112215880B (zh) Image depth estimation method and apparatus, electronic device, and storage medium
TW201340035A (zh) Method of combining images
WO2018001252A1 (zh) Projection unit, and photographing apparatus comprising same, processor, and imaging device
CN109495733B (zh) Three-dimensional image reconstruction method, apparatus, and non-transitory computer-readable storage medium thereof
EP3144894A1 (en) Method and system for calibrating an image acquisition device and corresponding computer program product
KR102335167B1 (ko) Image photographing apparatus and photographing method thereof
WO2020146965A1 (zh) Control method and system for image refocusing
KR20220121533A (ko) Image restoration method and apparatus for restoring an image acquired through an array camera
WO2017096859A1 (zh) Photo processing method and apparatus
US10560648B2 (en) Systems and methods for rolling shutter compensation using iterative process
US11166005B2 (en) Three-dimensional information acquisition system using pitching practice, and method for calculating camera parameters
JP5925109B2 (ja) Image processing apparatus, control method therefor, and control program
CN111292380A (zh) Image processing method and device
CN109600552B (zh) Control method and system for image refocusing
CN108648238B (zh) Virtual character driving method and device, electronic device, and storage medium
KR102430726B1 (ko) Apparatus and method for processing information of multiple cameras
CN110796596A (zh) Image stitching method, imaging device, and panoramic imaging system
CN111080689B (zh) Method and device for determining a face depth map

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19910400

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19910400

Country of ref document: EP

Kind code of ref document: A1