WO2017152529A1 - Method and system for determining a reference plane - Google Patents
Method and system for determining a reference plane
- Publication number
- WO2017152529A1 (PCT/CN2016/085251)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- reference plane
- planar
- plane
- image
- axis
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/564—Depth or shape recovery from multiple images from contours
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/162—Segmentation; Edge detection involving graph-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20072—Graph-based image processing
Definitions
- the present invention relates to the field of display technologies, and in particular, to a method and a system for determining a reference plane.
- Augmented Reality is a new technology that seamlessly integrates real-world information with virtual world information.
- augmented reality technology applies virtual information to the real world, so that the real environment and the virtual object are superimposed in the same picture or the same space in real time. Therefore, augmented reality technology not only displays the information of the real world, but also displays the information of the virtual world at the same time.
- the two kinds of information complement each other, are superimposed, and are perceived by human senses, so that people can obtain a sensory experience beyond reality.
- the optical see-through augmented reality system has the advantages of simplicity, high resolution, and no visual offset.
- to integrate virtual objects with the real scene, existing optical see-through augmented reality systems either need to adjust the lens angle at all times or need to set a calibration position manually, so that virtual objects can be arranged at appropriate positions in the real scene.
- the above method causes the virtual object to be difficult to match with the real scene in real time, thereby affecting the user experience.
- embodiments of the present invention provide a method and a system for determining a reference plane, which can easily match a virtual object with a real scene in real time, improve the surreal sensory experience of the user, and are suitable for use in a portable device.
- An aspect of the invention provides a method for determining a reference plane, comprising:
- performing edge extraction on the depth image to form an edge image, the edge image including a plurality of planar graphics; and
- a planar pattern among the edge images is filtered to determine a reference plane.
- the determining method of the reference plane further includes:
- a reference coordinate system is formed according to the reference plane.
- the step of performing edge extraction on the depth image to form an edge image includes:
- Edge extraction is performed on the binary image to form an edge image.
- the step of filtering the planar graphics in the edge image to determine the reference plane comprises:
- a planar pattern among the first set of planar graphics is filtered to determine a reference plane.
- the step of filtering the planar graphic in the first planar graphic set to determine the reference plane comprises:
- a planar pattern among the second set of planar graphics is filtered to determine a reference plane.
- the step of filtering the planar graphic in the second planar graphic set to determine the reference plane comprises:
- the plane figure whose center point is closest to the lower part of the edge image is selected from the second set of plane patterns as the reference plane.
- the reference coordinate system includes a first axis, a second axis, and a third axis, wherein the first axis, the second axis, and the third axis are perpendicular to each other, and the step of forming the reference coordinate system according to the reference plane includes:
- the first axis is perpendicular to the reference plane, and the second axis and the third axis are disposed within the reference plane.
- the step of acquiring coordinates of at least three points within the reference plane comprises:
- the determining method of the reference plane further includes:
- the step of forming a reference coordinate system according to the reference plane includes:
- a reference coordinate system is formed according to the reference plane and the reference object plane, and an origin of the reference coordinate system is disposed within the reference object plane.
- Another aspect of the present invention provides a determination system of a reference plane, comprising:
- a first acquiring unit configured to acquire a depth image
- a first extracting unit configured to perform edge extraction on the depth image to form an edge image, where the edge image includes a plurality of planar graphics
- a first screening unit configured to filter a planar graphic among the edge images to determine a reference plane.
- the determining system of the reference plane further includes:
- a first forming unit configured to form a reference coordinate system according to the reference plane.
- the first extracting unit includes:
- a first acquiring module configured to acquire a gradient change rate of the depth image according to a preset gradient algorithm
- a first forming module configured to form a binary image according to the gradient change rate
- the first extraction module is configured to perform edge extraction on the binary image to form an edge image.
- the first screening unit includes:
- a first screening module configured to filter a planar image whose image depth value is decremented from a lower portion to an upper portion to form a first planar graphic set
- a second screening module configured to filter the planar graphics in the first planar graphic set to determine the reference plane.
- the second screening module includes:
- a first screening sub-module configured to filter a planar graphic having a planar graphic area larger than 15% of an edge image area to form a second planar graphic set
- a second screening sub-module configured to filter a planar graphic in the second planar graphic set to determine a reference plane.
- the second screening submodule includes:
- a third screening sub-module configured to select, from the second set of planar graphics, the planar figure whose center point is closest to the lower part of the edge image as the reference plane.
- the reference coordinate system includes a first axis, a second axis, and a third axis, wherein the first axis, the second axis, and the third axis are perpendicular to each other, and the first forming unit includes:
- a second acquiring module configured to acquire coordinates of at least three points within the reference plane
- a second forming module configured to form a reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis being disposed within the reference plane.
- the second obtaining module includes:
- a first acquisition submodule for acquiring a largest square within the reference plane
- a second obtaining submodule configured to acquire coordinates of four vertices of the largest square.
- the determining system of the reference plane further includes:
- a second screening unit configured to screen a planar image among the edge images to determine a reference object plane, the reference object plane being parallel to the reference plane;
- the first forming unit includes:
- a third forming module configured to form a reference coordinate system according to the reference plane and the reference object plane, where an origin of the reference coordinate system is disposed within the reference object plane.
- the determining method of the reference plane comprises: acquiring a depth image; performing edge extraction on the depth image to form an edge image comprising a plurality of planar graphics; and screening the planar graphics of the edge image to determine a reference plane.
- the technical solution provided by the embodiments of the present invention can determine a reference plane in the real scene, establish a virtual coordinate system based on the reference plane, and finally integrate the virtual object with the real scene. Therefore, the technical solution provided by the present invention can easily match a virtual object with a real scene in real time, improve the surreal sensory experience of the user, and is suitable for use in a portable device such as wearable glasses.
- FIG. 1 is a flowchart of a method for determining a reference plane according to Embodiment 1 of the present invention
- FIG. 2 is a depth image according to Embodiment 1 of the present invention.
- FIG. 3 is a binary image according to Embodiment 1 of the present invention.
- FIG. 4 is an edge image according to Embodiment 1 of the present invention.
- FIG. 5 is a schematic diagram of a reference plane according to Embodiment 1 of the present invention.
- FIG. 6 is a schematic diagram of a maximum inscribed square according to Embodiment 1 of the present invention.
- FIG. 7 is a schematic plan view of a reference object according to Embodiment 1 of the present invention.
- FIG. 8 is a schematic functional block diagram of a system for determining a reference plane according to Embodiment 2 of the present invention.
- FIG. 9 is a schematic functional block diagram of an example of a determination system of a reference plane according to Embodiment 2 of the present invention.
- FIG. 1 is a flowchart of a method for determining a reference plane according to Embodiment 1 of the present invention. As shown in FIG. 1, the method for determining the reference plane includes the following steps 1001 to 1003.
- Step 1001 Acquire a depth image.
- FIG. 2 is a depth image provided by Embodiment 1 of the present invention. As shown in FIG. 2, this embodiment can acquire a depth image of an office through a depth camera.
- the depth image described in this embodiment may also be referred to as a depth of field image.
- Step 1002 Perform edge extraction on the depth image to form an edge image, where the edge image includes a plurality of planar graphics.
- the step of performing edge extraction on the depth image to form an edge image may include: acquiring a gradient change rate of the depth image according to a preset gradient algorithm; forming a binary value according to the gradient change rate An image; and performing edge extraction on the binary image to form an edge image.
- FIG. 3 is a binary image provided by Embodiment 1 of the present invention. Since edge extraction cannot be performed directly on the depth image, in the present embodiment the depth image is first converted into a binary image, as shown in FIG. 3, and edge extraction is then performed on the binary image, thereby forming an edge image.
- a binary image means that each pixel of the image takes only one of two possible values or gray levels. In the depth image, the gradient of one plane along the viewing direction is uniform. Therefore, the gradient change rate of the depth image may be calculated according to a preset gradient algorithm, for example the Sobel algorithm; pixels with the same gradient change rate are set to black, and pixels with an uneven gradient change rate are set to white, thereby forming a binary image.
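As a rough, non-authoritative sketch of this step (the patent publishes no code), the following Python assumes a toy depth map stored as nested lists, applies 3x3 Sobel kernels as the "preset gradient algorithm", and marks pixels whose gradient change rate matches the pixel above as black (0) and all others as white (1); the kernel choice and the tolerance are illustrative assumptions.

```python
def sobel_gradients(depth):
    """Per-pixel (gx, gy) gradients of a 2-D depth map via 3x3 Sobel kernels."""
    h, w = len(depth), len(depth[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    gx = [[0.0] * w for _ in range(h)]
    gy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    v = depth[y + dy][x + dx]
                    gx[y][x] += kx[dy + 1][dx + 1] * v
                    gy[y][x] += ky[dy + 1][dx + 1] * v
    return gx, gy

def binarize(depth, tol=1e-6):
    """Black (0) where the gradient change rate is locally uniform (matches the
    pixel above), white (1) elsewhere -- a crude stand-in for the uniform-gradient
    test described above."""
    gx, gy = sobel_gradients(depth)
    h, w = len(depth), len(depth[0])
    out = [[1] * w for _ in range(h)]
    for y in range(2, h - 1):
        for x in range(1, w - 1):
            same_rate = (abs(gy[y][x] - gy[y - 1][x]) < tol
                         and abs(gx[y][x] - gx[y - 1][x]) < tol)
            if same_rate:
                out[y][x] = 0
    return out

# A synthetic linear depth ramp (a plane receding upward) binarizes to black
# in its interior, mimicking the uniform-gradient desktop of the embodiment.
ramp = [[float(r)] * 8 for r in range(8)]
mask = binarize(ramp)
```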
- FIG. 4 is an edge image according to Embodiment 1 of the present invention. Connecting pixels having the same gradient change rate forms the edges of the planar patterns; thus, by performing edge extraction on the binary image, an edge image can be formed, as shown in FIG. 4.
- the edge image includes a plurality of planar graphics. For example, the planar image 10 at the intermediate position is formed by the edge contour of the table top.
- Step 1003 Filter a planar graphic among the edge images to determine a reference plane.
- FIG. 5 is a schematic diagram of a reference plane according to Embodiment 1 of the present invention. How to screen the planar figures in an edge image to determine a reference plane is described below with reference to FIG. 5.
- the edge image includes a plurality of plane graphics, such as: the desktop of the desk, which is an upward-facing horizontal plane; the ceiling surface, which is a downward-facing horizontal plane; the wall surface, which is a vertical plane; and the side of the desk, which is also a vertical plane. The desired reference plane is a planar pattern of an upward-facing horizontal plane, e.g., the desktop.
- the image depth value depends on the device that captures the depth image.
- in some devices, the farther the target is from the camera, the smaller the image depth value; in other devices, the farther the target is from the camera, the larger the image depth value; and in still other devices, a specific mapping relationship is used to convert the distance between the target and the camera into an image depth value.
- in this embodiment, the farther the target is from the camera of the device, the smaller the image depth value. Therefore, in the image, the depth values of a planar pattern of an upward-facing horizontal plane decrease from bottom to top (the depth value decreases from near to far along the user's viewing direction); for example, the depth value of the desktop decreases uniformly from near to far in the user's viewing direction, thereby forming the planar figure 10.
- the step of filtering the planar graphics in the edge image to determine the reference plane may include: screening the planar graphics whose image depth values decrease from the lower portion to the upper portion to form a first planar graphic set; and filtering the planar graphics in the first planar graphic set to determine the reference plane. In this way, the present embodiment can use the gradient change rate of each plane to eliminate planes whose variation trend does not meet the above requirement, for example, the ceiling surface and the side of the desk.
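Under the device convention of this embodiment (farther means a smaller depth value), the first screening step can be sketched as follows; the `(row, col, depth)` sample format and the strict monotonicity test are hypothetical simplifications, not the patented implementation.

```python
def mean(xs):
    return sum(xs) / len(xs)

def is_upward_plane(region):
    """region: list of (row, col, depth) samples for one planar figure.
    Image rows grow downward, so an upward-facing horizontal plane (with
    'farther = smaller depth value') has depth values that decrease from the
    lower part of the image to the upper part, i.e. the per-row mean depth
    increases as the row index grows."""
    rows = sorted({r for r, _, d in region})
    profile = [mean([d for rr, _, d in region if rr == r]) for r in rows]
    return all(a < b for a, b in zip(profile, profile[1:]))

def first_screening(regions):
    """Form the first planar graphic set: keep only upward-facing planes."""
    return [reg for reg in regions if is_upward_plane(reg)]

# Hypothetical figures: a desktop whose depth shrinks toward the top of the
# image passes; a vertical wall with constant depth does not.
desktop = [(r, c, r + 1.0) for r in range(5) for c in range(3)]
wall = [(r, c, 5.0) for r in range(5) for c in range(3)]
```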
- for devices with other depth conventions, the corresponding manner of screening for the first planar graphic set can be easily conceived with reference to the above example.
- This embodiment selects a plane that is as large as possible and horizontal as a reference plane for projecting a virtual object.
- the step of filtering the planar graphic in the first planar graphic set to determine the reference plane comprises: screening a planar graphic having a planar graphic area larger than 15% of an area of the entire edge image to form a second set of planar graphics; and screening the planar graphics of the second set of planar graphics to determine a reference plane.
- only planar patterns whose area is larger than 15% of the entire image area are retained as candidate reference planes, so that planar patterns with an excessively small area can be excluded.
- the area of the planar graphic 10 of the desktop is greater than 15% of the entire image area, so the planar graphic 10 of the desktop is selected as the reference plane.
- the step of filtering the planar graphics in the second planar graphic set to determine the reference plane comprises: selecting, from the second planar graphic set, the planar figure whose center point is closest to the lower part of the edge image as the reference plane.
- that is, among the planar patterns, the one whose edge-contour center point lies nearest the lower portion of the entire image in the user's viewing direction may be selected.
- some of the ceiling and walls can be excluded.
- the planar pattern 10 is selected as the reference plane 20, as shown in FIG. 5.
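The two remaining screening steps (area above 15% of the image, then the center point nearest the image's lower edge) can be combined in a small sketch; representing each planar figure as a set of pixel coordinates is an assumption made for illustration.

```python
def screen_reference_plane(regions, image_h, image_w, min_frac=0.15):
    """regions: list of pixel sets {(row, col), ...}, one per candidate figure.
    Keep figures covering more than min_frac of the image area, then pick the
    figure whose center point is closest to the lower edge of the image
    (rows grow downward, so that is the largest mean row index)."""
    big = [r for r in regions if len(r) > min_frac * image_h * image_w]
    if not big:
        return None
    def center_row(region):
        return sum(p[0] for p in region) / len(region)
    return max(big, key=center_row)

# Hypothetical 10x10 edge image: a low tabletop, a high ceiling patch,
# and a tiny blob that fails the area test.
tabletop = {(r, c) for r in range(6, 9) for c in range(7)}  # 21 px, low
ceiling = {(r, c) for r in range(0, 3) for c in range(7)}   # 21 px, high
tiny = {(0, 8), (0, 9)}                                     # 2 px, excluded
```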
- the determining method of the reference plane further includes: forming a reference coordinate system according to the reference plane.
- the reference coordinate system includes a first axis, a second axis, and a third axis, wherein the first axis, the second axis, and the third axis are perpendicular to each other, and the step of forming the reference coordinate system according to the reference plane includes: acquiring coordinates of at least three points within the reference plane; and forming a reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis being disposed within the reference plane.
- the step of acquiring coordinates of at least three points within the reference plane comprises: acquiring a largest square within the reference plane; acquiring coordinates of four vertices of the maximum square.
- FIG. 6 is a schematic diagram of a maximum inscribed square according to Embodiment 1 of the present invention. As shown in FIG. 6, the coordinate values of the four vertices of the square 30 are set, and a reference coordinate system is established with the reference plane 20 (i.e., the plane figure 10) as a horizontal plane.
- the reference coordinate system includes a first axis, a second axis, and a third axis, the first axis, the second axis, and the third axis being perpendicular to each other.
- the first axis is perpendicular to the reference plane, and the second axis and the third axis are disposed within the reference plane.
- the largest inscribed square of the desktop's planar pattern, obtained by drawing a small square at the center of the contour and then gradually enlarging it until it intersects the contour edge, is located substantially in the central region of the table top.
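The "grow a small square from the center until it meets the contour" procedure might be sketched as below for a figure given as a 0/1 mask; using the centroid as the seed and axis-aligned growth are illustrative assumptions.

```python
def largest_inscribed_square(mask):
    """mask: 2-D list of 0/1, where 1 means inside the planar figure.
    Starting from the figure's centroid, grow a square until it would cross
    the contour, then return the four vertex coordinates (row, col)."""
    pts = [(y, x) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    cy = round(sum(p[0] for p in pts) / len(pts))
    cx = round(sum(p[1] for p in pts) / len(pts))
    h, w = len(mask), len(mask[0])

    def fits(k):
        # A half-width-k square around (cy, cx) must stay inside the figure.
        if cy - k < 0 or cx - k < 0 or cy + k >= h or cx + k >= w:
            return False
        return all(mask[y][x]
                   for y in range(cy - k, cy + k + 1)
                   for x in range(cx - k, cx + k + 1))

    half = 0
    while fits(half + 1):
        half += 1
    return [(cy - half, cx - half), (cy - half, cx + half),
            (cy + half, cx + half), (cy + half, cx - half)]

# A 7x7 figure filling the mask, and one with a one-pixel empty border.
full = [[1] * 7 for _ in range(7)]
bordered = [[0] * 7] + [[0] + [1] * 5 + [0] for _ in range(5)] + [[0] * 7]
```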
- the coordinates of these vertices form a reference coordinate system.
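Given the vertex coordinates, one way (an illustrative sketch, not the patent's exact construction) to derive three mutually perpendicular axes is via cross products: two in-plane edge directions give the plane normal as the first axis, while the second and third axes lie within the reference plane.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    n = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / n, v[1] / n, v[2] / n)

def reference_frame(p0, p1, p2):
    """Orthonormal axes from three non-collinear points in the reference plane:
    the first axis is the plane normal; the second and third lie in the plane."""
    ax2 = norm(sub(p1, p0))                       # second axis: in-plane edge
    ax1 = norm(cross(sub(p1, p0), sub(p2, p0)))   # first axis: plane normal
    ax3 = cross(ax1, ax2)                         # third axis: completes the frame
    return ax1, ax2, ax3

# Three vertices of a horizontal square give a vertical first axis.
axes = reference_frame((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```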
- the virtual laptop can be placed in the center of the desktop based on the reference coordinate system, so that the plane formed by the base coordinate points of the virtual notebook coincides with the square 30 described above. If additional virtual objects need to be added, they may be added sequentially on the reference plane 20 (e.g., near the central region of the reference plane 20).
- the determining method may further include: screening the planar graphics in the edge image to determine a reference object plane, the reference object plane being parallel to the reference plane; and the step of forming the reference coordinate system according to the reference plane may further comprise: forming the reference coordinate system according to the reference plane and the reference object plane, an origin of the reference coordinate system being disposed within the reference object plane.
- the calculation speed of each frame is within 30 ms, so that plane detection can be performed in real time.
- however, the user's viewing angle is not allowed to change greatly; otherwise the virtual object cannot be accurately placed.
- although the above-mentioned reference plane determination method accurately finds the reference plane, once the user's line of sight changes, the way the reference plane appears in the field of view changes, and the position of the center point of the reference plane changes as well. If a virtual laptop was previously placed at a square-calibrated location, the laptop will shift position as soon as the user's field of view changes. Although the notebook is still on the previous reference plane, its specific location in real space differs, which greatly affects the user's subjective experience.
- FIG. 7 is a schematic plan view of a reference object according to Embodiment 1 of the present invention.
- a reference object such as a box or a book placed in advance on the table top is detected, and the obtained planar image of the reference object is determined as the reference object plane 40.
- the method of determining the reference object plane 40 can be similar to the method of determining the reference plane described above.
- an appropriate planar graphic can be obtained from the image depth value.
- the planar patterns can be screened, for example, by setting an area threshold well below the 15% of the edge image area used for the reference plane, to obtain the reference object plane.
- the reference object plane 40 can be parallel to the reference plane 20.
- a reference coordinate system is established from the reference plane 20 and the reference object plane 40.
- the origin of the reference coordinate system can be placed within the reference object plane 40.
- the placement position of each virtual object with respect to the reference object can be determined on the reference plane 20, and then the virtual object is merged with the real scene. Since the reference object plane 40 is fixed and the origin of the reference coordinate system is disposed within the reference object plane 40, the user can be allowed to perform a certain angle of view rotation. As long as the reference object plane 40 is always present completely within the field of view, it can be used as a reference for calibration, thereby avoiding the confusion of the positions of other virtual objects.
- the determining method of the reference plane provided in this embodiment includes: acquiring a depth image; performing edge extraction on the depth image to form an edge image, the edge image including a plurality of plane graphics; and screening the planar graphics in the edge image to determine the reference plane.
- the technical solution provided by this embodiment can determine the reference plane in the real scene, establish a virtual coordinate system based on the reference plane, and finally achieve the purpose of integrating the virtual object with the real scene. Therefore, the technical solution provided by this embodiment can easily match the virtual object with the real scene in real time, improve the surreal sensory experience of the user, and is suitable for application in a portable device such as wearable glasses.
- FIG. 8 is a schematic structural diagram of a system for determining a reference plane according to Embodiment 2 of the present invention.
- the determining system of the reference plane includes: a first acquiring unit 101, configured to acquire a depth image; and a first extracting unit 102, configured to perform edge extraction on the depth image to form an edge image.
- the edge image includes a plurality of plane graphics; the first screening unit 103 is configured to filter the planar graphics among the edge images to determine a reference plane.
- the first acquisition unit 101 acquires a depth image of the office.
- the first acquisition unit 101 may be a depth camera, and the depth image described in this embodiment may also be referred to as a depth of field image.
- the first extraction unit 102 includes: a first acquiring module, configured to acquire a gradient change rate of the depth image according to a preset gradient algorithm; a first forming module, configured to form a binary image according to the gradient change rate; and a first extraction module, configured to perform edge extraction on the binary image to form an edge image.
- the present embodiment converts the depth image into a binary image, as shown in FIG. 3, and then performs edge extraction in the binary image, thereby forming an edge image.
- a binary image means that each pixel of the image takes only one of two possible values or gray levels. In the depth image, the gradient of one plane along the viewing direction is uniform. Therefore, the first acquiring module calculates the gradient change rate of the depth image according to a preset gradient algorithm, for example the Sobel algorithm, and the first forming module sets pixels with the same gradient change rate to black and pixels with an uneven gradient change rate to white, thereby forming a binary image.
- the edge image includes a plurality of planar graphics.
- the planar image 10 at the intermediate position is formed by the edge contour of the table top.
- the first screening unit includes: a first screening module, configured to filter a planar image whose image depth value is decremented from a lower portion to an upper portion to form a first planar graphic set; and a second screening module, configured to The planar graphics in the first set of planar graphics are filtered to determine a reference plane.
- in the image, the depth values of an upward-facing horizontal plane graphic decrease from bottom to top (the depth value decreases from near to far along the user's viewing direction); for example, the depth value of the desktop decreases uniformly from near to far in the user's viewing direction, thereby forming the planar figure 10.
- This embodiment can eliminate the plane whose variation trend does not meet the above requirements by the gradient change rate of the plane, for example, the side of the ceiling and the side of the desk.
- the second screening module includes: a first screening sub-module, configured to filter a planar graphic having a planar graphic area larger than 15% of an area of the entire edge image to form a second planar graphic set; and a second screening sub-module And filtering the planar graphic in the second planar graphic set to determine a reference plane.
- only planar patterns whose area is greater than 15% of the entire image area are retained as candidate reference planes, thereby eliminating planar patterns with an excessively small area.
- the area of the planar graphic 10 of the desktop is greater than 15% of the entire image area, so the planar graphic 10 of the desktop is selected as the reference plane.
- the second screening sub-module includes a third screening sub-module
- the third screening sub-module is configured to select, from the second set of planar graphics, the planar graphic whose center point is closest to the lower portion of the edge image as the reference plane.
- that is, the third screening sub-module may select the planar pattern whose center point lies nearest the lower portion of the entire image in the user's viewing direction. At this point, some of the ceiling and walls can be excluded.
- the planar pattern 10 is selected as the reference plane 20.
- the determining system of the reference plane further includes: a first forming unit configured to form a reference coordinate system according to the reference plane.
- the reference coordinate system includes a first axis, a second axis, and a third axis, wherein the first axis, the second axis, and the third axis are perpendicular to each other.
- the first forming unit includes: a second acquiring module, configured to acquire coordinates of at least three points within the reference plane; and a second forming module, configured to form a reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis being disposed within the reference plane.
- the second obtaining module includes: a first acquiring sub-module for acquiring a maximum square within the reference plane; and a second acquiring sub-module for acquiring coordinates of four vertices of the maximum square.
- at least three points are selected within the reference plane to locate the reference plane.
- This embodiment obtains four points by drawing the maximum inscribed square of the reference plane.
- the largest inscribed square of the outline of the reference plane can be obtained by drawing a small square at the center of the outline and then gradually enlarging it until it intersects the edge of the outline.
- the coordinate values of the four vertices of the square 30 are set, and a reference coordinate system is established with the reference plane 20 (i.e., the plane figure 10) as a horizontal plane.
- the reference coordinate system includes a first axis, a second axis, and a third axis, the first axis, the second axis, and the third axis being perpendicular to each other.
- the first axis is perpendicular to the reference plane, and the second axis and the third axis are disposed within the reference plane.
- the largest inscribed square of the desktop's planar pattern, obtained by drawing a small square at the center of the contour and then gradually enlarging it until it intersects the contour edge, is located substantially in the central region of the table top.
- a reference coordinate system is formed based on coordinates of at least three points on the square 30.
- the virtual laptop can be placed in the center of the desktop based on the reference coordinate system, so that the plane formed by the base coordinate points of the virtual notebook coincides with the square 30 described above. If additional virtual objects need to be added, they may be added sequentially on the reference plane 20 (e.g., near the central region of the reference plane 20).
- the calculation speed of each frame is within 30 ms, so that plane detection can be performed in real time.
- however, the user's viewing angle is not allowed to change greatly; otherwise the virtual object cannot be accurately placed.
- although the above-mentioned reference plane determination method accurately finds the reference plane, once the user's line of sight changes, the way the reference plane appears in the field of view changes, and the position of the center point of the reference plane changes as well. If a virtual laptop was previously placed at a square-calibrated location, the laptop will shift position as soon as the user's field of view changes. Although the notebook is still on the previous reference plane, its specific location in real space differs, which greatly affects the user's subjective experience.
- the determining system of the reference plane may further include: a second screening unit, configured to filter the planar graphics in the edge image to determine a reference object plane, the reference object plane being parallel to the reference plane.
- the first forming unit may further include: a third forming module, configured to form a reference coordinate system according to the reference plane and the reference object plane, where an origin of the reference coordinate system is disposed within the reference object plane.
- a reference object such as a box or a book placed in advance on the table top is detected, and a plane figure of the reference object is determined as the reference object plane 40.
- the method of determining the reference object plane 40 can be similar to the method of determining the reference plane described above.
- an appropriate planar figure can be selected according to the image depth values.
- the planar figures can be screened by setting a threshold, for example retaining figures whose area is 15% or less of the edge image area, to obtain the reference object plane.
- the reference object plane 40 can be parallel to the reference plane 20.
- a reference coordinate system is established from the reference plane 20 and the reference object plane 40.
- the origin of the reference coordinate system can be placed within the reference object plane 40.
- the placement position of each virtual object with respect to the reference object can be determined on the reference plane 20, and the virtual objects are then merged with the real scene. Since the reference object plane 40 is fixed and the origin of the reference coordinate system lies within it, the user can rotate the viewing angle to some extent: as long as the reference object plane 40 remains completely within the field of view, it serves as a calibration reference, preventing the positions of the other virtual objects from becoming confused.
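The frame construction described above can be sketched as follows. This is an illustrative reconstruction, not the patent's prescribed formula: `make_reference_frame`, its arguments, and the seed-vector trick are assumptions. The only constraints taken from the text are that the first axis is perpendicular to the reference plane, the other two axes lie within it, and the origin sits on the reference object plane.

```python
import numpy as np

def make_reference_frame(plane_normal, origin):
    """Build an orthonormal frame whose first axis is the plane normal
    and whose other two axes lie within the plane. `origin` would be a
    point on the reference object plane (hypothetical helper)."""
    n = np.asarray(plane_normal, dtype=float)
    x_axis = n / np.linalg.norm(n)            # first axis: perpendicular to the plane
    # pick any vector not parallel to the normal to seed the in-plane axes
    seed = np.array([1.0, 0.0, 0.0])
    if abs(x_axis @ seed) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    y_axis = np.cross(x_axis, seed)           # second axis: lies within the plane
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)         # third axis: completes the frame
    return np.asarray(origin, dtype=float), np.stack([x_axis, y_axis, z_axis])

# e.g. a horizontal tabletop (normal pointing up) with the origin on the
# reference object plane
origin, axes = make_reference_frame([0.0, 0.0, 1.0], [0.5, 0.2, 0.0])
```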
- the determining system of the reference plane includes: a first acquiring unit, configured to acquire a depth image; a first extracting unit, configured to perform edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and a first screening unit, configured to screen the planar figures in the edge image to determine a reference plane.
- the technical solution provided by this embodiment can determine the reference plane within a real scene, establish a virtual coordinate system based on that plane, and ultimately merge virtual objects with the real scene. It can therefore match virtual objects to the real scene easily and in real time, improving the user's immersive sensory experience, and is well suited to portable devices such as wearable glasses.
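The screening stage described above (and detailed in claims 4 to 6 below) can be sketched in a few lines of Python. The `PlaneFigure` fields and the sample candidates are hypothetical; only the three screening criteria and the 15% area threshold come from the document:

```python
from dataclasses import dataclass

@dataclass
class PlaneFigure:
    area_fraction: float            # figure area / edge-image area
    center_y: float                 # center row; larger = closer to image bottom
    depth_decreases_upward: bool    # depth falls from the lower to the upper part

def select_reference_plane(figures):
    # step 1: keep figures whose depth decreases from bottom to top
    # (characteristic of a horizontal surface such as a tabletop)
    first_set = [f for f in figures if f.depth_decreases_upward]
    # step 2: keep figures covering more than 15% of the edge image area
    second_set = [f for f in first_set if f.area_fraction > 0.15]
    # step 3: the survivor whose center is closest to the lower part of
    # the edge image is taken as the reference plane
    return max(second_set, key=lambda f: f.center_y) if second_set else None

candidates = [
    PlaneFigure(0.40, 300.0, True),   # large tabletop-like plane, low in view
    PlaneFigure(0.25, 120.0, True),   # large plane higher in the view
    PlaneFigure(0.05, 350.0, True),   # small object surface, fails area test
    PlaneFigure(0.50, 200.0, False),  # wall-like plane, fails depth test
]
ref = select_reference_plane(candidates)   # picks the first candidate
```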
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (18)
- A method for determining a reference plane, comprising: acquiring a depth image; performing edge extraction on the depth image to form an edge image, the edge image comprising a plurality of planar figures; and screening the planar figures in the edge image to determine a reference plane.
- The method for determining a reference plane according to claim 1, wherein after the step of screening the planar figures in the edge image to determine the reference plane, the method further comprises: forming a reference coordinate system according to the reference plane.
- The method for determining a reference plane according to claim 1, wherein the step of performing edge extraction on the depth image to form an edge image comprises: obtaining a gradient change rate of the depth image according to a preset gradient algorithm; forming a binary image according to the gradient change rate; and performing edge extraction on the binary image to form the edge image.
- The method for determining a reference plane according to claim 1, wherein the step of screening the planar figures in the edge image to determine the reference plane comprises: screening planar figures whose image depth values decrease from the lower part to the upper part, to form a first set of planar figures; and screening the planar figures in the first set to determine the reference plane.
- The method for determining a reference plane according to claim 4, wherein the step of screening the planar figures in the first set to determine the reference plane comprises: screening planar figures whose area is greater than 15% of the edge image area, to form a second set of planar figures; and screening the planar figures in the second set to determine the reference plane.
- The method for determining a reference plane according to claim 5, wherein the step of screening the planar figures in the second set to determine the reference plane comprises: selecting, from the second set, the planar figure whose center point is closest to the lower part of the edge image as the reference plane.
- The method for determining a reference plane according to claim 2, wherein the reference coordinate system comprises a first axis, a second axis and a third axis that are mutually perpendicular, and the step of forming the reference coordinate system according to the reference plane comprises: acquiring coordinates of at least three points within the reference plane; and forming the reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis lying within the reference plane.
- The method for determining a reference plane according to claim 7, wherein the step of acquiring coordinates of at least three points within the reference plane comprises: obtaining the largest square within the reference plane; and acquiring the coordinates of the four vertices of the largest square.
- The method for determining a reference plane according to claim 2, wherein before the step of forming the reference coordinate system according to the reference plane, the method further comprises: screening the planar figures in the edge image to determine a reference object plane, the reference object plane being parallel to the reference plane; and the step of forming the reference coordinate system according to the reference plane comprises: forming the reference coordinate system according to the reference plane and the reference object plane, the origin of the reference coordinate system being set within the reference object plane.
- A system for determining a reference plane, comprising: a first acquiring unit, configured to acquire a depth image; a first extracting unit, configured to perform edge extraction on the depth image to form an edge image, the edge image comprising a plurality of planar figures; and a first screening unit, configured to screen the planar figures in the edge image to determine a reference plane.
- The system for determining a reference plane according to claim 10, further comprising: a first forming unit, configured to form a reference coordinate system according to the reference plane.
- The system for determining a reference plane according to claim 10, wherein the first extracting unit comprises: a first acquiring module, configured to obtain a gradient change rate of the depth image according to a preset gradient algorithm; a first forming module, configured to form a binary image according to the gradient change rate; and a first extracting module, configured to perform edge extraction on the binary image to form the edge image.
- The system for determining a reference plane according to claim 10, wherein the first screening unit comprises: a first screening module, configured to screen planar figures whose image depth values decrease from the lower part to the upper part, to form a first set of planar figures; and a second screening module, configured to screen the planar figures in the first set to determine the reference plane.
- The system for determining a reference plane according to claim 13, wherein the second screening module comprises: a first screening submodule, configured to screen planar figures whose area is greater than 15% of the edge image area, to form a second set of planar figures; and a second screening submodule, configured to screen the planar figures in the second set to determine the reference plane.
- The system for determining a reference plane according to claim 14, wherein the second screening submodule comprises: a third screening submodule, configured to select, from the second set, the planar figure whose center point is closest to the lower part of the edge image as the reference plane.
- The system for determining a reference plane according to claim 11, wherein the reference coordinate system comprises a first axis, a second axis and a third axis that are mutually perpendicular, and the first forming unit comprises: a second acquiring module, configured to acquire coordinates of at least three points within the reference plane; and a second forming module, configured to form the reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis lying within the reference plane.
- The system for determining a reference plane according to claim 16, wherein the second acquiring module comprises: a first acquiring submodule, configured to obtain the largest square within the reference plane; and a second acquiring submodule, configured to acquire the coordinates of the four vertices of the largest square.
- The system for determining a reference plane according to claim 11, further comprising: a second screening unit, configured to screen the planar figures in the edge image to determine a reference object plane, the reference object plane being parallel to the reference plane; wherein the first forming unit comprises: a third forming module, configured to form the reference coordinate system according to the reference plane and the reference object plane, the origin of the reference coordinate system being set within the reference object plane.
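Claims 8 and 17 locate the largest square within the reference plane and take its four vertex coordinates. The claims do not name an algorithm; one plausible sketch, assuming the reference plane is given as a binary mask, is the classic dynamic-programming "largest square of ones":

```python
def largest_square_vertices(mask):
    """mask: 2D list of 0/1, where 1 marks a pixel of the reference plane.
    Returns (side_length, [(row, col) of the four vertices])."""
    rows, cols = len(mask), len(mask[0])
    dp = [[0] * cols for _ in range(rows)]  # dp[r][c]: largest square ending at (r, c)
    best, best_rc = 0, (0, 0)
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                dp[r][c] = 1 if r == 0 or c == 0 else (
                    1 + min(dp[r-1][c], dp[r][c-1], dp[r-1][c-1]))
                if dp[r][c] > best:
                    best, best_rc = dp[r][c], (r, c)
    r, c = best_rc                         # bottom-right corner of the square
    r0, c0 = r - best + 1, c - best + 1    # top-left corner
    return best, [(r0, c0), (r0, c), (r, c0), (r, c)]

# illustrative mask; the largest all-ones square is 3x3
mask = [
    [0, 1, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 1],
]
size, verts = largest_square_vertices(mask)
```

The four returned vertices would then supply the "at least three points" from which the reference coordinate system is formed.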
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/525,703 US10319104B2 (en) | 2016-03-09 | 2016-06-08 | Method and system for determining datum plane |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610133165.5 | 2016-03-09 | ||
CN201610133165.5A CN105825499A (zh) | 2016-03-09 | 2016-03-09 | 基准平面的确定方法和确定系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017152529A1 true WO2017152529A1 (zh) | 2017-09-14 |
Family
ID=56987608
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/085251 WO2017152529A1 (zh) | 2016-03-09 | 2016-06-08 | 基准平面的确定方法和确定系统 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10319104B2 (zh) |
CN (1) | CN105825499A (zh) |
WO (1) | WO2017152529A1 (zh) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106600638B (zh) * | 2016-11-09 | 2020-04-17 | 深圳奥比中光科技有限公司 | 一种增强现实的实现方法 |
EP3680857B1 (en) * | 2017-09-11 | 2021-04-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and apparatus, electronic device and computer-readable storage medium |
CN110827412A (zh) * | 2018-08-09 | 2020-02-21 | 北京微播视界科技有限公司 | 自适应平面的方法、装置和计算机可读存储介质 |
CN112912921B (zh) * | 2018-10-11 | 2024-04-30 | 上海科技大学 | 从深度图中提取平面的系统和方法 |
US11804015B2 (en) | 2018-10-30 | 2023-10-31 | Samsung Electronics Co., Ltd. | Methods for determining three-dimensional (3D) plane information, methods for displaying augmented reality display information and corresponding devices |
CN110215686B (zh) * | 2019-06-27 | 2023-01-06 | 网易(杭州)网络有限公司 | 游戏场景中的显示控制方法及装置、存储介质及电子设备 |
CN110675360B (zh) * | 2019-08-02 | 2022-04-01 | 杭州电子科技大学 | 一种基于深度图像的实时平面检测及提取的方法 |
CN113766147B (zh) * | 2020-09-22 | 2022-11-08 | 北京沃东天骏信息技术有限公司 | 视频中嵌入图像的方法、平面预测模型获取方法和装置 |
CN112198527B (zh) * | 2020-09-30 | 2022-12-27 | 上海炬佑智能科技有限公司 | 参考平面调整及障碍物检测方法、深度相机、导航设备 |
CN112198529B (zh) * | 2020-09-30 | 2022-12-27 | 上海炬佑智能科技有限公司 | 参考平面调整及障碍物检测方法、深度相机、导航设备 |
CN112697042B (zh) * | 2020-12-07 | 2023-12-05 | 深圳市繁维科技有限公司 | 手持式tof相机及其强适应包裹体积测量方法 |
CN114322775B (zh) * | 2022-01-06 | 2022-11-11 | 深圳威洛博机器人有限公司 | 一种机器人视觉定位系统及视觉定位方法 |
CN114943778B (zh) * | 2022-07-26 | 2023-01-13 | 广州镭晨智能装备科技有限公司 | 基准面确定方法、检测方法、装置、设备和存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0633550A2 (en) * | 1993-06-29 | 1995-01-11 | Canon Kabushiki Kaisha | Image processing method and apparatus thereof |
CN102135417A (zh) * | 2010-12-26 | 2011-07-27 | 北京航空航天大学 | 一种全自动三维特征提取方法 |
US20110211749A1 (en) * | 2010-02-28 | 2011-09-01 | Kar Han Tan | System And Method For Processing Video Using Depth Sensor Information |
CN102566827A (zh) * | 2010-12-30 | 2012-07-11 | 株式会社理光 | 虚拟触摸屏系统中对象检测方法和系统 |
CN103389042A (zh) * | 2013-07-11 | 2013-11-13 | 夏东 | 基于深度图像的地面自动检测以及场景高度计算的方法 |
US20140064602A1 (en) * | 2012-09-05 | 2014-03-06 | Industrial Technology Research Institute | Method and apparatus for object positioning by using depth images |
CN103729850A (zh) * | 2013-12-31 | 2014-04-16 | 楚天科技股份有限公司 | 一种在全景图中提取直线的方法 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2553473A1 (en) * | 2005-07-26 | 2007-01-26 | Wa James Tam | Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging |
US8223143B2 (en) * | 2006-10-27 | 2012-07-17 | Carl Zeiss Meditec, Inc. | User interface for efficiently displaying relevant OCT imaging data |
US9164577B2 (en) | 2009-12-22 | 2015-10-20 | Ebay Inc. | Augmented reality system, method, and apparatus for displaying an item image in a contextual environment |
CN102075686B (zh) * | 2011-02-10 | 2013-10-30 | 北京航空航天大学 | 一种鲁棒的实时在线摄像机跟踪方法 |
US9330490B2 (en) * | 2011-04-29 | 2016-05-03 | University Health Network | Methods and systems for visualization of 3D parametric data during 2D imaging |
WO2014020801A1 (ja) * | 2012-07-31 | 2014-02-06 | 株式会社ソニー・コンピュータエンタテインメント | 画像処理装置、画像処理方法、および画像ファイルのデータ構造 |
CN102901488B (zh) * | 2012-09-07 | 2015-12-16 | 曹欢欢 | 一种自动生成房间平面图的方法及设备 |
US9773074B2 (en) * | 2012-12-06 | 2017-09-26 | Daybreak Game Company Llc | System and method for building digital objects with blocks |
US9317962B2 (en) * | 2013-08-16 | 2016-04-19 | Indoor Technologies Ltd | 3D space content visualization system |
CN104574515B (zh) * | 2013-10-09 | 2017-10-17 | 华为技术有限公司 | 一种三维物体重建的方法、装置和终端 |
US10008027B1 (en) * | 2014-10-20 | 2018-06-26 | Henry Harlyn Baker | Techniques for determining a three-dimensional representation of a surface of an object from a set of images |
US9691177B2 (en) * | 2014-12-12 | 2017-06-27 | Umbra Software Ltd. | Techniques for automatic occluder simplification using planar sections |
CN104539925B (zh) | 2014-12-15 | 2016-10-05 | 北京邮电大学 | 基于深度信息的三维场景增强现实的方法及系统 |
CN105046710A (zh) * | 2015-07-23 | 2015-11-11 | 北京林业大学 | 基于深度图分割与代理几何体的虚实碰撞交互方法及装置 |
US9734405B2 (en) * | 2015-10-05 | 2017-08-15 | Pillar Vision, Inc. | Systems and methods for monitoring objects in athletic playing spaces |
US9741125B2 (en) * | 2015-10-28 | 2017-08-22 | Intel Corporation | Method and system of background-foreground segmentation for image processing |
CN107205113B (zh) * | 2016-03-18 | 2020-10-16 | 松下知识产权经营株式会社 | 图像生成装置、图像生成方法及计算机可读存储介质 |
- 2016
- 2016-03-09 CN CN201610133165.5A patent/CN105825499A/zh active Pending
- 2016-06-08 WO PCT/CN2016/085251 patent/WO2017152529A1/zh active Application Filing
- 2016-06-08 US US15/525,703 patent/US10319104B2/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0633550A2 (en) * | 1993-06-29 | 1995-01-11 | Canon Kabushiki Kaisha | Image processing method and apparatus thereof |
US20110211749A1 (en) * | 2010-02-28 | 2011-09-01 | Kar Han Tan | System And Method For Processing Video Using Depth Sensor Information |
CN102135417A (zh) * | 2010-12-26 | 2011-07-27 | 北京航空航天大学 | 一种全自动三维特征提取方法 |
CN102566827A (zh) * | 2010-12-30 | 2012-07-11 | 株式会社理光 | 虚拟触摸屏系统中对象检测方法和系统 |
US20140064602A1 (en) * | 2012-09-05 | 2014-03-06 | Industrial Technology Research Institute | Method and apparatus for object positioning by using depth images |
CN103389042A (zh) * | 2013-07-11 | 2013-11-13 | 夏东 | 基于深度图像的地面自动检测以及场景高度计算的方法 |
CN103729850A (zh) * | 2013-12-31 | 2014-04-16 | 楚天科技股份有限公司 | 一种在全景图中提取直线的方法 |
Also Published As
Publication number | Publication date |
---|---|
US10319104B2 (en) | 2019-06-11 |
CN105825499A (zh) | 2016-08-03 |
US20180075616A1 (en) | 2018-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017152529A1 (zh) | 基准平面的确定方法和确定系统 | |
CN109961406B (zh) | 一种图像处理的方法、装置及终端设备 | |
TWI712918B (zh) | 擴增實境的影像展示方法、裝置及設備 | |
US9036007B2 (en) | System and method for converting two dimensional to three dimensional video | |
US11004267B2 (en) | Information processing apparatus, information processing method, and storage medium for generating a virtual viewpoint image | |
US10148895B2 (en) | Generating a combined infrared/visible light image having an enhanced transition between different types of image information | |
US10360711B2 (en) | Image enhancement with fusion | |
WO2016188010A1 (zh) | 运动图像补偿方法及装置、显示装置 | |
KR20060113514A (ko) | 화상 처리 장치 및 화상 처리 방법, 프로그램, 및 기록매체 | |
US20150379720A1 (en) | Methods for converting two-dimensional images into three-dimensional images | |
WO2019076348A1 (zh) | 一种虚拟现实vr界面生成的方法和装置 | |
CN108182659A (zh) | 一种基于视点跟踪、单视角立体画的裸眼3d显示技术 | |
EP2642446B1 (en) | System and method of estimating page position | |
US20220172319A1 (en) | Camera-based Transparent Display | |
TW201630408A (zh) | 影像資料分割技術 | |
WO2023097805A1 (zh) | 显示方法、显示设备及计算机可读存储介质 | |
US20210390780A1 (en) | Augmented reality environment enhancement | |
JP2015171143A (ja) | カラーコード化された構造によるカメラ較正の方法及び装置、並びにコンピュータ可読記憶媒体 | |
CN111343445A (zh) | 动态调整深度解析度的装置及其方法 | |
WO2024055531A1 (zh) | 照度计数值识别方法、电子设备及存储介质 | |
JP2020523957A (ja) | マルチ・ビュー・コンテンツを観察するユーザに情報を提示する方法及び機器 | |
JP2006318015A (ja) | 画像処理装置および画像処理方法、画像表示システム、並びに、プログラム | |
JP7125847B2 (ja) | 3dモデル表示装置、3dモデル表示方法及び3dモデル表示プログラム | |
CN102169597B (zh) | 一种平面图像上物体的深度设置方法和系统 | |
TWI541761B (zh) | 影像處理方法及其電子裝置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 15525703 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16893176 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16893176 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.05.2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16893176 Country of ref document: EP Kind code of ref document: A1 |