WO2017152529A1 - Method and system for determining a reference plane - Google Patents

Method and system for determining a reference plane

Info

Publication number
WO2017152529A1
Authority
WO
WIPO (PCT)
Prior art keywords
reference plane
planar
plane
image
axis
Prior art date
Application number
PCT/CN2016/085251
Other languages
English (en)
French (fr)
Inventor
赵骥伯
李英杰
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to US15/525,703 (US10319104B2)
Publication of WO2017152529A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 Image analysis
            • G06T7/10 Segmentation; Edge detection
              • G06T7/12 Edge-based segmentation
              • G06T7/13 Edge detection
              • G06T7/162 Segmentation; Edge detection involving graph-based methods
            • G06T7/50 Depth or shape recovery
              • G06T7/55 Depth or shape recovery from multiple images
                • G06T7/564 Depth or shape recovery from multiple images from contours
            • G06T7/60 Analysis of geometric attributes
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
              • G06T2207/10028 Range image; Depth image; 3D point clouds
            • G06T2207/20 Special algorithmic details
              • G06T2207/20072 Graph-based image processing


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A method and a system for determining a reference plane. The method for determining a reference plane includes: acquiring a depth image (1001); performing edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures (1002); and screening the planar figures in the edge image to determine a reference plane (1003). The method and system can easily match a virtual object with a real scene in real time, improving the user's sensory experience beyond reality.

Description

METHOD AND SYSTEM FOR DETERMINING A REFERENCE PLANE

Technical Field
The present invention relates to the field of display technologies, and in particular to a method and a system for determining a reference plane.
Background
Augmented Reality (AR) is a technology that seamlessly integrates real-world information with virtual-world information. Specifically, augmented reality applies virtual information to the real world so that the real environment and virtual objects are superimposed, in real time, onto the same picture or into the same space. Augmented reality therefore presents not only the information of the real world but also, simultaneously, the information of the virtual world; the two kinds of information complement and overlay each other and are perceived by the human senses, giving people a sensory experience that goes beyond reality.
In the field of augmented reality, optical see-through augmented reality systems have the advantages of simplicity, high resolution, and absence of visual bias. However, when an existing optical see-through augmented reality system fuses a virtual object with a real scene, it must either adjust the lens angle at all times or rely on a manually set calibration position before the virtual object can be placed at an appropriate position in the real scene. This makes it difficult to match the virtual object with the real scene in real time and degrades the user experience.
Summary
To solve the above problems, embodiments of the present invention provide a method and a system for determining a reference plane, which can easily match a virtual object with a real scene in real time, improve the user's sensory experience beyond reality, and are suitable for use in portable devices.
An aspect of the present invention provides a method for determining a reference plane, comprising:
acquiring a depth image;
performing edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and
screening the planar figures in the edge image to determine a reference plane.
Optionally, after the step of screening the planar figures in the edge image to determine a reference plane, the method further comprises:
forming a reference coordinate system according to the reference plane.
Optionally, the step of performing edge extraction on the depth image to form an edge image comprises:
acquiring a gradient change rate of the depth image according to a preset gradient algorithm;
forming a binary image according to the gradient change rate; and
performing edge extraction on the binary image to form the edge image.
Optionally, the step of screening the planar figures in the edge image to determine a reference plane comprises:
screening out planar figures whose image depth values decrease from the lower portion to the upper portion, to form a first set of planar figures; and
screening the planar figures in the first set to determine the reference plane.
Optionally, the step of screening the planar figures in the first set to determine the reference plane comprises:
screening out planar figures whose area is greater than 15% of the area of the edge image, to form a second set of planar figures; and
screening the planar figures in the second set to determine the reference plane.
Optionally, the step of screening the planar figures in the second set to determine the reference plane comprises:
selecting, from the second set, the planar figure whose center point is closest to the lower portion of the edge image, as the reference plane.
Optionally, the reference coordinate system includes a first axis, a second axis, and a third axis that are mutually perpendicular, and the step of forming a reference coordinate system according to the reference plane comprises:
acquiring coordinates of at least three points within the reference plane; and
forming the reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis lying within the reference plane.
Optionally, the step of acquiring coordinates of at least three points within the reference plane comprises:
acquiring the largest square within the reference plane; and
acquiring coordinates of the four vertices of the largest square.
Optionally, before the step of forming a reference coordinate system according to the reference plane, the method further comprises:
screening the planar figures in the edge image to determine a reference-object plane, the reference-object plane being parallel to the reference plane; and
the step of forming a reference coordinate system according to the reference plane comprises:
forming the reference coordinate system according to the reference plane and the reference-object plane, an origin of the reference coordinate system being set within the reference-object plane.
Another aspect of the present invention provides a system for determining a reference plane, comprising:
a first acquiring unit, configured to acquire a depth image;
a first extracting unit, configured to perform edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and
a first screening unit, configured to screen the planar figures in the edge image to determine a reference plane.
Optionally, the system further comprises:
a first forming unit, configured to form a reference coordinate system according to the reference plane.
Optionally, the first extracting unit comprises:
a first acquiring module, configured to acquire a gradient change rate of the depth image according to a preset gradient algorithm;
a first forming module, configured to form a binary image according to the gradient change rate; and
a first extracting module, configured to perform edge extraction on the binary image to form the edge image.
Optionally, the first screening unit comprises:
a first screening module, configured to screen out planar figures whose image depth values decrease from the lower portion to the upper portion, to form a first set of planar figures; and
a second screening module, configured to screen the planar figures in the first set to determine the reference plane.
Optionally, the second screening module comprises:
a first screening submodule, configured to screen out planar figures whose area is greater than 15% of the area of the edge image, to form a second set of planar figures; and
a second screening submodule, configured to screen the planar figures in the second set to determine the reference plane.
Optionally, the second screening submodule comprises:
a third screening submodule, configured to select, from the second set, the planar figure whose center point is closest to the lower portion of the edge image, as the reference plane.
Optionally, the reference coordinate system includes a first axis, a second axis, and a third axis that are mutually perpendicular, and the first forming unit comprises:
a second acquiring module, configured to acquire coordinates of at least three points within the reference plane; and
a second forming module, configured to form the reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis lying within the reference plane.
Optionally, the second acquiring module comprises:
a first acquiring submodule, configured to acquire the largest square within the reference plane; and
a second acquiring submodule, configured to acquire coordinates of the four vertices of the largest square.
Optionally, the system further comprises:
a second screening unit, configured to screen the planar figures in the edge image to determine a reference-object plane, the reference-object plane being parallel to the reference plane; and
the first forming unit comprises:
a third forming module, configured to form the reference coordinate system according to the reference plane and the reference-object plane, an origin of the reference coordinate system being set within the reference-object plane.
The present invention has the following beneficial effects:
In the method and system for determining a reference plane provided by the embodiments of the present invention, the method comprises: acquiring a depth image; performing edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and screening the planar figures in the edge image to determine a reference plane. The technical solution provided by the embodiments of the present invention can determine a reference plane within a real scene, establish a virtual coordinate system based on that reference plane, and finally fuse virtual objects with the real scene. It can therefore easily match a virtual object with a real scene in real time, improves the user's sensory experience beyond reality, and is suitable for use in portable devices such as wearable lenses.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for determining a reference plane according to Embodiment 1 of the present invention;
FIG. 2 is a depth image according to Embodiment 1 of the present invention;
FIG. 3 is a binary image according to Embodiment 1 of the present invention;
FIG. 4 is an edge image according to Embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of a reference plane according to Embodiment 1 of the present invention;
FIG. 6 is a schematic diagram of a largest inscribed square according to Embodiment 1 of the present invention;
FIG. 7 is a schematic diagram of a reference-object plane according to Embodiment 1 of the present invention;
FIG. 8 is a schematic functional block diagram of a system for determining a reference plane according to Embodiment 2 of the present invention; and
FIG. 9 is a schematic functional block diagram of an example of the system for determining a reference plane according to Embodiment 2 of the present invention.
Detailed Description
To enable those skilled in the art to better understand the technical solution of the present invention, the method and system for determining a reference plane provided by the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
FIG. 1 is a flowchart of a method for determining a reference plane according to Embodiment 1 of the present invention. As shown in FIG. 1, the method for determining a reference plane includes the following steps 1001 to 1003.
Step 1001: Acquire a depth image.
FIG. 2 is a depth image according to Embodiment 1 of the present invention. As shown in FIG. 2, in this embodiment a depth image of an office can be acquired with a depth camera. The depth image described in this embodiment may also be referred to as a depth-of-field image.
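As a purely illustrative aside (not part of the patent), the depth image acquired in this step can be represented as a single-channel array whose pixel values encode distance. The sketch below, in Python with OpenCV, loads such an image under the assumption that the depth camera exports a hypothetical 16-bit PNG named depth.png:

```python
import cv2
import numpy as np

# "depth.png" is a hypothetical export; many depth cameras can save
# 16-bit PNGs in which each pixel holds a distance value, not a color.
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)
assert depth is not None and depth.ndim == 2, "expected a single-channel depth image"
depth = depth.astype(np.float32)  # work in float for the gradient math below
print("depth image:", depth.shape, "range:", depth.min(), "-", depth.max())
```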
Step 1002: Perform edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures.
In this embodiment, the step of performing edge extraction on the depth image to form an edge image may include: acquiring a gradient change rate of the depth image according to a preset gradient algorithm; forming a binary image according to the gradient change rate; and performing edge extraction on the binary image to form the edge image.
FIG. 3 is a binary image according to Embodiment 1 of the present invention. Since a depth image differs from a binary image, edge extraction cannot be performed on it directly. Therefore, in this embodiment, the depth image is first converted into a binary image, as shown in FIG. 3, and edge extraction is then performed on the binary image to form the edge image. A binary image is an image in which every pixel has only two possible values or gray levels. In a depth image, the gradient of a plane along the viewing direction changes uniformly. Therefore, the gradient change rate of the depth image can be calculated according to a preset gradient algorithm, for example the Sobel algorithm; pixels with a uniform gradient change rate are set to black and pixels with a non-uniform gradient change rate are set to white, thereby forming the binary image.
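The embodiment names the Sobel algorithm but does not fix the uniformity test or a threshold. The following hedged sketch reads the "gradient change rate" as the second derivative along the vertical image direction (a plane has a constant first-order gradient, so its second derivative is near zero); the tolerance tol is an assumed parameter, not a value from the patent:

```python
import cv2
import numpy as np

def binarize_by_gradient_uniformity(depth: np.ndarray, tol: float = 0.5) -> np.ndarray:
    """Set pixels with a uniform depth gradient to black (0) and pixels
    with a non-uniform gradient rate to white (255)."""
    # First-order vertical gradient via the Sobel operator.
    gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    # Second derivative: how fast the gradient itself changes. On a
    # planar region this is close to zero.
    gyy = cv2.Sobel(gy, cv2.CV_32F, 0, 1, ksize=3)
    return np.where(np.abs(gyy) < tol, 0, 255).astype(np.uint8)
```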
FIG. 4 is an edge image according to Embodiment 1 of the present invention. Connecting pixels that share the same gradient change rate forms the edges of planar figures, so an edge image can be formed by performing edge extraction on the binary image, as shown in FIG. 4. The edge image includes a plurality of planar figures. For example, the planar figure 10 in the middle is formed by the edge contour of the table top.
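The patent does not prescribe a particular edge-extraction routine, so in the sketch below OpenCV's contour finder stands in for "connecting pixels with the same gradient change rate"; it returns one closed outline per uniform-gradient region:

```python
import cv2

def extract_planar_figures(binary):
    """Trace the closed outlines of the black (uniform-gradient) regions.
    The image is inverted first because findContours follows white
    connected components."""
    inverted = cv2.bitwise_not(binary)
    contours, _ = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```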
Step 1003: Screen the planar figures in the edge image to determine a reference plane.
FIG. 5 is a schematic diagram of a reference plane according to Embodiment 1 of the present invention. How the planar figures in the edge image are screened to determine the reference plane is described below with reference to FIG. 5.
As shown in FIG. 5, the edge image includes a plurality of planar figures, for example the desk top (an upward-facing horizontal plane), the ceiling surface (a downward-facing horizontal plane), the walls (vertical planes), and the sides of the desk (vertical planes). The desired reference plane (for example, the planar figure of an upward-facing horizontal plane) can be screened out using the image depth values. The image depth value depends on the device that captures the depth image: in some devices, the farther a target is from the device's camera, the smaller its image depth value; in other devices, the farther a target is from the camera, the larger its image depth value; and in still other devices, the distance between the target and the camera is converted into an image depth value according to a specific mapping. In this embodiment, it is assumed that the farther a target is from the device's camera, the smaller its image depth value. Consequently, the gradient of the planar figure of an upward-facing horizontal plane decreases from bottom to top in the image (the depth value decreases from near to far along the user's viewing direction); for example, the depth value of the desk top decreases uniformly from near to far along the user's viewing direction, forming the planar figure 10. In this case, the step of screening the planar figures in the edge image to determine a reference plane may include: screening out planar figures whose image depth values decrease from the lower portion to the upper portion, to form a first set of planar figures; and screening the planar figures in the first set to determine the reference plane. Using the gradient change rate of a plane, this embodiment can thus exclude planes whose trend does not meet the above requirement, for example part of the ceiling and the sides of the desk. Naturally, for other devices in which the distance between target and camera relates differently to the image depth value, a corresponding way of screening out the first set of planar figures is readily conceived from the above example.
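A minimal sketch of this first screening step, assuming the depth convention above (farther means a smaller depth value); the band-averaging test and the number of bands are illustrative choices that the patent does not specify:

```python
import cv2
import numpy as np

def depth_decreases_upward(depth, contour, bands=8):
    """True if the region's mean depth shrinks from its bottom rows to
    its top rows (image row 0 is at the top)."""
    mask = np.zeros(depth.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    x, y, w, h = cv2.boundingRect(contour)
    means = []
    for i in range(bands):
        top, bottom = y + i * h // bands, y + (i + 1) * h // bands
        sel = mask[top:bottom, x:x + w] > 0
        if sel.any():
            means.append(depth[top:bottom, x:x + w][sel].mean())
    if len(means) < 2:
        return False
    # Read top-to-bottom, the band means must be non-decreasing for the
    # depth to decrease from the lower portion to the upper portion.
    return all(a <= b for a, b in zip(means, means[1:]))

def first_planar_set(depth, contours):
    return [c for c in contours if depth_decreases_upward(depth, c)]
```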
This embodiment selects a plane that is as large as possible and horizontal as the reference plane onto which virtual objects are projected. In this embodiment, the step of screening the planar figures in the first set to determine the reference plane includes: screening out planar figures whose area is greater than 15% of the area of the whole edge image, to form a second set of planar figures; and screening the planar figures in the second set to determine the reference plane. Selecting only planar figures whose area exceeds 15% of the whole image area excludes planar figures that are too small. For example, the area of the planar figure 10 of the desk top is greater than 15% of the whole image area, so the planar figure 10 of the desk top is selected as the reference plane.
Preferably, the step of screening the planar figures in the second set to determine the reference plane includes: selecting, from the second set, the planar figure whose center point is closest to the lower portion of the edge image, as the reference plane. When several planar figures satisfy the above conditions, the planar figure whose contour center point is closest to the lower portion of the whole image in the user's viewing direction can be selected from among them. At this point, parts of the ceiling and walls can be excluded. For example, in this embodiment the planar figure 10 is selected as the reference plane 20, as shown in FIG. 5.
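The two remaining screens can be sketched together: area_ratio mirrors the embodiment's 15% rule, and the "center point" is taken here as the contour centroid computed from image moments (an implementation choice the patent leaves open):

```python
import cv2

def select_reference_plane(contours, image_shape, area_ratio=0.15):
    """Keep figures covering more than area_ratio of the image, then
    pick the one whose centroid sits lowest in the image."""
    h, w = image_shape[:2]
    second_set = [c for c in contours if cv2.contourArea(c) > area_ratio * h * w]
    if not second_set:
        return None

    def centroid_y(contour):
        m = cv2.moments(contour)
        return m["m01"] / m["m00"] if m["m00"] else 0.0

    # Larger y means closer to the lower portion of the image.
    return max(second_set, key=centroid_y)
```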
In this embodiment, the method for determining a reference plane further includes: forming a reference coordinate system according to the reference plane. Optionally, the reference coordinate system includes a first axis, a second axis, and a third axis that are mutually perpendicular, and the step of forming the reference coordinate system according to the reference plane includes: acquiring coordinates of at least three points within the reference plane; and forming the reference coordinate system according to the coordinates of the at least three points, with the first axis perpendicular to the reference plane and the second and third axes lying within the reference plane. Preferably, the step of acquiring coordinates of at least three points within the reference plane includes: acquiring the largest square within the reference plane; and acquiring the coordinates of the four vertices of that largest square.
At least three points are selected on the reference plane to locate it. This embodiment obtains four points by drawing the largest inscribed square of the reference plane. The largest inscribed square of the reference plane's contour can be obtained by drawing a small square at the center of the contour and gradually enlarging it until it intersects the contour edge. FIG. 6 is a schematic diagram of the largest inscribed square according to Embodiment 1 of the present invention. As shown in FIG. 6, the coordinate values of the four vertices of the square 30 are set, and a reference coordinate system is established with the reference plane 20 (i.e., the planar figure 10) as the horizontal plane. The reference coordinate system includes a first axis, a second axis, and a third axis that are mutually perpendicular; the first axis is perpendicular to the reference plane, and the second and third axes lie within the reference plane. With this coordinate system, the coordinates of every pixel in the depth image can be determined, so a suitable position for fusing a virtual object with reality can be chosen from the known positions of the real objects in the environment. The technical solution provided by this embodiment can therefore easily match a virtual object with a real scene in real time, improves the user's sensory experience beyond reality, and is suitable for use in portable devices such as wearable lenses.
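The grow-from-the-center construction just described can be sketched as follows; starting the square at the contour centroid and stopping at the first touch of the contour edge follows the embodiment's own construction, though it does not necessarily yield the globally largest inscribed square:

```python
import cv2
import numpy as np

def largest_inscribed_square(contour, image_shape):
    """Grow a square centered on the contour's centroid until enlarging
    it further would cross the contour edge; return its four vertices."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    m = cv2.moments(contour)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    half = 0
    while True:
        n = half + 1
        x0, y0, x1, y1 = cx - n, cy - n, cx + n, cy + n
        if x0 < 0 or y0 < 0 or x1 >= mask.shape[1] or y1 >= mask.shape[0]:
            break
        if (mask[y0:y1 + 1, x0:x1 + 1] == 0).any():
            break  # the enlarged square would leave the planar figure
        half = n
    return [(cx - half, cy - half), (cx + half, cy - half),
            (cx + half, cy + half), (cx - half, cy + half)]
```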
Referring to FIG. 6, the largest inscribed square of the desk top's planar figure, obtained by drawing a small square at the center of the contour and gradually enlarging it until it intersects the contour edge, lies essentially in the central region of the desk top. A reference coordinate system is formed from the coordinates of at least three points on the square 30. When, for example, a virtual laptop needs to be modeled, it can be placed in the central region of the desk top according to this reference coordinate system, so that the plane formed by the coordinate points of the virtual laptop's base coincides with the square 30. If further virtual objects need to be added, they can be added one by one on the reference plane 20 (for example, near its central region).
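Once three of the square's vertices are known in 3-D (after back-projection through the camera intrinsics, which the patent does not specify), the three mutually perpendicular axes follow from two in-plane vectors and a cross product. A hedged sketch, assuming the three points are not collinear:

```python
import numpy as np

def reference_frame(p0, p1, p2):
    """Axes of the reference coordinate system from three 3-D points of
    the square: axis2 and axis3 lie in the plane, axis1 is its normal."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    axis2 = p1 - p0
    axis2 = axis2 / np.linalg.norm(axis2)    # second axis, in the plane
    normal = np.cross(axis2, p2 - p0)
    axis1 = normal / np.linalg.norm(normal)  # first axis, perpendicular to the plane
    axis3 = np.cross(axis1, axis2)           # third axis, in the plane
    return axis1, axis2, axis3               # p0 can serve as the origin
```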
In this embodiment, before the step of forming the reference coordinate system according to the reference plane, the method may further include: screening the planar figures in the edge image to determine a reference-object plane, the reference-object plane being parallel to the reference plane; and the step of forming the reference coordinate system according to the reference plane then further includes: forming the reference coordinate system according to the reference plane and the reference-object plane, with the origin of the reference coordinate system set within the reference-object plane.
In this embodiment the computation for each frame takes less than 30 ms, so plane detection can be performed in real time. However, once the reference plane has been selected and the virtual objects have been fused with the real scene according to it, the viewing angle must not change greatly, or the virtual objects can no longer be placed accurately. Although the above method finds the reference plane accurately, once the user's line of sight changes, the way the reference plane appears in the field of view changes too, and the position of its center point changes with it. If a virtual laptop was previously placed at the position calibrated by the square, the laptop will shift position as soon as the user's field of view changes. Although the laptop is still on the former reference plane, its specific location in real space is different, which severely affects the user's subjective impression and experience.
To solve this problem, a reference-object plane can additionally be set. FIG. 7 is a schematic diagram of a reference-object plane according to Embodiment 1 of the present invention. As shown in FIG. 7, a reference object placed on the desk top in advance, such as a box or a book, is detected, and the planar figure obtained for the reference object is determined as the reference-object plane 40. The method for determining the reference-object plane 40 can be similar to the method for determining the reference plane described above. For example, a suitable planar figure can be obtained from the image depth values. In addition, since the reference object is usually chosen to be an object such as a book, the area of its planar figure is usually small; in this case the planar figures can be screened, for example, by setting a threshold of less than 15% of the edge-image area, or smaller, to obtain the reference-object plane. The reference-object plane 40 can be parallel to the reference plane 20.
Subsequently, a reference coordinate system is established from the reference plane 20 and the reference-object plane 40. The origin of the reference coordinate system can be set within the reference-object plane 40. With the reference-object plane 40 as reference, the placement position of each virtual object relative to the reference object can be determined on the reference plane 20, and the virtual objects can then be fused with the real scene. Since the reference-object plane 40 is fixed and the origin of the reference coordinate system lies within it, the user can be allowed to rotate the viewing angle to a certain extent: as long as the reference-object plane 40 remains completely within the field of view, it can serve as the calibration reference, preventing the positions of the other virtual objects from becoming confused.
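A small sketch of anchoring the origin in the reference-object plane. back_project is a hypothetical helper, since it depends on camera intrinsics the patent does not give; it maps a pixel and its depth value to 3-D camera coordinates:

```python
import cv2

def anchored_origin(reference_object_contour, depth, back_project):
    """Place the coordinate-system origin at the centroid of the
    reference-object plane so the frame survives moderate changes of
    the viewing angle.

    back_project(u, v, z) is an assumed helper mapping pixel (u, v)
    with depth value z to 3-D camera coordinates."""
    m = cv2.moments(reference_object_contour)
    u, v = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    return back_project(u, v, depth[v, u])
```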
The method for determining a reference plane provided by this embodiment includes: acquiring a depth image; performing edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and screening the planar figures in the edge image to determine a reference plane. The technical solution provided by this embodiment can determine a reference plane within a real scene, establish a virtual coordinate system based on that reference plane, and finally fuse virtual objects with the real scene. It can therefore easily match a virtual object with a real scene in real time, improves the user's sensory experience beyond reality, and is suitable for use in portable devices such as wearable lenses.
Embodiment 2
FIG. 8 is a schematic structural diagram of a system for determining a reference plane according to Embodiment 2 of the present invention. As shown in FIG. 8, the system includes: a first acquiring unit 101, configured to acquire a depth image; a first extracting unit 102, configured to perform edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and a first screening unit 103, configured to screen the planar figures in the edge image to determine a reference plane.
Referring to FIG. 2, the first acquiring unit 101 acquires a depth image of an office. In this embodiment, the first acquiring unit 101 may be a depth camera, and the depth image described in this embodiment is also referred to as a depth-of-field image. In this embodiment, the first extracting unit 102 includes: a first acquiring module, configured to acquire a gradient change rate of the depth image according to a preset gradient algorithm; a first forming module, configured to form a binary image according to the gradient change rate; and a first extracting module, configured to perform edge extraction on the binary image to form the edge image.
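A hedged sketch of how these units might be wired together in code, reusing the helper functions sketched in Embodiment 1; the class and method names are illustrative, since the patent defines the units functionally rather than as a concrete API:

```python
class ReferencePlaneSystem:
    """Illustrative wiring of the units of FIG. 8: acquire -> extract -> screen."""

    def __init__(self, acquire_depth, area_ratio=0.15):
        self.acquire_depth = acquire_depth  # first acquiring unit, e.g. a depth-camera driver
        self.area_ratio = area_ratio        # the embodiment's 15% area rule

    def determine_reference_plane(self):
        depth = self.acquire_depth()                      # first acquiring unit 101
        binary = binarize_by_gradient_uniformity(depth)   # first extracting unit 102
        contours = extract_planar_figures(binary)
        first_set = first_planar_set(depth, contours)     # first screening unit 103
        return select_reference_plane(first_set, depth.shape, self.area_ratio)
```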
Since a depth image differs from a binary image, edge extraction cannot be performed on it directly. Therefore, this embodiment converts the depth image into a binary image, as shown in FIG. 3, and then performs edge extraction on the binary image to form the edge image. A binary image is an image in which every pixel has only two possible values or gray levels. In a depth image, the gradient of a plane along the viewing direction changes uniformly. Therefore, the first acquiring module calculates the gradient change rate of the depth image according to a preset gradient algorithm, for example the Sobel algorithm, and the first forming module sets pixels with a uniform gradient change rate to black and pixels with a non-uniform gradient change rate to white, thereby forming the binary image.
Connecting pixels that share the same gradient change rate forms the edges of planes, so the first extracting module performs edge extraction on the binary image to form the edge image, as shown in FIG. 4. The edge image includes a plurality of planar figures; for example, the planar figure 10 in the middle is formed by the edge contour of the table top.
An example of the system for determining a reference plane according to Embodiment 2 of the present invention is described in detail below with reference to FIG. 9. As shown in FIG. 9, the first screening unit includes: a first screening module, configured to screen out planar figures whose image depth values decrease from the lower portion to the upper portion, to form a first set of planar figures; and a second screening module, configured to screen the planar figures in the first set to determine the reference plane. Referring to FIG. 5, as described above, in this embodiment the gradient of a horizontal planar figure decreases from bottom to top in the image (the depth value decreases from near to far along the user's viewing direction); for example, the depth value of the desk top decreases uniformly from near to far along the user's viewing direction, forming the planar figure 10. Using the gradient change rate of a plane, this embodiment can exclude planes whose trend does not meet the above requirement, for example part of the ceiling and the sides of the desk.
This embodiment selects a plane that is as large as possible and horizontal as the reference plane onto which virtual objects are projected. Optionally, the second screening module includes: a first screening submodule, configured to screen out planar figures whose area is greater than 15% of the area of the whole edge image, to form a second set of planar figures; and a second screening submodule, configured to screen the planar figures in the second set to determine the reference plane. Selecting only figures whose area exceeds 15% of the whole image area excludes planar figures that are too small. For example, the area of the planar figure 10 of the desk top is greater than 15% of the whole image area, so the planar figure 10 of the desk top is selected as the reference plane.
Preferably, the second screening submodule includes a third screening submodule, configured to select, from the second set, the planar figure whose center point is closest to the lower portion of the edge image, as the reference plane. When several planar figures satisfy the above conditions, the third screening submodule can select, in the user's viewing direction, the planar figure whose contour center point is closest to the lower portion of the whole image. At this point, parts of the ceiling and walls can be excluded. For example, in this embodiment the planar figure 10 is selected as the reference plane 20.
In this embodiment, the system for determining a reference plane further includes: a first forming unit, configured to form a reference coordinate system according to the reference plane. Optionally, the reference coordinate system includes a first axis, a second axis, and a third axis that are mutually perpendicular, and the first forming unit includes: a second acquiring module, configured to acquire coordinates of at least three points within the reference plane; and a second forming module, configured to form the reference coordinate system according to the coordinates of the at least three points, with the first axis perpendicular to the reference plane and the second and third axes lying within the reference plane. Preferably, the second acquiring module includes: a first acquiring submodule, configured to acquire the largest square within the reference plane; and a second acquiring submodule, configured to acquire the coordinates of the four vertices of the largest square.
At least three points are selected on the reference plane to locate it. This embodiment obtains four points by drawing the largest inscribed square of the reference plane. The largest inscribed square of the reference plane's contour can be obtained by drawing a small square at the center of the contour and gradually enlarging it until it intersects the contour edge. Referring to FIG. 6, the coordinate values of the four vertices of the square 30 are set, and a reference coordinate system is established with the reference plane 20 (i.e., the planar figure 10) as the horizontal plane. The reference coordinate system includes a first axis, a second axis, and a third axis that are mutually perpendicular; the first axis is perpendicular to the reference plane, and the second and third axes lie within the reference plane. With this coordinate system, the coordinates of every pixel in the depth image can be determined, so a suitable position for fusing a virtual object with reality can be chosen from the known positions of the real objects in the environment. The technical solution provided by this embodiment can therefore easily match a virtual object with a real scene in real time, improves the user's sensory experience beyond reality, and is suitable for use in portable devices such as wearable lenses.
Referring to FIG. 6, the largest inscribed square of the desk top's planar figure, obtained by drawing a small square at the center of the contour and gradually enlarging it until it intersects the contour edge, lies essentially in the central region of the desk top. A reference coordinate system is formed from the coordinates of at least three points on the square 30. When, for example, a virtual laptop needs to be modeled, it can be placed in the central region of the desk top according to this reference coordinate system, so that the plane formed by the coordinate points of the virtual laptop's base coincides with the square 30. If further virtual objects need to be added, they can be added one by one on the reference plane 20 (for example, near its central region).
In this embodiment the computation for each frame takes less than 30 ms, so plane detection can be performed in real time. However, once the reference plane has been selected and the virtual objects have been fused with the real scene according to it, the viewing angle must not change greatly, or the virtual objects can no longer be placed accurately. Although the above method finds the reference plane accurately, once the user's line of sight changes, the way the reference plane appears in the field of view changes too, and the position of its center point changes with it. If a virtual laptop was previously placed at the position calibrated by the square, the laptop will shift position as soon as the user's field of view changes. Although the laptop is still on the former reference plane, its specific location in real space is different, which severely affects the user's subjective impression and experience.
To solve this problem, a reference-object plane can additionally be set. In this embodiment, the system for determining a reference plane may further include: a second screening unit, configured to screen the planar figures in the edge image to determine a reference-object plane, the reference-object plane being parallel to the reference plane; and the first forming unit may further include: a third forming module, configured to form the reference coordinate system according to the reference plane and the reference-object plane, with the origin of the reference coordinate system set within the reference-object plane.
Referring to FIG. 7, a reference object placed on the desk top in advance, such as a box or a book, is detected, and the planar figure of the reference object is determined as the reference-object plane 40. The method for determining the reference-object plane 40 can be similar to the method for determining the reference plane described above. For example, a suitable planar figure can be obtained from the image depth values. In addition, since the reference object is usually chosen to be an object such as a book, the area of its planar figure is usually small; in this case the planar figures can be screened, for example, by setting a threshold of less than 15% of the edge-image area, or smaller, to obtain the reference-object plane. The reference-object plane 40 can be parallel to the reference plane 20.
Subsequently, a reference coordinate system is established from the reference plane 20 and the reference-object plane 40. The origin of the reference coordinate system can be set within the reference-object plane 40. With the reference-object plane 40 as reference, the placement position of each virtual object relative to the reference object can be determined on the reference plane 20, and the virtual objects can then be fused with the real scene. Since the reference-object plane 40 is fixed and the origin of the reference coordinate system lies within it, the user can be allowed to rotate the viewing angle to a certain extent: as long as the reference-object plane 40 remains completely within the field of view, it can serve as the calibration reference, preventing the positions of the other virtual objects from becoming confused.
The system for determining a reference plane provided by this embodiment includes: a first acquiring unit, configured to acquire a depth image; a first extracting unit, configured to perform edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and a first screening unit, configured to screen the planar figures in the edge image to determine a reference plane. The technical solution provided by this embodiment can determine a reference plane within a real scene, establish a virtual coordinate system based on that reference plane, and finally fuse virtual objects with the real scene. It can therefore easily match a virtual object with a real scene in real time, improves the user's sensory experience beyond reality, and is suitable for use in portable devices such as wearable lenses.
It should be understood that the above implementations are merely exemplary implementations adopted to explain the principle of the present invention; the present invention is not limited thereto. Provided no conflict arises, the components mentioned in the embodiments can be combined arbitrarily, or one or more of them can be omitted. Those of ordinary skill in the art can make various modifications and improvements without departing from the spirit and essence of the present invention, and such modifications and improvements are likewise regarded as falling within the protection scope of the present invention.

Claims (18)

  1. A method for determining a reference plane, comprising:
    acquiring a depth image;
    performing edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and
    screening the planar figures in the edge image to determine a reference plane.
  2. The method for determining a reference plane according to claim 1, wherein, after the step of screening the planar figures in the edge image to determine a reference plane, the method further comprises:
    forming a reference coordinate system according to the reference plane.
  3. The method for determining a reference plane according to claim 1, wherein the step of performing edge extraction on the depth image to form an edge image comprises:
    acquiring a gradient change rate of the depth image according to a preset gradient algorithm;
    forming a binary image according to the gradient change rate; and
    performing edge extraction on the binary image to form the edge image.
  4. The method for determining a reference plane according to claim 1, wherein the step of screening the planar figures in the edge image to determine a reference plane comprises:
    screening out planar figures whose image depth values decrease from the lower portion to the upper portion, to form a first set of planar figures; and
    screening the planar figures in the first set of planar figures to determine the reference plane.
  5. The method for determining a reference plane according to claim 4, wherein the step of screening the planar figures in the first set of planar figures to determine the reference plane comprises:
    screening out planar figures whose area is greater than 15% of the area of the edge image, to form a second set of planar figures; and
    screening the planar figures in the second set of planar figures to determine the reference plane.
  6. The method for determining a reference plane according to claim 5, wherein the step of screening the planar figures in the second set of planar figures to determine the reference plane comprises:
    selecting, from the second set of planar figures, the planar figure whose center point is closest to the lower portion of the edge image, as the reference plane.
  7. The method for determining a reference plane according to claim 2, wherein the reference coordinate system includes a first axis, a second axis, and a third axis that are mutually perpendicular, and the step of forming a reference coordinate system according to the reference plane comprises:
    acquiring coordinates of at least three points within the reference plane; and
    forming the reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis lying within the reference plane.
  8. The method for determining a reference plane according to claim 7, wherein the step of acquiring coordinates of at least three points within the reference plane comprises:
    acquiring the largest square within the reference plane; and
    acquiring coordinates of the four vertices of the largest square.
  9. The method for determining a reference plane according to claim 2, wherein, before the step of forming a reference coordinate system according to the reference plane, the method further comprises:
    screening the planar figures in the edge image to determine a reference-object plane, the reference-object plane being parallel to the reference plane; and
    the step of forming a reference coordinate system according to the reference plane comprises:
    forming the reference coordinate system according to the reference plane and the reference-object plane, an origin of the reference coordinate system being set within the reference-object plane.
  10. A system for determining a reference plane, comprising:
    a first acquiring unit, configured to acquire a depth image;
    a first extracting unit, configured to perform edge extraction on the depth image to form an edge image, the edge image including a plurality of planar figures; and
    a first screening unit, configured to screen the planar figures in the edge image to determine a reference plane.
  11. The system for determining a reference plane according to claim 10, further comprising:
    a first forming unit, configured to form a reference coordinate system according to the reference plane.
  12. The system for determining a reference plane according to claim 10, wherein the first extracting unit comprises:
    a first acquiring module, configured to acquire a gradient change rate of the depth image according to a preset gradient algorithm;
    a first forming module, configured to form a binary image according to the gradient change rate; and
    a first extracting module, configured to perform edge extraction on the binary image to form the edge image.
  13. The system for determining a reference plane according to claim 10, wherein the first screening unit comprises:
    a first screening module, configured to screen out planar figures whose image depth values decrease from the lower portion to the upper portion, to form a first set of planar figures; and
    a second screening module, configured to screen the planar figures in the first set of planar figures to determine the reference plane.
  14. The system for determining a reference plane according to claim 13, wherein the second screening module comprises:
    a first screening submodule, configured to screen out planar figures whose area is greater than 15% of the area of the edge image, to form a second set of planar figures; and
    a second screening submodule, configured to screen the planar figures in the second set of planar figures to determine the reference plane.
  15. The system for determining a reference plane according to claim 14, wherein the second screening submodule comprises:
    a third screening submodule, configured to select, from the second set of planar figures, the planar figure whose center point is closest to the lower portion of the edge image, as the reference plane.
  16. The system for determining a reference plane according to claim 11, wherein the reference coordinate system includes a first axis, a second axis, and a third axis that are mutually perpendicular, and the first forming unit comprises:
    a second acquiring module, configured to acquire coordinates of at least three points within the reference plane; and
    a second forming module, configured to form the reference coordinate system according to the coordinates of the at least three points, the first axis being perpendicular to the reference plane, and the second axis and the third axis lying within the reference plane.
  17. The system for determining a reference plane according to claim 16, wherein the second acquiring module comprises:
    a first acquiring submodule, configured to acquire the largest square within the reference plane; and
    a second acquiring submodule, configured to acquire coordinates of the four vertices of the largest square.
  18. The system for determining a reference plane according to claim 11, further comprising:
    a second screening unit, configured to screen the planar figures in the edge image to determine a reference-object plane, the reference-object plane being parallel to the reference plane; and wherein
    the first forming unit comprises:
    a third forming module, configured to form the reference coordinate system according to the reference plane and the reference-object plane, an origin of the reference coordinate system being set within the reference-object plane.
PCT/CN2016/085251 2016-03-09 2016-06-08 Method and system for determining a reference plane WO2017152529A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/525,703 US10319104B2 (en) 2016-03-09 2016-06-08 Method and system for determining datum plane

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610133165.5 2016-03-09
CN201610133165.5A CN105825499A (zh) 2016-03-09 Method and system for determining a reference plane

Publications (1)

Publication Number Publication Date
WO2017152529A1 (zh)

Family

ID=56987608

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/085251 WO2017152529A1 (zh) Method and system for determining a reference plane

Country Status (3)

Country Link
US (1) US10319104B2 (zh)
CN (1) CN105825499A (zh)
WO (1) WO2017152529A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600638B (zh) * 2016-11-09 2020-04-17 深圳奥比中光科技有限公司 Method for implementing augmented reality
EP3680857B1 (en) * 2017-09-11 2021-04-28 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device and computer-readable storage medium
CN110827412A (zh) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Plane adaptation method and apparatus, and computer-readable storage medium
CN112912921B (zh) * 2018-10-11 2024-04-30 上海科技大学 System and method for extracting planes from depth maps
US11804015B2 (en) 2018-10-30 2023-10-31 Samsung Electronics Co., Ltd. Methods for determining three-dimensional (3D) plane information, methods for displaying augmented reality display information and corresponding devices
CN110215686B (zh) * 2019-06-27 2023-01-06 网易(杭州)网络有限公司 Display control method and apparatus in a game scene, storage medium, and electronic device
CN110675360B (zh) * 2019-08-02 2022-04-01 杭州电子科技大学 Method for real-time plane detection and extraction based on depth images
CN113766147B (zh) * 2020-09-22 2022-11-08 北京沃东天骏信息技术有限公司 Method for embedding an image in a video, and method and apparatus for obtaining a plane prediction model
CN112198527B (zh) * 2020-09-30 2022-12-27 上海炬佑智能科技有限公司 Reference plane adjustment and obstacle detection method, depth camera, and navigation device
CN112198529B (zh) * 2020-09-30 2022-12-27 上海炬佑智能科技有限公司 Reference plane adjustment and obstacle detection method, depth camera, and navigation device
CN112697042B (zh) * 2020-12-07 2023-12-05 深圳市繁维科技有限公司 Handheld TOF camera and highly adaptive parcel-volume measurement method therefor
CN114322775B (zh) * 2022-01-06 2022-11-11 深圳威洛博机器人有限公司 Robot vision positioning system and vision positioning method
CN114943778B (zh) * 2022-07-26 2023-01-13 广州镭晨智能装备科技有限公司 Reference plane determination method, detection method, apparatus, device, and storage medium


Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2553473A1 (en) * 2005-07-26 2007-01-26 Wa James Tam Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging
US8223143B2 (en) * 2006-10-27 2012-07-17 Carl Zeiss Meditec, Inc. User interface for efficiently displaying relevant OCT imaging data
US9164577B2 (en) 2009-12-22 2015-10-20 Ebay Inc. Augmented reality system, method, and apparatus for displaying an item image in a contextual environment
CN102075686B (zh) * 2011-02-10 2013-10-30 北京航空航天大学 Robust real-time online camera tracking method
US9330490B2 (en) * 2011-04-29 2016-05-03 University Health Network Methods and systems for visualization of 3D parametric data during 2D imaging
WO2014020801A1 (ja) * 2012-07-31 2014-02-06 株式会社ソニー・コンピュータエンタテインメント Image processing device, image processing method, and data structure of image file
CN102901488B (zh) * 2012-09-07 2015-12-16 曹欢欢 Method and device for automatically generating a room floor plan
US9773074B2 (en) * 2012-12-06 2017-09-26 Daybreak Game Company Llc System and method for building digital objects with blocks
US9317962B2 (en) * 2013-08-16 2016-04-19 Indoor Technologies Ltd 3D space content visualization system
CN104574515B (zh) * 2013-10-09 2017-10-17 华为技术有限公司 Method, apparatus, and terminal for three-dimensional object reconstruction
US10008027B1 (en) * 2014-10-20 2018-06-26 Henry Harlyn Baker Techniques for determining a three-dimensional representation of a surface of an object from a set of images
US9691177B2 (en) * 2014-12-12 2017-06-27 Umbra Software Ltd. Techniques for automatic occluder simplification using planar sections
CN104539925B (zh) 2014-12-15 2016-10-05 北京邮电大学 Method and system for three-dimensional scene augmented reality based on depth information
CN105046710A (zh) * 2015-07-23 2015-11-11 北京林业大学 Virtual-real collision interaction method and apparatus based on depth-map segmentation and proxy geometry
US9734405B2 (en) * 2015-10-05 2017-08-15 Pillar Vision, Inc. Systems and methods for monitoring objects in athletic playing spaces
US9741125B2 (en) * 2015-10-28 2017-08-22 Intel Corporation Method and system of background-foreground segmentation for image processing
CN107205113B (zh) * 2016-03-18 2020-10-16 松下知识产权经营株式会社 Image generation device, image generation method, and computer-readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0633550A2 (en) * 1993-06-29 1995-01-11 Canon Kabushiki Kaisha Image processing method and apparatus thereof
US20110211749A1 (en) * 2010-02-28 2011-09-01 Kar Han Tan System And Method For Processing Video Using Depth Sensor Information
CN102135417A (zh) * 2010-12-26 2011-07-27 北京航空航天大学 Fully automatic three-dimensional feature extraction method
CN102566827A (zh) * 2010-12-30 2012-07-11 株式会社理光 Object detection method and system in a virtual touch screen system
US20140064602A1 (en) * 2012-09-05 2014-03-06 Industrial Technology Research Institute Method and apparatus for object positioning by using depth images
CN103389042A (zh) * 2013-07-11 2013-11-13 夏东 Method for automatic ground detection and scene height calculation based on depth images
CN103729850A (zh) * 2013-12-31 2014-04-16 楚天科技股份有限公司 Method for extracting straight lines from a panorama

Also Published As

Publication number Publication date
US10319104B2 (en) 2019-06-11
CN105825499A (zh) 2016-08-03
US20180075616A1 (en) 2018-03-15

Similar Documents

Publication Publication Date Title
WO2017152529A1 (zh) Method and system for determining a reference plane
CN109961406B (zh) Image processing method, apparatus, and terminal device
TWI712918B (zh) Image display method, apparatus, and device for augmented reality
US9036007B2 (en) System and method for converting two dimensional to three dimensional video
US11004267B2 (en) Information processing apparatus, information processing method, and storage medium for generating a virtual viewpoint image
US10148895B2 (en) Generating a combined infrared/visible light image having an enhanced transition between different types of image information
US10360711B2 (en) Image enhancement with fusion
WO2016188010A1 (zh) Motion image compensation method and apparatus, and display device
KR20060113514A (ko) Image processing apparatus, image processing method, program, and recording medium
US20150379720A1 (en) Methods for converting two-dimensional images into three-dimensional images
WO2019076348A1 (zh) Method and apparatus for generating a virtual reality (VR) interface
CN108182659A (zh) Glasses-free 3D display technique based on viewpoint tracking and single-view stereoscopic pictures
EP2642446B1 (en) System and method of estimating page position
US20220172319A1 (en) Camera-based Transparent Display
TW201630408A (zh) Image data segmentation techniques
WO2023097805A1 (zh) Display method, display device, and computer-readable storage medium
US20210390780A1 (en) Augmented reality environment enhancement
JP2015171143A (ja) Method and apparatus for camera calibration using a color-coded structure, and computer-readable storage medium
CN111343445A (zh) Apparatus for dynamically adjusting depth resolution and method thereof
WO2024055531A1 (zh) Method for recognizing illuminance-meter readings, electronic device, and storage medium
JP2020523957A (ja) Method and device for presenting information to a user observing multi-view content
JP2006318015A (ja) Image processing apparatus, image processing method, image display system, and program
JP7125847B2 (ja) 3D model display device, 3D model display method, and 3D model display program
CN102169597B (zh) Method and system for setting the depth of an object in a planar image
TWI541761B (zh) Image processing method and electronic device thereof

Legal Events

Date Code Title Description
WWE  Wipo information: entry into national phase
     Ref document number: 15525703
     Country of ref document: US

NENP Non-entry into the national phase
     Ref country code: DE

121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 16893176
     Country of ref document: EP
     Kind code of ref document: A1

122  Ep: pct application non-entry in european phase
     Ref document number: 16893176
     Country of ref document: EP
     Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
     Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.05.2019)

122  Ep: pct application non-entry in european phase
     Ref document number: 16893176
     Country of ref document: EP
     Kind code of ref document: A1