CN111625093A - Dynamic scheduling display method of massive digital point cloud data in MR glasses - Google Patents

Dynamic scheduling display method of massive digital point cloud data in MR glasses

Info

Publication number
CN111625093A
CN111625093A (application CN202010422755.6A)
Authority
CN
China
Prior art keywords
data
point cloud
current
glasses
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010422755.6A
Other languages
Chinese (zh)
Other versions
CN111625093B (en
Inventor
杨杰
储园
李丰
王轶
何涛
钟其波
陈玺舟
李镜晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming Escher Technology Co ltd
Original Assignee
Kunming Escher Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming Escher Technology Co ltd filed Critical Kunming Escher Technology Co ltd
Priority to CN202010422755.6A priority Critical patent/CN111625093B/en
Publication of CN111625093A publication Critical patent/CN111625093A/en
Application granted granted Critical
Publication of CN111625093B publication Critical patent/CN111625093B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005Tree description, e.g. octree, quadtree
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/28Indexing scheme for image data processing or generation, in general involving image processing hardware
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a dynamic scheduling display method of massive digital point cloud data in MR glasses, comprising point cloud and lenses, and characterized by comprising: a first step of importing the current view-angle frame data into incremental octree data, traversing the incremental octree data, calculating the center point P_Bn projected on the screen from the coordinates of a node Bn at some level of the point cloud, and calculating the boundary point P_En coordinates from P_Bn; a second step of dynamic scheduling by a dynamic clipping octree algorithm, calculating and loading the P_Bn and P_En of each current-view child node Bn in every frame, and unloading the child nodes that are not in the current view, to obtain a loading list An and an unloading list Rn; a third step of traversing the elements Ai in the loading list An, judging whether each Ai is loaded, and rendering the data; a fourth step of traversing the elements Ri in the unloading list Rn and judging whether the current Ri is loaded; a fifth step of continuing to traverse the next-level child nodes and automatically adding them to Rn; and a sixth step of returning to the first step.

Description

Dynamic scheduling display method of massive digital point cloud data in MR glasses
Technical Field
The invention relates to the field of dynamic scheduling display in MR (mixed reality) glasses, in particular to a dynamic scheduling display method of massive digital point cloud data in MR glasses.
Background
Mixed Reality (MR) is a further development of virtual reality technology that builds an interactive feedback information loop between the real world, the virtual world and the user by presenting virtual scene information in the real scene to enhance the realism of the user experience. Mixed reality is a combination of technologies that provide not only new viewing methods but also new input methods, and all methods are combined with each other, thereby promoting innovation.
Mixed Reality (MR) encompasses both augmented reality and augmented virtuality, and refers to a new visualization environment created by merging the real and virtual worlds, in which physical and digital objects coexist and interact in real time. Such a system typically has three features: 1. it combines the virtual and the real; 2. virtual content is registered in three dimensions (3D registration); 3. it operates in real time. Mixed Reality implementations require an environment in which real-world objects can interact with virtual ones. The key points of MR are interaction with the real world and timely acquisition of information.
The existing MR glasses have very limited CPU/GPU computing power and memory space, so that all point cloud data cannot be loaded into a memory for rendering at one time, namely, dynamic scheduling display of massive digital point cloud data in the MR glasses cannot be realized.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a dynamic scheduling display method of massive digital point cloud data in MR glasses, which adopts a dynamic scheduling method within the limited CPU/GPU computing power and memory space of the MR glasses: each frame calculates and loads the child node data visible in the current view, and unloads the child nodes that are not in the current view. The CPU/GPU computing power and memory space of the MR glasses are thus fully utilized, and dynamic scheduling display of the point cloud data is realized.
In order to achieve the purpose, the invention provides the following technical scheme:
the invention discloses a dynamic scheduling display method of massive digital point cloud data in MR glasses, which is characterized by comprising the following steps:
the first step is as follows: importing the current view-angle frame data into the incremental octree data, traversing the incremental octree data, calculating the center point P_Bn projected on the screen from the coordinates of a node Bn at some level of the point cloud, and calculating the boundary point P_En coordinates from the P_Bn coordinates;
the second step is as follows: through dynamic scheduling by a dynamic clipping octree algorithm, calculating and loading the center point P_Bn and boundary point P_En of each current-view child node Bn in every frame, and unloading the child nodes that are not in the current view, to obtain a loading list An and an unloading list Rn;
the third step: after the loading list An is obtained, traversing the element Ai in the loading list An, judging whether the element Ai is loaded, and rendering corresponding point cloud data;
the fourth step: after obtaining an unloading list Rn, traversing elements Ri in the unloading list Rn, judging whether current Ri is loaded or not, and unloading data in Ri;
the fifth step: continuously traversing next-level child nodes and automatically supplementing the next-level child nodes into Rn;
and a sixth step: returning to the first step, and continuing to perform the rendering of the next frame.
Preferably, in the first step, the center point P_Bn projected on the screen is calculated from the coordinates of a node Bn among the plurality of nodes; the transformation formula from each point P to the screen coordinate P' is:
P' = Mw · Mp · Mi · M0 · P
wherein: p is the coordinate of Bn; acquiring posture matrix data M0 of the position of the person in real time through an inertial navigation positioning system; obtaining an internal reference matrix Mi and a perspective projection matrix Mp of the miniature projector by using a calibration process provided by MR glasses; and calculating to obtain a window transformation matrix Mw according to the current display resolution of the MR glasses.
Preferably, in the first step, the boundary point P_En coordinates are calculated from the P_Bn coordinates; the calculation formula is:
P_En = P_Bn + V0 * Rn
wherein: attitude matrix M by the position of the person0Calculating to obtain the vertex direction V of the person wearing the MR glasses0Then taking the radius R of the sphere surrounded by the outside of BnnCalculating boundary point PEn
Preferably, the dynamic scheduling method of the dynamic clipping octree algorithm in the second step includes: first, the display size of a node Bn of some level as projected onto the lens is judged; then the center point P_Bn and boundary point P_En of the node Bn are obtained and converted into screen coordinate data; if both the P_Bn and P_En coordinates fall outside the screen range, or the distance between the P_Bn and P_En coordinates is smaller than a specified threshold L, the current node and all its child nodes are removed from the rendering queue; otherwise, the point cloud content of the current node is displayed, the next-level child nodes are traversed continuously, whether each can be displayed is judged, displayable nodes are put into the loading list An, and non-displayable nodes are put into the unloading list Rn; the elements of An are denoted Ai, and the elements of Rn are denoted Ri.
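The two culling conditions just described (both points off screen, or projected size below the threshold L) can be sketched as a single predicate; the screen dimensions and threshold below are assumed sample parameters.

```python
# Sketch of the per-node culling test: a node (and its subtree) leaves the
# rendering queue if P_Bn and P_En are both outside the screen, or if the
# distance between them (the node's projected size) is below the threshold L.
import math

def should_cull(p_bn, p_en, width, height, threshold_l):
    def off_screen(p):
        return not (0 <= p[0] < width and 0 <= p[1] < height)
    if off_screen(p_bn) and off_screen(p_en):
        return True                              # condition 1: out of view
    return math.dist(p_bn, p_en) < threshold_l   # condition 2: too small
```

Because a culled node removes its entire subtree, distant or off-screen regions of the octree are never descended into, which is what keeps the per-frame cost bounded.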
Preferably, in the third step, after the loading list An is obtained, the elements Ai in the loading list An are traversed; if the current Ai is already loaded, the current element Ai is skipped; if the current Ai is not loaded, the data request in Ai is loaded and the corresponding point cloud data are rendered.
Preferably, in the fourth and fifth steps, after the unloading list Rn is obtained, the elements Ri in the unloading list Rn are traversed; if the current Ri is not loaded, the current element Ri is skipped; otherwise the data in Ri are unloaded and the corresponding point cloud data are no longer rendered; the next-level child nodes are then traversed continuously and automatically added to Rn.
Preferably, calculating the display size of node Bn projected onto the lens is fast, as is the dynamic clipping octree algorithm, making the method suitable for low-computing-power environments such as MR glasses; the dynamic scheduling process of the dynamic clipping octree algorithm has no influence on real-time rendering efficiency, and the amount of renderable data remaining in each frame after clipping is controllable; the density distribution of the point cloud scene conforms to the natural law of human-eye observation and can be used directly for displaying and applying massive digitization results.
Preferably, the plurality of center points P_Bn and boundary points P_En processed by the dynamic scheduling of the dynamic clipping octree algorithm are copied into the MR glasses through cloud data; during copying, the position and posture of the MR glasses in space can be accurately calculated through inertial navigation positioning, so that real-time rendering of the digital point cloud scene can be accurately realized; the rendered digital scene is accurately superimposed on the physical scene seen by the eyes, and on this basis scene navigation or data analysis and calculation combining the virtual and the real is realized.
Preferably, an overall loading threshold needs to be set in the dynamic scheduling process of the dynamic clipping octree algorithm; if the currently loaded point data exceed the threshold, all other child node data are automatically masked and no new data are loaded; by setting this parameter, dynamic scheduling display of the massive digital point cloud data in the MR glasses can be realized.
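The overall loading threshold can be sketched as a budget check applied before each load; the budget value and function below are illustrative assumptions, not parameters from the patent.

```python
# Sketch of the overall loading threshold: once accepting a node would push
# the loaded point count past the budget, further child-node loads are masked.
MAX_POINTS = 1_000_000  # illustrative budget, tuned to the glasses' memory

def try_load(loaded_points, node_point_count, budget=MAX_POINTS):
    """Return (new_total, accepted); reject the load if it exceeds the budget."""
    if loaded_points + node_point_count > budget:
        return loaded_points, False  # mask: keep the total, load nothing new
    return loaded_points + node_point_count, True

total, ok1 = try_load(0, 600_000)      # accepted
total, ok2 = try_load(total, 600_000)  # rejected: would exceed the budget
```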
Compared with the prior art, the invention has the beneficial effects that:
1. The invention uses an incremental octree structure and adopts a dynamic scheduling method within the CPU/GPU computing power and memory space of the MR glasses: each frame calculates and loads the child node data visible in the current view, and unloads the child nodes that are not in the current view. The CPU/GPU computing power and memory space of the MR glasses are thus fully utilized, and dynamic scheduling display of the point cloud data is realized.
2. The dynamic clipping octree process used in the invention does not affect the real-time rendering efficiency, and the residual renderable data volume of each frame after clipping is finished is controllable; the density distribution of the point cloud scene conforms to the natural law of human eye observation, and can be directly used for displaying and applying massive digital achievements.
Drawings
FIG. 1 is a flowchart of the method for dynamically scheduling and displaying massive digital point cloud data in MR glasses according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the method for dynamically scheduling and displaying massive digital point cloud data in MR glasses according to the present invention. The disclosed method comprises the following steps: first, importing the current view-angle frame data into the incremental octree data, traversing the incremental octree data, calculating the center point P_Bn projected on the screen from the coordinates of a node Bn at some level of the point cloud, and calculating the boundary point P_En coordinates from the P_Bn coordinates; second, through dynamic scheduling by the dynamic clipping octree algorithm, calculating and loading the center point P_Bn and boundary point P_En of each current-view child node Bn in every frame, and unloading the child nodes that are not in the current view, to obtain a loading list An and an unloading list Rn; third, after the loading list An is obtained, traversing the elements Ai in the loading list An, judging whether each Ai is loaded, and rendering the corresponding point cloud data; fourth, after the unloading list Rn is obtained, traversing the elements Ri in the unloading list Rn, judging whether the current Ri is loaded, and unloading the data in Ri; fifth, continuing to traverse the next-level child nodes and automatically adding them to Rn; and sixth, returning to the first step and continuing with the rendering of the next frame.
First, among the plurality of nodes, the center point P_Bn projected on the screen is calculated from the coordinates of a node Bn; the transformation formula from each point P to the screen coordinate P' is:
P' = Mw · Mp · Mi · M0 · P
wherein: p is the coordinate of Bn; acquiring posture matrix data M0 of the position of the person in real time through an inertial navigation positioning system; obtaining an internal reference matrix Mi and a perspective projection matrix Mp of the miniature projector by using a calibration process provided by MR glasses; and calculating to obtain a window transformation matrix Mw according to the current display resolution of the MR glasses.
In the first step, the boundary point P_En coordinates are calculated from the P_Bn coordinates; the calculation formula is:
P_En = P_Bn + V0 * Rn
wherein: attitude matrix M by the position of the person0Calculating to obtain the vertex direction V of the person wearing the MR glasses0Then taking the radius R of the sphere surrounded by the outside of BnnCalculating boundary point PEn
In the second step, the dynamic scheduling method of the dynamic clipping octree algorithm comprises the following steps: first, the display size of a node Bn of some level as projected onto the lens is judged; then the center point P_Bn and boundary point P_En of the node Bn are obtained and converted into screen coordinate data; if both the P_Bn and P_En coordinates fall outside the screen range, or the distance between the P_Bn and P_En coordinates is smaller than a specified threshold L, the current node and all its child nodes are removed from the rendering queue; otherwise, the point cloud content of the current node is displayed, the next-level child nodes are traversed continuously, whether each can be displayed is judged, displayable nodes are put into the loading list An, and non-displayable nodes are put into the unloading list Rn; the elements of An are denoted Ai, and the elements of Rn are denoted Ri.
Third, after the loading list An is obtained, the elements Ai in the loading list An are traversed; if the current element Ai is already loaded, it is skipped; if the current Ai is not loaded, the data request in Ai is loaded and the corresponding point cloud data are rendered.
Fourth and fifth, after the unloading list Rn is obtained, the elements Ri in the unloading list Rn are traversed; if the current Ri is not loaded, the current element Ri is skipped; otherwise the data in Ri are unloaded and the corresponding point cloud data are no longer rendered; the next-level child nodes are then traversed continuously and automatically added to Rn.
Sixth, return to the first step and continue with the rendering of the next frame.
Calculating the display size of node Bn projected onto the lens is fast, as is the dynamic clipping octree algorithm, making the method suitable for the low-computing-power environment of MR glasses; the dynamic scheduling process of the dynamic clipping octree algorithm has no influence on real-time rendering efficiency, and the amount of renderable data remaining in each frame after clipping is controllable; the density distribution of the point cloud scene conforms to the natural law of human-eye observation and can be used directly for displaying and applying massive digitization results.
The center points P_Bn and boundary points P_En processed by the dynamic scheduling of the dynamic clipping octree algorithm are copied into the MR glasses through cloud data, and the position and posture of the MR glasses in space can be accurately calculated through inertial navigation positioning, so that real-time rendering of the digital point cloud scene can be accurately realized; the rendered digital scene is accurately superimposed on the physical scene seen by the eyes, and on this basis scene navigation or data analysis and calculation combining the virtual and the real is realized.
An overall loading threshold also needs to be set in the dynamic scheduling process of the dynamic clipping octree algorithm; if the currently loaded point data exceed the threshold, all other child node data are automatically masked and no new data are loaded; by setting this parameter, dynamic scheduling display of the massive digital point cloud data in the MR glasses can be realized.
The working principle is as follows:
1. Start rendering the current frame;
2. Traverse the incremental octree data according to the current view angle, judge the values of P_Bn and P_En, and obtain the loading list An and the unloading list Rn;
3. IF (P_Bn and P_En are both out of view) {
       add the current child node Bn to Rn
   }
4. ELSE IF (the distance between P_Bn and P_En is less than the threshold L) {
       add the current child node Bn to Rn
   }
5. ELSE {
       add the current child node Bn to An
       continue traversing the child nodes of Bn
   }
6. FOR EACH element Ai in An:
7. IF (Ai is already loaded) { skip the current element }
   ELSE { load the data request in Ai and render the corresponding point cloud data }
8. FOR EACH element Ri in Rn:
9. IF (Ri is not loaded) { skip the current element }
10. ELSE { unload the data in Ri and no longer render the corresponding point cloud data }
11. Continue traversing the next-level child nodes and automatically add them to Rn;
12. Return to step 1 and continue rendering the next frame.
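The working principle above can be translated into a runnable sketch. Everything here is an assumption-laden illustration: the Node class, screen size, threshold, and the use of a set to track loaded nodes are inventions for this example, not the patent's data structures.

```python
# Runnable sketch of one frame of the scheduling loop (steps 1-12 above).
import math

class Node:
    """Illustrative octree node with precomputed screen-space P_Bn / P_En."""
    def __init__(self, name, p_bn, p_en, children=()):
        self.name, self.p_bn, self.p_en = name, p_bn, p_en
        self.children = list(children)

def in_view(p, width=1920, height=1080):
    return 0 <= p[0] < width and 0 <= p[1] < height

def schedule(root, threshold_l=4.0):
    """Steps 2-5: split the tree into loading list An and unloading list Rn."""
    an, rn = [], []

    def cull_subtree(node):          # a culled node drags its subtree into Rn
        rn.append(node)
        for c in node.children:
            cull_subtree(c)

    def visit(node):
        out = not in_view(node.p_bn) and not in_view(node.p_en)
        small = math.dist(node.p_bn, node.p_en) < threshold_l
        if out or small:
            cull_subtree(node)       # steps 3-4
        else:
            an.append(node)          # step 5: displayable
            for c in node.children:
                visit(c)

    visit(root)
    return an, rn

def render_frame(root, loaded):
    """Steps 6-11: load/render An members, unload Rn members."""
    an, rn = schedule(root)
    for node in an:
        if node.name not in loaded:    # step 7: skip if already loaded
            loaded.add(node.name)      # load the data request and render
    for node in rn:
        if node.name in loaded:        # step 9: skip if not loaded
            loaded.discard(node.name)  # step 10: unload
    return loaded

# One frame over a tiny tree: the root is visible, one child is too small
# on screen, another is out of view.
tree = Node("root", (100.0, 100.0), (100.0, 300.0), [
    Node("small", (50.0, 50.0), (50.0, 52.0)),
    Node("off", (-10.0, -10.0), (-20.0, -20.0)),
])
loaded = render_frame(tree, set())   # only "root" stays loaded
```

Calling render_frame once per display refresh, with the overall loading threshold from the previous section applied inside the loading loop, gives the frame-by-frame behavior the patent describes.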
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A dynamic scheduling display method of massive digital point cloud data in MR glasses is characterized by comprising the following steps:
the first step is as follows: importing the current view-angle frame data into the incremental octree data, traversing the incremental octree data, calculating the center point P_Bn projected on the screen from the coordinates of a node Bn at some level of the point cloud, and calculating the boundary point P_En coordinates from the P_Bn coordinates;
the second step is as follows: through dynamic scheduling by a dynamic clipping octree algorithm, calculating and loading the center point P_Bn and boundary point P_En of each current-view child node Bn in every frame, and unloading the child nodes that are not in the current view, to obtain a loading list An and an unloading list Rn;
the third step: after the loading list An is obtained, traversing the element Ai in the loading list An, judging whether the element Ai is loaded, and rendering corresponding point cloud data;
the fourth step: after the unloading list Rn is obtained, traversing the elements Ri in the unloading list Rn, judging whether the current Ri is loaded or not, and unloading the data in the Ri;
the fifth step: continuously traversing next-level child nodes and automatically supplementing the next-level child nodes into Rn;
and a sixth step: returning to the first step, and continuing to perform the rendering of the next frame.
2. The method for dynamically scheduling and displaying the massive digital point cloud data in the MR glasses according to claim 1, wherein in the first step, the center point P_Bn projected on the screen is calculated from the coordinates of a node Bn among the plurality of nodes; the transformation formula from each point P to the screen coordinate P' is:
P' = Mw · Mp · Mi · M0 · P
wherein: p is the coordinate of Bn; acquiring posture matrix data M0 of the position of the person in real time through an inertial navigation positioning system; obtaining an internal reference matrix Mi and a perspective projection matrix Mp of the miniature projector by using a calibration process provided by MR glasses; and calculating to obtain a window transformation matrix Mw according to the current display resolution of the MR glasses.
3. The method for dynamically scheduling and displaying the massive digital point cloud data in the MR glasses according to claim 1, wherein in the first step, the boundary point P_En coordinates are calculated from the P_Bn coordinates; the calculation formula is:
P_En = P_Bn + V0 * Rn
wherein: attitude matrix M by the position of the person0Calculating to obtain the vertex direction V of the person wearing the MR glasses0Then taking the radius R of the sphere surrounded by the outside of BnnCalculating boundary point PEn
4. The method for dynamically scheduling and displaying the massive digital point cloud data in the MR glasses according to claim 1, wherein the dynamic scheduling method of the dynamic clipping octree algorithm in the second step comprises: first, the display size of a node Bn of some level as projected onto the lens is judged; then the center point P_Bn and boundary point P_En of the node Bn are obtained and converted into screen coordinate data; if both the P_Bn and P_En coordinates fall outside the screen range, or the distance between the P_Bn and P_En coordinates is smaller than a specified threshold L, the current node and all its child nodes are removed from the rendering queue; otherwise, the point cloud content of the current node is displayed, the next-level child nodes are traversed continuously, whether each can be displayed is judged, displayable nodes are put into the loading list An, and non-displayable nodes are put into the unloading list Rn; the elements of An are denoted Ai, and the elements of Rn are denoted Ri.
5. The method for dynamically scheduling and displaying the mass digital point cloud data in the MR glasses according to claim 1, wherein in the third step, after the loading list An is obtained, the elements Ai in the loading list An are traversed, and if the current Ai is loaded, the current element Ai is skipped; and if the current Ai is not loaded, loading a data request in the Ai and rendering corresponding point cloud data.
6. The method for dynamically scheduling and displaying the massive digital point cloud data in the MR glasses according to claim 1, wherein in the fourth and fifth steps, after the unloading list Rn is obtained, the elements Ri in the unloading list Rn are traversed; if the current Ri is not loaded, the current element Ri is skipped; otherwise the data in Ri are unloaded and the corresponding point cloud data are no longer rendered; the next-level child nodes are then traversed continuously and automatically added to Rn.
7. The method for dynamically scheduling and displaying the massive digital point cloud data in the MR glasses according to claim 2, wherein the plurality of center points P_Bn and boundary points P_En processed by the dynamic scheduling of the dynamic clipping octree algorithm are copied into the MR glasses through cloud data, and during copying the position and posture of the MR glasses in space can be accurately calculated through inertial navigation positioning.
8. The method for dynamically scheduling and displaying massive digital point cloud data in MR glasses according to claim 1, wherein an overall loading threshold value is required to be set in the dynamic scheduling process of the dynamic clipping octree algorithm, and if the currently loaded point data exceeds the threshold value, all other child node data are automatically masked, and no new data is loaded.
CN202010422755.6A 2020-05-19 2020-05-19 Dynamic scheduling display method of massive digital point cloud data in MR (mixed reality) glasses Active CN111625093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010422755.6A CN111625093B (en) 2020-05-19 2020-05-19 Dynamic scheduling display method of massive digital point cloud data in MR (mixed reality) glasses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010422755.6A CN111625093B (en) 2020-05-19 2020-05-19 Dynamic scheduling display method of massive digital point cloud data in MR (mixed reality) glasses

Publications (2)

Publication Number Publication Date
CN111625093A true CN111625093A (en) 2020-09-04
CN111625093B CN111625093B (en) 2023-08-01

Family

ID=72259796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010422755.6A Active CN111625093B (en) 2020-05-19 2020-05-19 Dynamic scheduling display method of massive digital point cloud data in MR (magnetic resonance) glasses

Country Status (1)

Country Link
CN (1) CN111625093B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150125071A1 (en) * 2013-11-07 2015-05-07 Autodesk, Inc. Pre-segment point cloud data to run real-time shape extraction faster
US20160210784A1 (en) * 2015-01-20 2016-07-21 Scott Ramsby Augmented reality field of view object follower
US20160267720A1 (en) * 2004-01-30 2016-09-15 Electronic Scripting Products, Inc. Pleasant and Realistic Virtual/Augmented/Mixed Reality Experience
WO2017168038A1 (en) * 2016-03-31 2017-10-05 Umbra Software Oy Virtual reality streaming
US20180350044A1 (en) * 2017-06-02 2018-12-06 Wisconsin Alumni Research Foundation Systems, methods, and media for hierarchical progressive point cloud rendering
US20190018680A1 (en) * 2017-07-12 2019-01-17 Topcon Positioning Systems, Inc. Point cloud data method and apparatus
WO2019016820A1 (en) * 2017-07-20 2019-01-24 Alon Melchner A METHOD FOR PLACING, TRACKING AND PRESENTING IMMERSIVE REALITY-VIRTUALITY CONTINUUM-BASED ENVIRONMENT WITH IoT AND/OR OTHER SENSORS INSTEAD OF CAMERA OR VISUAL PROCCESING AND METHODS THEREOF
US20190043203A1 (en) * 2018-01-12 2019-02-07 Intel Corporation Method and system of recurrent semantic segmentation for image processing
CN109582134A (en) * 2018-11-09 2019-04-05 北京小米移动软件有限公司 The method, apparatus and display equipment that information is shown
US20190221039A1 (en) * 2018-01-15 2019-07-18 Worcester Polytechnic Institute Visualization of network data as a three-dimensional hierarchical data structure in a mixed reality environment
US20190311796A1 (en) * 2017-03-20 2019-10-10 SK Commercial Construction, Inc. Color analysis and control using an electronic mobile device transparent display screen integral with the use of augmented reality glasses
WO2019242262A1 (en) * 2018-06-19 2019-12-26 亮风台(上海)信息科技有限公司 Augmented reality-based remote guidance method and device, terminal, and storage medium
US20200034617A1 (en) * 2018-07-24 2020-01-30 Apical Ltd Processing image data to perform object detection

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160267720A1 (en) * 2004-01-30 2016-09-15 Electronic Scripting Products, Inc. Pleasant and Realistic Virtual/Augmented/Mixed Reality Experience
US20150123968A1 (en) * 2013-11-07 2015-05-07 Autodesk, Inc. Occlusion render mechanism for point clouds
US20150125071A1 (en) * 2013-11-07 2015-05-07 Autodesk, Inc. Pre-segment point cloud data to run real-time shape extraction faster
US20160210784A1 (en) * 2015-01-20 2016-07-21 Scott Ramsby Augmented reality field of view object follower
WO2017168038A1 (en) * 2016-03-31 2017-10-05 Umbra Software Oy Virtual reality streaming
US20190311796A1 (en) * 2017-03-20 2019-10-10 SK Commercial Construction, Inc. Color analysis and control using an electronic mobile device transparent display screen integral with the use of augmented reality glasses
US20180350044A1 (en) * 2017-06-02 2018-12-06 Wisconsin Alumni Research Foundation Systems, methods, and media for hierarchical progressive point cloud rendering
US20190018680A1 (en) * 2017-07-12 2019-01-17 Topcon Positioning Systems, Inc. Point cloud data method and apparatus
WO2019016820A1 (en) * 2017-07-20 2019-01-24 Alon Melchner A METHOD FOR PLACING, TRACKING AND PRESENTING IMMERSIVE REALITY-VIRTUALITY CONTINUUM-BASED ENVIRONMENT WITH IoT AND/OR OTHER SENSORS INSTEAD OF CAMERA OR VISUAL PROCCESING AND METHODS THEREOF
US20190043203A1 (en) * 2018-01-12 2019-02-07 Intel Corporation Method and system of recurrent semantic segmentation for image processing
US20190221039A1 (en) * 2018-01-15 2019-07-18 Worcester Polytechnic Institute Visualization of network data as a three-dimensional hierarchical data structure in a mixed reality environment
WO2019242262A1 (en) * 2018-06-19 2019-12-26 亮风台(上海)信息科技有限公司 Augmented reality-based remote guidance method and device, terminal, and storage medium
US20200034617A1 (en) * 2018-07-24 2020-01-30 Apical Ltd Processing image data to perform object detection
CN109582134A (en) * 2018-11-09 2019-04-05 北京小米移动软件有限公司 The method, apparatus and display equipment that information is shown

Also Published As

Publication number Publication date
CN111625093B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
JP4643271B2 (en) Visible surface determination system and method for computer graphics using interval analysis
CN106780709B (en) A kind of method and device of determining global illumination information
EP2051533B1 (en) 3D image rendering apparatus and method
KR100888528B1 (en) Apparatus, method, application program and computer readable medium thereof capable of pre-storing data for generating self-shadow of a 3D object
CN102289845B (en) Three-dimensional model drawing method and device
US9684997B2 (en) Efficient rendering of volumetric elements
JP5055214B2 (en) Image processing apparatus and image processing method
US9361665B2 (en) Methods and systems for viewing a three-dimensional (3D) virtual object
US9508191B2 (en) Optimal point density using camera proximity for point-based global illumination
US11704883B2 (en) Methods and systems for reprojection in augmented-reality displays
JP2012190428A (en) Stereoscopic image visual effect processing method
US10325403B2 (en) Image based rendering techniques for virtual reality
EP4213102A1 (en) Rendering method and apparatus, and device
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
CN104157000A (en) Model surface normal calculating method
US9401044B1 (en) Method for conformal visualization
KR100693134B1 (en) Three dimensional image processing
CN111625093B (en) Dynamic scheduling display method of massive digital point cloud data in MR (mixed reality) glasses
US11367262B2 (en) Multi-dimensional acceleration structure
JP5481751B2 (en) Concealment processing program, visualization processing method and apparatus
CN117557740B (en) Three-dimensional model segmentation level switching method and device, electronic equipment and storage medium
CN115423917B (en) Real-time drawing method and system for global three-dimensional wind field
CN116630503A (en) Digital scene 3D model rendering system and method
Binder et al. Massively Parallel Stackless Ray Tracing of Catmull-Clark Subdivision Surfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant