CN111176426A - Three-dimensional space drawing method based on a near-eye display device, and near-eye display device


Info

Publication number
CN111176426A
CN111176426A (application CN201811338150.8A)
Authority
CN
China
Prior art keywords: data, virtual object, environment, coordinate system, near-eye display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811338150.8A
Other languages
Chinese (zh)
Inventor
丁建雄
张本好
陈远
胡增新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sunny Optical Zhejiang Research Institute Co Ltd
Original Assignee
Sunny Optical Zhejiang Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sunny Optical Zhejiang Research Institute Co Ltd
Priority to CN201811338150.8A
Publication of CN111176426A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 Indexing scheme relating to G06F 3/048
    • G06F 2203/04802 3D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A three-dimensional space drawing method based on a near-eye display device, and the near-eye display device, are provided. The three-dimensional space drawing method comprises the following steps: estimating the pose of the near-eye display device in a real environment to obtain pose data relative to an environment coordinate system; tracking the motion trajectory of a drawing tool in the real environment to obtain motion trajectory data relative to the environment coordinate system; processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system; and fusion-displaying a virtual object in the real environment based on the virtual object data, wherein the virtual object appears real relative to the real environment visible through the near-eye display device.

Description

Three-dimensional space drawing method based on a near-eye display device, and near-eye display device
Technical Field
The invention relates to the technical field of drawing, and in particular to a three-dimensional space drawing method based on a near-eye display device, and to the near-eye display device itself.
Background
In the field of traditional painting, existing methods are mainly two-dimensional: painting on paper or on a flat tablet. Sculpture, as an extension of traditional planar painting, can to some extent expand artistic creation into three-dimensional space. However, although sculpture is expressive in three dimensions, it is limited by the spatial attributes of its carrier and does not allow free creation in a real environment.
In addition, although existing perspective-drawing techniques can convey a sense of space through perspective relationships, the work is still expressed on a two-dimensional plane and cannot truly escape that plane's limitations. There is therefore a pressing need for a method of drawing in three-dimensional space within a real environment, freeing artistic creation from the limitations of the imaging carrier.
Disclosure of Invention
An object of the present invention is to provide a three-dimensional space drawing method based on a near-eye display device, and a near-eye display device, which enable three-dimensional creation by means of the near-eye display device and help fill the gap in real-time three-dimensional spatial drawing.
Another object of the present invention is to provide a three-dimensional space drawing method based on a near-eye display device, and a near-eye display device, which follow conventional drawing practice, so that a user can create according to existing drawing habits, with no additional learning cost and a quick start.
Another object of the present invention is to provide a three-dimensional space drawing method based on a near-eye display device, and a near-eye display device, which enable a user to create freely in a real environment without the limitation of an imaging carrier.
Another object of the present invention is to provide a three-dimensional space drawing method based on a near-eye display device, and a near-eye display device, which offer users a completely new experience and can stimulate their enthusiasm for drawing.
Another object of the present invention is to provide a three-dimensional space drawing method based on a near-eye display device, and a near-eye display device, which let a user see the drawn virtual object in real time during the drawing process, providing timely feedback.
Another object of the present invention is to provide a near-eye display device and a three-dimensional space drawing method based thereon, wherein, in an embodiment of the present invention, the near-eye display device provides a more immersive creative environment for the user, helping to improve the user experience.
To achieve at least one of the above objects or other objects and advantages, the present invention provides a three-dimensional space drawing method based on a near-eye display device, including the steps of:
estimating the pose of the near-eye display device in a real environment to obtain pose data relative to an environment coordinate system;
tracking the motion trajectory of a drawing tool in the real environment to obtain motion trajectory data relative to the environment coordinate system;
processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system; and
based on the virtual object data, a virtual object is fusion displayed in the real environment, wherein the virtual object appears real relative to the real environment visible through the near-eye display device.
In some embodiments of the invention, the step of processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system comprises the steps of:
performing pose transformation on the motion trajectory data based on the pose data to obtain motion trajectory data relative to the device coordinate system;
fitting the motion trajectory data relative to the device coordinate system to obtain virtual line data relative to the device coordinate system; and
assigning line parameters to the virtual line data to obtain the virtual object data.
In some embodiments of the invention, the step of processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system comprises the steps of:
fitting the motion trajectory data to obtain virtual line data relative to an environment coordinate system;
performing pose transformation on the virtual line data based on the pose data to obtain virtual line data relative to a device coordinate system; and
assigning line parameters to the virtual line data relative to the device coordinate system to obtain the virtual object data.
In some embodiments of the present invention, the step of processing the pose data and the motion trajectory data to obtain virtual object data includes the steps of:
fitting the motion trajectory data to obtain virtual line data relative to an environment coordinate system;
assigning line parameters to the virtual line data to obtain virtual object data relative to an environment coordinate system; and
performing pose transformation on the virtual object data relative to the environment coordinate system, based on the pose data, to obtain the virtual object data relative to the device coordinate system.
In some embodiments of the invention, the line parameters are selected from any one of, or a combination of, line color, line thickness, and line type.
In some embodiments of the present invention, before the step of fusion displaying the virtual object in the real environment based on the virtual object data, the method further includes the steps of:
acquiring information of the real environment to obtain environment data of the real environment; and
based on the environment data, real physical attributes are assigned to the virtual object data such that the virtual object has physical attributes of a real object.
In some embodiments of the present invention, the real physical property is selected from any one of, or a combination of, a volume property, a gravity property, a lighting property, and a material property.
In some embodiments of the present invention, the step of fusion displaying the virtual object in the real environment based on the virtual object data includes the steps of:
constructing a new real environment based on the environment data; and
based on the virtual object data, the virtual object is fusion displayed in the new real environment such that the virtual object appears real relative to the new real environment visible through the near-eye display device.
In some embodiments of the present invention, the three-dimensional space drawing method further includes the steps of:
saving the environment data and the virtual object data, so that the virtual object can be output fused with the new real environment.
In some embodiments of the present invention, the three-dimensional space drawing method further includes the steps of:
saving the virtual object data so that the virtual object can be output separately.
In some embodiments of the present invention, the three-dimensional space drawing method further includes the steps of:
the method includes displaying a pre-fabricated virtual object in a fused manner in the real environment, wherein the pre-fabricated virtual object appears real relative to the real environment visible through the near-eye display device.
According to another aspect of the present invention, there is also provided a near-eye display device for drawing in a real environment, comprising:
a pose estimation module for estimating the pose of the near-eye display device in the real environment to obtain pose data relative to an environment coordinate system;
a target tracking module for tracking the motion trajectory of the drawing tool in the real environment to obtain motion trajectory data relative to the environment coordinate system;
a processing module for processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system; and
a see-through display for fusion displaying a virtual object in the real environment based on the virtual object data, wherein the virtual object appears real relative to the real environment visible through the near-eye display device.
In some embodiments of the present invention, the processing module includes a pose transformation module, a data fitting module, and a first assignment module, wherein the pose transformation module is configured to perform pose transformation on the motion trajectory data based on the pose data to obtain motion trajectory data relative to a device coordinate system; the data fitting module is used for fitting the motion trail data relative to the equipment coordinate system to obtain virtual line data relative to the equipment coordinate system; the first assignment module is used for assigning line parameters to the virtual line data so as to obtain the virtual object data.
In some embodiments of the present invention, the processing module includes a pose transformation module, a data fitting module, and a first assignment module, where the data fitting module is configured to fit the motion trajectory data to obtain virtual line data relative to an environment coordinate system; the pose transformation module is configured to perform pose transformation on the virtual line data based on the pose data to obtain virtual line data relative to a device coordinate system; and the first assignment module is configured to assign line parameters to the virtual line data relative to the device coordinate system to obtain the virtual object data.
In some embodiments of the present invention, the processing module includes a pose transformation module, a data fitting module, and a first assignment module, where the data fitting module is configured to fit the motion trajectory data to obtain virtual line data relative to an environment coordinate system; the first assignment module is configured to assign line parameters to the virtual line data to obtain virtual object data relative to the environment coordinate system; and the pose transformation module is configured to perform pose transformation on the virtual object data relative to the environment coordinate system, based on the pose data, to obtain the virtual object data relative to the device coordinate system.
In some embodiments of the present invention, the near-eye display device further includes an environment acquisition module, configured to acquire information of the real environment to obtain environment data of the real environment; the processing module further comprises a second assignment module for assigning real physical attributes to the virtual object data based on the environment data, so that the virtual object has physical attributes of the real object.
In some embodiments of the invention, the processing module further comprises an environment construction module for constructing a new real environment based on the environment data, wherein the see-through display is further for fusion displaying the virtual object in the new real environment based on the virtual object data such that the virtual object appears real with respect to the new real environment visible through the near-eye display device.
In some embodiments of the present invention, the near-eye display device further includes a storage module, wherein the storage module is configured to store the virtual object data, so that the virtual object can be output separately.
Further objects and advantages of the invention will be fully apparent from the ensuing description and drawings.
These and other objects, features and advantages of the present invention will become more fully apparent from the following detailed description, the accompanying drawings and the claims.
Drawings
Fig. 1 is a flowchart illustrating a three-dimensional space drawing method based on a near-eye display device according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating processing steps of the three-dimensional space drawing method based on the near-eye display device according to the embodiment of the invention.
Fig. 3 shows an example of processing steps of the three-dimensional space drawing method based on the near-eye display device according to the above-described embodiment of the present invention.
Fig. 4 shows another example of processing steps of the near-eye display device-based three-dimensional space drawing method according to the above-described embodiment of the present invention.
Fig. 5 is a flowchart illustrating a three-dimensional space drawing method based on a near-eye display device according to another embodiment of the present invention.
Fig. 6 is a flowchart illustrating a fusion display step of the three-dimensional space drawing method based on the near-eye display device according to another embodiment of the present invention.
Fig. 7 is a system diagram of a near-eye display device according to an embodiment of the invention.
Fig. 8 illustrates an example of a near-eye display device according to an embodiment of the present invention.
FIG. 9 illustrates an example of a computing system according to an embodiment of the invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
In the present invention, the terms "a" and "an" in the claims and the description should be understood as meaning "one or more": an element may be singular in number in one embodiment and plural in another. Unless the number of an element is explicitly recited as one in the present disclosure, the terms "a" and "an" are not to be construed as limiting that element to a single instance.
In the description of the present invention, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless explicitly stated or limited otherwise, terms such as "connected" are to be interpreted broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; and direct, or indirect through an intermediary. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the description herein, references to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, such terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification, provided they do not contradict one another.
Currently, various technologies allow a user to experience a mixture of reality and virtual reality. For example, some display devices, such as various near-eye display (NED) devices, can superimpose virtual objects on a real environment so that, viewed through the device, the virtual objects appear integrated with the real environment. However, existing near-eye display devices are generally applied in the field of games: a prefabricated virtual object is placed by scanning a marker and then displayed in the real environment through AR technology to achieve virtual-real fusion. For deeper creative applications, however, existing near-eye display devices do not enable free drawing in the real environment.
Referring to fig. 1 to 4, a three-dimensional space drawing method based on a near-eye display device according to an embodiment of the present invention is illustrated. Specifically, as shown in fig. 1, the three-dimensional space drawing method based on the near-eye display device includes the steps of:
s110: acquiring the pose of the near-eye display equipment in a real environment to acquire pose data relative to an environment coordinate system;
s120: obtaining motion trail data relative to the environment coordinate system by tracking the motion trail of a drawing tool in the real environment;
s130: processing the pose data and the motion trajectory data to obtain virtual object data relative to an equipment coordinate system; and
s140: based on the virtual object data, a virtual object is fusion displayed in the real environment, wherein the virtual object appears real relative to the real environment visible through the near-eye display device.
It is worth noting that a user wearing the near-eye display device usually moves or turns while drawing, so the pose of the device in the real environment keeps changing; yet a virtual object displayed by the device is intended to appear real in that environment (for example, its position should stay fixed relative to the real environment visible through the device). The motion trajectory data must therefore be pose-transformed, based on the pose data, between the environment coordinate system (the reference frame anchored to the real environment) and the device coordinate system (the reference frame anchored to the near-eye display device). Accordingly, the three-dimensional space drawing method estimates the pose of the near-eye display device in the real environment to obtain pose data relative to the environment coordinate system, so that the device's pose is known in real time.
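For concreteness, and using a common robotics convention rather than notation from the patent: if the pose of the near-eye display device in the environment frame is a rotation $R$ and a translation $t$, a point is converted between the two frames by

$$p_{dev} = R^{\top}\,(p_{env} - t), \qquad p_{env} = R\,p_{dev} + t.$$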
For example, in step S110, a visual pose estimation method may be adopted to estimate the pose of the near-eye display device in the real environment and obtain its pose data relative to the environment coordinate system. A visual method may perform feature-point matching across video frames or images obtained in real time by a camera sensor mounted on the near-eye display device, deriving frame-to-frame motion estimates and hence a pose estimate for the camera sensor. Since the camera sensor is rigidly fixed to the near-eye display device, the motion of the device itself can then be estimated, yielding the pose data (such as displacement and rotation vectors relative to the environment coordinate system). Of course, visual pose estimation may also use a camera sensor not mounted on the near-eye display device: for example, a camera fixed in the real environment may capture images of the device in real time and estimate its pose from multiple frames. The present invention imposes no further limitation here.
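As one purely illustrative way to realize such frame-to-frame visual pose estimation, the sketch below matches ORB features between consecutive camera frames and decomposes the essential matrix with OpenCV. The function name and the assumption of a known intrinsic matrix K are ours, not the patent's, and monocular translation is recovered only up to scale.

```python
# Minimal sketch of frame-to-frame visual pose estimation (illustrative).
import cv2
import numpy as np

def estimate_relative_pose(prev_gray, curr_gray, K):
    """Estimate camera rotation R and translation direction t between two
    grayscale frames, given the 3x3 intrinsic matrix K."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match binary descriptors between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC on the essential matrix rejects outlier matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation matrix and unit-scale displacement direction
```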
Of course, according to other examples of the present invention, in step S110 an inertial pose estimation method may also be used to estimate the pose of the near-eye display device in the real environment and obtain its pose data relative to the environment coordinate system. For example, an inertial method may obtain the rotation and acceleration of the near-eye display device in the real environment through an Inertial Measurement Unit (IMU) mounted on the device, and then perform motion estimation over rotation and displacement to obtain a pose estimate for the IMU. Since the IMU is rigidly fixed to the near-eye display device, the motion of the device can likewise be estimated, yielding the pose data (such as displacement and rotation vectors relative to the environment coordinate system).
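A minimal sketch of that inertial path follows, assuming calibrated, bias-free gyroscope and accelerometer samples (a strong simplification; real systems estimate sensor biases and align gravity). All names are illustrative.

```python
# Hypothetical one-step inertial dead reckoning (illustrative assumptions).
import numpy as np

def integrate_imu(state, gyro, accel, dt, gravity=np.array([0.0, 0.0, -9.81])):
    """state = (R, v, p): rotation matrix, velocity, position in the
    environment frame. gyro in rad/s, accel in m/s^2, both body-frame."""
    R, v, p = state

    # First-order orientation update from the angular rate (skew matrix).
    wx, wy, wz = gyro
    omega = np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
    R = R @ (np.eye(3) + omega * dt)

    # Rotate body-frame acceleration into the environment frame,
    # remove gravity, then integrate twice.
    a_env = R @ accel + gravity
    v = v + a_env * dt
    p = p + v * dt + 0.5 * a_env * dt * dt
    return (R, v, p)
```

Pure dead reckoning of this kind drifts quickly, since accelerometer noise integrates twice into position error; this is exactly what motivates combining it with the visual method, as discussed next.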
In addition, although both the visual and the inertial pose estimation methods can estimate the pose of the near-eye display device, each carries estimation error, making the pose estimate inaccurate. Therefore, in some other examples of the present invention, step S110 may combine the two methods into a visual-inertial pose estimation method that performs joint motion estimation and obtains a more accurate result. It should be understood that pose estimation can be realized in many ways; the invention is not limited to the solutions described above, and any pose estimation method falls within its scope of protection.
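The simplest such combination is a complementary filter, sketched below under our own naming; production visual-inertial systems typically use an extended Kalman filter or sliding-window optimization instead.

```python
# Hypothetical complementary-filter fusion of the two position estimates.
def fuse_position(p_fused_prev, imu_delta, p_visual, alpha=0.98):
    """One fusion step. The inertial increment (smooth, high-rate, drifting)
    propagates the estimate, while the visual fix (noisier but drift-free)
    slowly pulls it back, so their complementary errors largely cancel."""
    return alpha * (p_fused_prev + imu_delta) + (1.0 - alpha) * p_visual
```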
To realize three-dimensional drawing in a real environment, the invention tracks the motion trajectory of a drawing tool in the real environment to obtain motion trajectory data relative to the environment coordinate system, and then processes that data so that a virtual line reproduces the tool's trajectory, thereby drawing a virtual object in the real environment.
Illustratively, in step S120, a target tracking technique is used to track the motion trajectory of the drawing tool in the real environment and obtain motion trajectory data relative to the environment coordinate system. For example, the tracker may use sensors such as a camera sensor, an inertial sensor, and/or a position detection sensor to follow the position of the drawing tool in the real environment in real time, producing the motion trajectory data that the near-eye display device then processes to draw a virtual object. In other examples of the present invention, the target tracking technique may also use machine learning to build a predictive model of the tracked drawing tool, achieving accurate real-time tracking.
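As a hypothetical sketch of such tracking with an RGB-D camera: segment a distinctively colored brush tip, take its pixel centroid, and back-project it through the depth map. The HSV range, the intrinsic matrix K, and all names are illustrative assumptions, not the patent's.

```python
# Illustrative brush-tip tracking in an RGB-D stream.
import cv2
import numpy as np

def track_brush_tip(bgr, depth, K, hsv_lo=(100, 120, 80), hsv_hi=(130, 255, 255)):
    """Return the brush tip's 3D position in the camera frame, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))

    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        return None  # tip not visible in this frame
    u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]  # pixel centroid

    z = float(depth[int(v), int(u)])  # metric depth at the centroid
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])  # back-project
```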
It is worth noting that, because the present invention tracks the motion trajectory of the drawing tool and draws the virtual object from that trajectory data, the three-dimensional space drawing method is not limited by an imaging carrier (such as a paper surface) and can expand the drawing space into the entire real environment, achieving free drawing in three-dimensional space. The drawing tool is the input device for drawing and functions like a traditional brush, but it can take many forms, which may broadly be classified as natural or digital.
Specifically, the natural form uses a non-digital object such as a finger, a paintbrush, or a finger cot as the drawing input. That is, a natural-form drawing tool is a real object trackable by the target tracking technique (for example, the user's finger, a finger cot, or a paintbrush, none of which can report its own position), so the near-eye display device or an external device must track the tool's position in the real environment to obtain its motion trajectory data. This offers users a completely new experience and can stimulate their enthusiasm for drawing.
The digital form uses a digital device such as an optoelectronic stylus or a brush fitted with an IMU as the drawing input. That is, a digital-form drawing tool can track its own position, so it follows its own motion trajectory in the real environment without help from the near-eye display device or an external device and transmits the digitized motion trajectory data directly to the near-eye display device.
To let users keep their original creative habits when drawing with the three-dimensional space drawing method of the present invention, the near-eye display device needs to obtain the color and/or thickness information of the drawing tool in addition to its motion trajectory data, so that the drawn virtual object has the corresponding color and/or thickness, which helps enrich the content of the drawing.
Illustratively, a user can paint with natural-form brushes of different colors. When the user paints with a red brush, the near-eye display device tracks the red brush's motion trajectory to obtain its trajectory data and also captures the brush's color, so as to draw a red virtual line; when the user switches to a blue brush, the device tracks the blue brush and captures its color, drawing a blue virtual line; and over repeated strokes a bright, colorful painting can be drawn in the real environment. This is just like painting a colored scroll on paper: the creator need not change existing drawing habits, incurs no extra learning cost, and gets started quickly.
Of course, in other examples of the present invention, a user may instead send a color instruction directly to the near-eye display device through a digital-form drawing tool, and the device draws a virtual line of the corresponding color in response, so that a colored work can be created without changing tools.
It is worth mentioning that the motion trajectory data obtained by the target tracking technique above is generally position data of the drawing tool relative to the environment coordinate system, while the virtual object drawn from that data must be fusion-displayed in the real environment through the near-eye display device. Because the device's pose changes as the user moves or turns, a virtual object rendered naively would appear to move relative to the real environment, which contradicts human habits of observation and drawing. Therefore, to make the virtual object visible through the near-eye display device appear fixed relative to the real environment, the obtained motion trajectory data must be pose-transformed, based on the device's pose data, into motion trajectory data relative to the device coordinate system, from which the virtual object data relative to the device coordinate system is obtained.
Specifically, as shown in fig. 2, in this embodiment of the present invention, the step S130 includes the steps of:
S131: performing pose transformation on the motion trajectory data based on the pose data to obtain motion trajectory data relative to a device coordinate system;
S132: fitting the motion trajectory data relative to the device coordinate system to obtain virtual line data relative to the device coordinate system; and
S133: assigning line parameters to the virtual line data to obtain the virtual object data.
Illustratively, after obtaining the pose data and the motion trajectory data of the drawing tool, the near-eye display device first pose-transforms the motion trajectory data relative to the environment coordinate system, according to the device's pose data relative to that coordinate system, into motion trajectory data relative to the device coordinate system, which comprises a set of position coordinates in the device frame. Next, following the time order of the trajectory, the position coordinates are fitted (for example, with spline processing) to obtain virtual line data relative to the device coordinate system, allowing the near-eye display device to reproduce the drawing tool's trajectory in the device frame. Finally, line parameters such as line color, line thickness, and/or line type are assigned to the virtual line data to obtain the virtual object data relative to the device coordinate system, so that the device can draw the corresponding virtual object: the virtual lines, given color, thickness, and/or type, build up virtual objects with the intended shapes and characteristics.
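The following sketch strings the three steps S131-S133 together. SciPy's parametric spline stands in for the unspecified fitting, and the stroke dictionary, function, and parameter names are our illustrative assumptions.

```python
# Illustrative pipeline for S131 (transform), S132 (fit), S133 (parameters).
import numpy as np
from scipy.interpolate import splprep, splev

def build_virtual_object(points_env, R_dev, t_dev, color=(255, 0, 0), width=2.0):
    """points_env: (N, 3) time-ordered trajectory samples in the environment
    frame (N >= 4 for a cubic spline). R_dev, t_dev: device pose (rotation,
    translation) in the environment frame, from step S110."""
    # S131: pose transform into the device coordinate system.
    points_dev = (points_env - t_dev) @ R_dev  # row-vector form of R^T (p - t)

    # S132: fit a parametric spline through the time-ordered samples.
    tck, _ = splprep(points_dev.T, s=0.0)
    u = np.linspace(0.0, 1.0, 10 * len(points_dev))
    curve = np.stack(splev(u, tck), axis=1)  # densely sampled virtual line

    # S133: assign line parameters to obtain the virtual object data.
    return {"vertices": curve, "color": color, "width": width, "style": "solid"}
```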
It is noted that, as shown in fig. 3, in an example of the present invention, the step S130 includes the steps of:
S131': fitting the motion trajectory data to obtain virtual line data relative to an environment coordinate system;
S132': performing pose transformation on the virtual line data based on the pose data to obtain virtual line data relative to a device coordinate system; and
S133': assigning line parameters to the virtual line data relative to the device coordinate system to obtain the virtual object data.
In this example of the present invention, after obtaining the pose data and the motion trajectory data of the drawing tool, the near-eye display device first fits the position coordinates in time order (for example, with spline processing) to obtain virtual line data relative to the environment coordinate system, so that the device can reproduce the drawing tool's trajectory in the environment frame. It then pose-transforms that virtual line data, according to the device's pose data relative to the environment coordinate system, into virtual line data relative to the device coordinate system, so the trajectory can be reproduced in the device frame. Finally, line parameters such as line color, line thickness, and/or line type are assigned to the virtual line data to obtain the virtual object data relative to the device coordinate system, from which the device draws the virtual object: the virtual lines, given color, thickness, and/or type, build up virtual objects with the intended shapes and characteristics.
Of course, as shown in fig. 4, in another example of the present invention, the step S130 includes the steps of:
S131': fitting the motion trajectory data to obtain virtual line data relative to an environment coordinate system;
S132': assigning line parameters to the virtual line data to obtain virtual object data relative to the environment coordinate system; and
S133': performing pose transformation on the virtual object data relative to the environment coordinate system, based on the pose data, to obtain the virtual object data relative to the device coordinate system.
In this example of the present invention, after obtaining the pose data and the motion trajectory data of the drawing tool, the near-eye display device first processes the motion trajectory data relative to the environment coordinate system to obtain virtual object data relative to the environment coordinate system, and then pose-transforms that virtual object data, according to the device's pose data relative to the environment coordinate system, to obtain the virtual object data relative to the device coordinate system, from which the device draws the virtual object.
According to the above embodiment of the present invention, once step S130 has produced the virtual object data relative to the device coordinate system, the near-eye display device fusion-displays in the real environment, based on that data, a virtual object that appears real relative to the real environment visible through the device. Because the virtual object data has been pose-transformed into the device coordinate system, the displayed virtual object appears stationary relative to the real environment no matter how the device's pose changes. Moreover, since the see-through display of the near-eye display device is transparent or semi-transparent, the user views the real environment directly through it, and the device thus displays the virtual object fused into the real environment.
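One way to read this fusion-display step is as a per-frame re-transformation: since the device pose changes every frame, a world-anchored stroke is re-expressed in the device frame before each render, which is why it appears stationary. The sketch below is our illustration of that reading; `display.draw_polyline` stands in for whatever rendering API the device exposes.

```python
# Hypothetical per-frame display step (illustrative, not the patent's API).
def render_frame(display, strokes_env, R, t):
    """strokes_env: strokes stored in the environment frame; R, t: the
    current device pose in that frame (from step S110)."""
    for stroke in strokes_env:
        verts_dev = (stroke["vertices"] - t) @ R  # same transform as S131
        display.draw_polyline(verts_dev, stroke["color"], stroke["width"])
```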
It should be noted that, in the above embodiment of the present invention, as shown in fig. 1, the three-dimensional space drawing method may further include the step of S150: saving the virtual object data so that the virtual object can be output separately. In this way, after the virtual object data is obtained, the near-eye display device saves it in a format such as a video stream or a picture, as a digitized pictorial work that can be enjoyed or re-worked at any time.
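A minimal sketch of such saving follows, reusing the hypothetical stroke dictionary from the pipeline sketch above; the JSON layout is an illustrative assumption, not a format specified by the patent.

```python
# Illustrative serialization of the drawn strokes for standalone output.
import json

def save_artwork(path, virtual_objects):
    """virtual_objects: list of dicts like those from build_virtual_object()."""
    serializable = [
        {
            "vertices": obj["vertices"].tolist(),  # numpy array -> nested lists
            "color": obj["color"],
            "width": obj["width"],
            "style": obj["style"],
        }
        for obj in virtual_objects
    ]
    with open(path, "w") as f:
        json.dump({"version": 1, "strokes": serializable}, f)
```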
Of course, in other examples of the present invention, the three-dimensional space drawing method may further include the step of fusion-displaying a prefabricated virtual object in the real environment, wherein the position of the prefabricated virtual object appears fixed relative to the real environment visible through the near-eye display device. In this way, during three-dimensional drawing, not only can the virtual object being drawn be fusion-displayed in the real environment in real time, but virtual objects drawn in advance can also be modularized and fusion-displayed together with it, enriching the drawing content and lowering the drawing difficulty.
It is worth mentioning that, to better fusion-display the virtual object in the real environment, the three-dimensional space drawing method further includes, before the fusion display, the steps of acquiring information of the real environment to obtain environment data, and assigning real physical attributes to the virtual object data based on that environment data, so that the virtual object has the physical attributes of a real object. This improves the fusion between the virtual object and the real environment: the virtual object looks real relative to the real environment visible through the near-eye display device, giving the user a more immersive creative environment and helping to improve the user experience.
Illustratively, in another embodiment of the present invention, as shown in fig. 5, the three-dimensional space drawing method includes the steps of:
s210: acquiring the environment data of the real environment by acquiring the information of the real environment;
s220: acquiring the pose of the near-eye display equipment in a real environment to acquire pose data relative to an environment coordinate system;
s230: obtaining motion trail data relative to the environment coordinate system by tracking the motion trail of a drawing tool in the real environment;
s240: processing the pose data and the motion trajectory data to obtain virtual object data relative to an equipment coordinate system;
s250: assigning real physical attributes to the virtual object data based on the environmental data such that the virtual object has physical attributes of a real object; and
s260: based on the virtual object data, a virtual object is fusion displayed in the real environment, wherein the virtual object appears real relative to the real environment visible through the near-eye display device.
In this example of the invention, the real physical property may be, but is not limited to, a volume property, a gravity property, an illumination property, and/or a material property of a real object. For example, when the gravity attribute is assigned to the virtual object data, the virtual object acquires the weight of a real object: it cannot hover in the real environment, but falls under gravity and comes to rest only when supported by another real object so that the forces balance, making the fusion between the virtual object and the real environment more realistic.
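A toy sketch of that gravity behavior, under our own simplifying assumptions (1D vertical motion, a known supporting-surface height taken from the environment data):

```python
# Illustrative gravity attribute: the object falls until a real surface
# (known from the environment data) supports it.
def settle_under_gravity(obj_y, support_height, v, g=9.81, dt=1.0 / 60.0):
    """Advance one frame; returns the updated (height, velocity)."""
    v -= g * dt
    obj_y += v * dt
    if obj_y <= support_height:         # reached a supporting real surface
        obj_y, v = support_height, 0.0  # rest there: forces are balanced
    return obj_y, v
```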
It is noted that, in the present invention, if the user cannot view the real environment directly through the near-eye display device (for example, when the see-through display of the device is opaque), then, as shown in fig. 6, the step S260 of the three-dimensional space drawing method may include the steps of:
S261: constructing a new real environment based on the environment data; and
S262: fusion-displaying the virtual object in the new real environment based on the virtual object data, so that the virtual object appears real relative to the new real environment visible through the near-eye display device.
Illustratively, the near-eye display device may acquire information (e.g., color information, depth information, etc.) of the real environment by using an environment acquisition module such as an RGB-D sensor, a lidar sensor, a binocular sensor, etc., to obtain environment data of the real environment, and then construct a new real environment (i.e., a three-dimensional space environment) based on the environment data, so as to fusion-display the virtual object in the new real environment through a see-through display of the near-eye display device. In this way, even if the see-through display of the near-eye display device is opaque, the user can still view the virtual object displayed fused in the real environment through the near-eye display device.
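As one illustrative way to construct such a "new real environment" from RGB-D data: back-project every depth pixel into a colored 3D point cloud that an opaque display can render. The names and intrinsics handling below are our assumptions.

```python
# Illustrative RGB-D back-projection into a colored point cloud.
import numpy as np

def depth_to_point_cloud(depth, rgb, K):
    """depth: (H, W) metric depth; rgb: (H, W, 3) colors; K: 3x3 intrinsics."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    z = depth
    x = (u - cx) * z / fx  # pinhole back-projection
    y = (v - cy) * z / fy

    valid = z > 0  # keep only pixels with a depth reading
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid]
    return points, colors
```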
Further, the near-eye display device can save the virtual object data alone, so that the virtual object can be output by itself as a pictorial work, or save the virtual object data together with the environment data, so that the virtual object fused with the new real environment can be output as a complete pictorial work.
Referring to fig. 7, a near-eye display device according to an embodiment of the invention is illustrated, wherein the near-eye display device 300 may comprise the see-through display 310, a pose estimation module 320, a target tracking module 330, and a processing module 340 to enable a user to perform three-dimensional space drawing in a real environment by means of the near-eye display device 300.
Specifically, the pose estimation module 320 is configured to acquire the pose of the near-eye display device 300 in the real environment to obtain pose data relative to an environment coordinate system. The target tracking module 330 is configured to track a motion trajectory of the drawing tool in the real environment to obtain motion trajectory data relative to the environment coordinate system; the processing module 340 is configured to process the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system; and the see-through display 310 is for fusion displaying a virtual object in the real environment based on the virtual object data, wherein the virtual object appears real relative to the real environment visible through the near-eye display device 300.
Furthermore, in an example of the present invention, the see-through display 310 is further configured to fusion-display a prefabricated virtual object in the real environment, wherein the prefabricated virtual object appears real relative to the real environment visible through the near-eye display device.
In an example of the present invention, the processing module 340 includes a pose transformation module 341, a data fitting module 342, and a first assignment module 343, where the pose transformation module 341 is configured to perform pose transformation on the motion trajectory data based on the pose data to obtain motion trajectory data relative to a device coordinate system; the data fitting module 342 is configured to perform fitting processing on the motion trajectory data relative to the device coordinate system to obtain virtual line data relative to the device coordinate system; the first assignment module 343 is configured to assign a line parameter to the virtual line data, so as to obtain the virtual object data.
In an example of the present invention, the processing module 340 further includes a pose transformation module 341, a data fitting module 342, and a first assignment module 343, where the data fitting module 342 is configured to perform a fitting process on the motion trajectory data to obtain virtual line data relative to an environment coordinate system; the pose transformation module 341 is configured to perform pose transformation on the virtual line data based on the pose data to obtain virtual line data relative to a device coordinate system; the first assignment module 343 is configured to assign line parameters to the virtual line data corresponding to the device coordinate system, so as to obtain the virtual object data.
In an example of the present invention, the processing module 340 further includes a pose transformation module 341, a data fitting module 342, and a first assignment module 343, where the data fitting module 342 is configured to perform a fitting process on the motion trajectory data to obtain virtual line data relative to an environment coordinate system; the first assignment module 343 is configured to assign a line parameter to the virtual line data, so as to obtain virtual object data corresponding to an environment coordinate system; the pose transformation module 341 is configured to perform pose transformation on the virtual object data relative to the environment coordinate system based on the pose data to obtain the virtual object data relative to the device coordinate system.
Further, in this embodiment of the present invention, as shown in fig. 7, the near-eye display device 300 may further include an environment acquisition module 350 for acquiring information of the real environment to obtain environment data of the real environment. Furthermore, in this example of the present invention, the processing module 340 further includes a second assignment module 344 for assigning real physical attributes to the virtual object data based on the environment data, so that the virtual object has the physical attributes of a real object.
In an example of the present invention, the processing module 340 of the near-eye display device 300 further includes an environment construction module 345 for constructing a new real-world environment based on the environment data; and the see-through display 310 is further configured to fusion display the virtual object in the new reality environment based on the virtual object data such that the virtual object appears real relative to the new reality environment visible through the near-eye display device.
It is noted that in this embodiment of the present invention, as shown in fig. 7, the near-eye display device 300 further includes a storage module 360 for storing the virtual object data so that the virtual object can be individually output as a pictorial representation.
In an example of the present invention, the storage module 360 is further configured to simultaneously save the virtual object data and the environment data, so as to output the virtual object and the new real environment as a complete pictorial representation in a fusion manner.
Referring now to fig. 8, the present invention provides an example of a near-eye display device 400 in the form of a pair of wearable glasses having a see-through display 410. The see-through display 410 is transparent: the user views the real environment directly through it, and the virtual object is displayed fused into that environment through the display. It will be appreciated that, in other examples of the invention, the see-through display may also be translucent, still allowing the user to view the real environment directly through it. Of course, in yet another example, the see-through display may be opaque, in which case the user views the real environment indirectly through it.
It is noted that although in this example of the invention, the see-through display 410 of the near-eye display device 400 can be supported in front of both eyes of the user, allowing binocular viewing by the user, in other examples of the invention, the see-through display 410 may be supported in front of only one eye of the user. Further, the near-eye display device 400 may take any other suitable form, in addition to the wearable glasses form described above, as long as the three-dimensional space drawing method described above can be implemented.
Furthermore, as shown in fig. 8, the near-eye display device 400 further includes a pose estimation module (such as a pose sensor 420 configured to the near-eye display device 400, etc.), an object tracking module (such as an object tracking sensor 430 configured to the near-eye display device 400, etc.), a processing module (such as a processor 440 configured to the near-eye display device 400, etc.), an environment acquisition module (such as an environment acquisition sensor 450 configured to the near-eye display device 400, etc.), and a storage module (such as a memory 460 configured to the near-eye display device 400), so as to implement all or part of the steps of the above three-dimensional space drawing method, allowing a user to freely draw in the real environment.
It is worth mentioning that, in other examples of the present invention, the near-eye display device may further include an on-board computing system that performs some steps of the above methods or processes. In particular, these methods or processes may be implemented as an application program or service, an application programming interface, a library, and/or another computer program product.
FIG. 9 illustrates, in simplified form, a non-limiting example of a computing system 500 that can perform one or more of the methods or processes described above. The computing system 500 may take the form of one or more head-mounted display devices, or one or more devices cooperating with a head-mounted display device (e.g., personal computers, server computers, tablet computers, home entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices such as smartphones, and/or other computing devices).
As shown in fig. 9, the computing system 500 includes a logic machine 501 and a storage machine 502. The computing system 500 may optionally include a display subsystem 503, an input subsystem 504, a communication subsystem 505, and/or other components not shown in fig. 9.
The logic machine 501 includes one or more physical devices configured to execute instructions. For example, the logic machine 501 may be configured to execute instructions that are part of: one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, implement a technical effect, or otherwise arrive at a desired result.
The logic machine 501 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine 501 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic machine 501 may be single core or multicore, and the instructions executed thereon may be configured for serial, parallel, and/or distributed processing. The various components of the logic machine 501 may optionally be distributed across two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic machine 501 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
The storage machine 502 comprises one or more physical devices configured to hold machine-readable instructions executable by the logic machine 501 to implement the methods and processes described herein. In implementing these methods and processes, the state of the storage machine 502 may be transformed (e.g., to hold different data).
The storage machine 502 may include removable and/or built-in devices. The storage machine 502 may include optical memory (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The storage machine 502 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It is understood that the storage machine 502 comprises one or more physical devices. However, aspects of the instructions described herein may alternatively be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration.
Aspects of the logic machine 501 and the storage machine 502 may be integrated together into one or more hardware logic components. These hardware logic components may include, for example, field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), systems on a chip (SOC), and complex programmable logic devices (CPLDs).
Notably, when the computing system 500 includes the display subsystem 503, the display subsystem 503 may be used to present a visual representation of the data held by the storage machine 502. The visual representation may take the form of a graphical user interface (GUI). As the herein-described methods and processes change the data held by the storage machine 502, and thus transform its state, the state of the display subsystem 503 may likewise be transformed to visually represent changes in the underlying data. The display subsystem 503 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with the logic machine 501 and/or the storage machine 502 in a shared enclosure, or such display devices may be peripheral display devices.
Further, when the computing system 500 includes the input subsystem 504, the input subsystem 504 may comprise or interface with one or more user input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem 504 may comprise or interface with selected natural user input (NUI) components. Such components may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on-board or off-board. Example NUI components may include a microphone for speech and/or voice recognition; infrared, color, stereoscopic, and/or depth cameras for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; an electric field sensing component for assessing brain activity and/or body movement; and/or any other suitable sensor.
When the computing system 500 includes the communication subsystem 505, the communication subsystem 505 may be configured to communicatively couple the computing system 500 with one or more other computing devices. The communication subsystem 505 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As a non-limiting example, the communication subsystem may be configured for communication via a wireless telephone network or a wired or wireless local or wide area network. In some embodiments, the communication subsystem 505 may allow the computing system 500 to send and/or receive messages to and/or from other devices via a network, such as the internet.
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Also, the order of the above-described processes may be changed.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and do not limit the invention. The objectives of the invention are fully and effectively accomplished, and the functional and structural principles of the invention have been shown and described in the embodiments; the embodiments may be varied or modified in any way that does not depart from those principles.

Claims (18)

1. A three-dimensional space drawing method based on a near-eye display device, characterized by comprising the steps of:
acquiring the pose of the near-eye display device in a real environment to obtain pose data relative to an environment coordinate system;
tracking the motion trajectory of a drawing tool in the real environment to obtain motion trajectory data relative to the environment coordinate system;
processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system; and
fusion-displaying a virtual object in the real environment based on the virtual object data, wherein the virtual object appears real relative to the real environment visible through the near-eye display device.
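By way of non-limiting illustration only (the following sketch is explanatory and not part of the claims), the four claimed steps can be pictured as a minimal Python/NumPy pipeline; the 4x4 pose matrix, the synthetic trajectory, and the dictionary layout of the virtual object data are all assumptions made purely for the example:

import numpy as np

# Step 1 (assumed form): device pose in the environment frame as a 4x4
# homogeneous matrix; a real system would obtain this from SLAM/IMU tracking.
pose_env_from_dev = np.eye(4)

# Step 2 (synthetic data): drawing-tool trajectory in environment coordinates.
trajectory_env = np.array([[0.00, 0.00, 1.0],
                           [0.05, 0.02, 1.0],
                           [0.10, 0.06, 1.0],
                           [0.15, 0.12, 1.0]])

# Step 3: transform the trajectory into the device frame and attach rendering
# parameters, yielding the virtual object data.
homogeneous = np.hstack([trajectory_env, np.ones((len(trajectory_env), 1))])
trajectory_dev = (np.linalg.inv(pose_env_from_dev) @ homogeneous.T).T[:, :3]
virtual_object = {"vertices": trajectory_dev, "color": (1.0, 0.0, 0.0), "thickness": 0.005}

# Step 4 would hand virtual_object to the see-through renderer for fusion display.
print(virtual_object["vertices"])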
2. The three-dimensional space drawing method according to claim 1, wherein the step of processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system comprises the steps of:
performing coordinate transformation on the motion trajectory data based on the pose data to obtain motion trajectory data relative to the device coordinate system;
fitting the motion trajectory data relative to the device coordinate system to obtain virtual line data relative to the device coordinate system; and
assigning line parameters to the virtual line data to obtain the virtual object data.
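A minimal sketch of this transform-then-fit ordering, again non-limiting: the pose is assumed to be a 4x4 homogeneous matrix, and an interpolating B-spline (SciPy's splprep/splev) stands in for the fitting step, which the claim does not restrict to any particular curve model:

import numpy as np
from scipy.interpolate import splprep, splev

def env_to_device(points_env, pose_env_from_dev):
    # Map (N, 3) environment-frame points into the device frame.
    homo = np.hstack([points_env, np.ones((len(points_env), 1))])
    return (np.linalg.inv(pose_env_from_dev) @ homo.T).T[:, :3]

def fit_virtual_line(points, samples=100):
    # Interpolating B-spline through the trajectory, resampled uniformly.
    tck, _ = splprep(points.T, s=0.0)
    return np.asarray(splev(np.linspace(0.0, 1.0, samples), tck)).T

pose = np.eye(4)                                      # stand-in pose data
trajectory_env = np.random.rand(8, 3)                 # stand-in tracked points
trajectory_dev = env_to_device(trajectory_env, pose)  # coordinate transformation
line_dev = fit_virtual_line(trajectory_dev)           # fitting
virtual_object = {"vertices": line_dev, "color": (0.0, 0.0, 1.0), "thickness": 0.003}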
3. The three-dimensional space drawing method according to claim 1, wherein the step of processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system comprises the steps of:
fitting the motion trajectory data to obtain virtual line data relative to the environment coordinate system;
performing coordinate transformation on the virtual line data based on the pose data to obtain virtual line data relative to the device coordinate system; and
assigning line parameters to the virtual line data relative to the device coordinate system to obtain the virtual object data.
4. The three-dimensional space drawing method according to claim 1, wherein the step of processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system comprises the steps of:
fitting the motion trajectory data to obtain virtual line data relative to the environment coordinate system;
assigning line parameters to the virtual line data to obtain virtual object data relative to the environment coordinate system; and
performing coordinate transformation on the virtual object data relative to the environment coordinate system based on the pose data to obtain the virtual object data relative to the device coordinate system.
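Claims 2 to 4 reorder the same three operations (coordinate transformation, fitting, parameter assignment). For a rigid pose transform and an interpolating spline, the orderings agree up to floating-point error, because B-spline evaluation is affine in the data points and chord-length parameterization is preserved under rigid motion. A rough self-check, reusing env_to_device, fit_virtual_line, and trajectory_env from the sketch above:

import numpy as np
from scipy.spatial.transform import Rotation

# A non-trivial rigid pose: 30-degree yaw plus a translation (assumed values).
pose = np.eye(4)
pose[:3, :3] = Rotation.from_euler("y", 30, degrees=True).as_matrix()
pose[:3, 3] = [0.2, -0.1, 0.5]

line_a = fit_virtual_line(env_to_device(trajectory_env, pose))  # claim 2 order
line_b = env_to_device(fit_virtual_line(trajectory_env), pose)  # claims 3/4 order
assert np.allclose(line_a, line_b)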
5. The three-dimensional space drawing method according to any one of claims 2 to 4, wherein the line parameters are selected from the group consisting of line color, line thickness, and line type, or any combination thereof.
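One plausible container for these line parameters, purely illustrative (the field names and defaults are assumptions, not taken from the specification):

from dataclasses import dataclass
from typing import Tuple

@dataclass
class LineParameters:
    color: Tuple[float, float, float] = (1.0, 1.0, 1.0)  # RGB, each in [0, 1]
    thickness: float = 0.005                             # stroke width in metres
    line_type: str = "solid"                             # e.g. "solid", "dashed", "dotted"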
6. The three-dimensional space drawing method according to any one of claims 1 to 4, further comprising, before the step of fusion-displaying a virtual object in the real environment based on the virtual object data, the steps of:
acquiring information about the real environment to obtain the environment data of the real environment; and
assigning real physical attributes to the virtual object data based on the environment data, such that the virtual object has the physical attributes of a real object.
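A hedged sketch of this attribute-assignment step: the keys of the environment dictionary (gravity vector, ambient light level, support surfaces, default material) are hypothetical stand-ins for whatever a real environment-acquisition pipeline would provide:

def assign_physical_attributes(virtual_object: dict, environment: dict) -> dict:
    # Give the drawn object real-object behaviour derived from the scanned scene.
    virtual_object["gravity"] = environment.get("gravity", (0.0, -9.81, 0.0))
    virtual_object["ambient_light"] = environment.get("ambient_light", 1.0)
    virtual_object["collision_surfaces"] = environment.get("surfaces", [])
    virtual_object["material"] = environment.get("default_material", "matte")
    return virtual_object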
7. The three-dimensional space drawing method according to claim 6, wherein the real physical attributes are selected from the group consisting of a volume attribute, a gravity attribute, a lighting attribute, and a material attribute, or any combination thereof.
8. The three-dimensional space drawing method according to claim 6, wherein the step of fusion-displaying the virtual object in the real environment based on the virtual object data comprises the steps of:
constructing a new real environment based on the environment data; and
fusion-displaying the virtual object in the new real environment based on the virtual object data, such that the virtual object appears real relative to the new real environment visible through the near-eye display device.
9. The three-dimensional space drawing method of claim 8, further comprising the step of:
saving the environment data and the virtual object data, so that the virtual object can be fused with the new real environment for output.
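Illustratively (the file format and field names are assumed, not prescribed by the claim), the save step could serialize the two data sets side by side so that a later viewer can re-fuse the drawn object with the reconstructed environment:

import json
import numpy as np

def save_scene(path: str, environment: dict, virtual_object: dict) -> None:
    # Persist environment data and virtual object data together for later fused output.
    record = {
        "environment": environment,
        "virtual_object": {**virtual_object,
                           "vertices": np.asarray(virtual_object["vertices"]).tolist()},
    }
    with open(path, "w") as f:
        json.dump(record, f)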
10. The three-dimensional space drawing method of any one of claims 1 to 4, further comprising the step of:
saving the virtual object data so that the virtual object can be output separately.
11. The three-dimensional space drawing method of any one of claims 1 to 4, further comprising the step of:
fusion-displaying a prefabricated virtual object in the real environment, wherein the prefabricated virtual object appears real relative to the real environment visible through the near-eye display device.
12. A near-eye display device for drawing in a real environment, characterized by comprising:
a pose estimation module for acquiring the pose of the near-eye display device in the real environment to obtain pose data relative to an environment coordinate system;
a target tracking module for tracking the motion trajectory of a drawing tool in the real environment to obtain motion trajectory data relative to the environment coordinate system;
a processing module for processing the pose data and the motion trajectory data to obtain virtual object data relative to a device coordinate system; and
a see-through display for fusion-displaying a virtual object in the real environment based on the virtual object data, wherein the virtual object appears real relative to the real environment visible through the near-eye display device.
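Purely as a structural illustration of this module decomposition (all class and method names are hypothetical, not taken from the specification), the per-frame flow might look like:

class NearEyeDisplayDevice:
    # Skeleton mirroring the claimed modules; the collaborators are injected.
    def __init__(self, pose_estimator, target_tracker, processor, display):
        self.pose_estimator = pose_estimator  # pose data w.r.t. the environment frame
        self.target_tracker = target_tracker  # tool trajectory w.r.t. the environment frame
        self.processor = processor            # yields virtual object data w.r.t. the device frame
        self.display = display                # see-through fusion display

    def update_frame(self):
        pose = self.pose_estimator.estimate_pose()
        trajectory = self.target_tracker.track_trajectory()
        virtual_object = self.processor.process(pose, trajectory)
        self.display.fusion_display(virtual_object)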
13. The near-eye display device of claim 12, wherein the processing module comprises a pose transformation module, a data fitting module, and a first assignment module, wherein the pose transformation module is configured to perform coordinate transformation on the motion trajectory data based on the pose data to obtain motion trajectory data relative to a device coordinate system; the data fitting module is configured to fit the motion trajectory data relative to the device coordinate system to obtain virtual line data relative to the device coordinate system; and the first assignment module is configured to assign line parameters to the virtual line data to obtain the virtual object data.
14. The near-eye display device of claim 12, wherein the processing module comprises a pose transformation module, a data fitting module, and a first assignment module, wherein the data fitting module is configured to fit the motion trajectory data to obtain virtual line data relative to the environment coordinate system; the pose transformation module is configured to perform coordinate transformation on the virtual line data based on the pose data to obtain virtual line data relative to a device coordinate system; and the first assignment module is configured to assign line parameters to the virtual line data relative to the device coordinate system to obtain the virtual object data.
15. The near-eye display device of claim 12, wherein the processing module comprises a pose transformation module, a data fitting module, and a first assignment module, wherein the data fitting module is configured to fit the motion trajectory data to obtain virtual line data relative to the environment coordinate system; the first assignment module is configured to assign line parameters to the virtual line data to obtain virtual object data relative to the environment coordinate system; and the pose transformation module is configured to perform coordinate transformation on the virtual object data relative to the environment coordinate system based on the pose data to obtain the virtual object data relative to the device coordinate system.
16. The near-eye display device of any one of claims 12 to 15, further comprising an environment acquisition module for acquiring information about the real environment to obtain environment data of the real environment, wherein the processing module further comprises a second assignment module for assigning real physical attributes to the virtual object data based on the environment data, such that the virtual object has the physical attributes of a real object.
17. The near-eye display device of claim 16, wherein the processing module further comprises an environment construction module for constructing a new real environment based on the environment data, and wherein the see-through display is further configured to fusion-display the virtual object in the new real environment based on the virtual object data, such that the virtual object appears real relative to the new real environment visible through the near-eye display device.
18. The near-eye display device of claim 17, further comprising a storage module, wherein the storage module is configured to store the virtual object data such that the virtual object can be output separately.
CN201811338150.8A 2018-11-12 2018-11-12 Three-dimensional space drawing method based on near-eye display equipment and near-eye display equipment Pending CN111176426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811338150.8A CN111176426A (en) 2018-11-12 2018-11-12 Three-dimensional space drawing method based on near-eye display equipment and near-eye display equipment

Publications (1)

Publication Number Publication Date
CN111176426A true CN111176426A (en) 2020-05-19

Family

ID=70648100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811338150.8A Pending CN111176426A (en) 2018-11-12 2018-11-12 Three-dimensional space drawing method based on near-eye display equipment and near-eye display equipment

Country Status (1)

Country Link
CN (1) CN111176426A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150331576A1 (en) * 2014-05-14 2015-11-19 Purdue Research Foundation Manipulating virtual environment using non-instrumented physical object
EP3012712A1 (en) * 2014-10-22 2016-04-27 Bitsea GmbH Virtual drawing in real environment
WO2018136222A1 (en) * 2017-01-23 2018-07-26 Snap Inc. Three-dimensional interaction system
CN110199245A (en) * 2017-01-23 2019-09-03 斯纳普公司 Three-dimension interaction system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112328075A (en) * 2020-11-03 2021-02-05 上海镱可思多媒体科技有限公司 Three-dimensional space drawing method, system, terminal and medium
CN112328075B (en) * 2020-11-03 2023-04-07 上海镱可思多媒体科技有限公司 Three-dimensional space drawing method, system, terminal and medium

Similar Documents

Publication Publication Date Title
US10083540B2 (en) Virtual light in augmented reality
US11127210B2 (en) Touch and social cues as inputs into a computer
CN107810465B (en) System and method for generating a drawing surface
US9829989B2 (en) Three-dimensional user input
US9824499B2 (en) Mixed-reality image capture
CN106255943B (en) Body locks the conversion between augmented reality and world's locking augmented reality
CN105900041B (en) It is positioned using the target that eye tracking carries out
US20190172261A1 (en) Digital project file presentation
JP2021036449A (en) System and method for augmented and virtual reality
CN105981076B (en) Synthesize the construction of augmented reality environment
US20170270711A1 (en) Virtual object pathing
CN110476142A (en) Virtual objects user interface is shown
CN110603515A (en) Virtual content displayed with shared anchor points
US11340707B2 (en) Hand gesture-based emojis
EP2887322B1 (en) Mixed reality holographic object development
KR20160148557A (en) World-locked display quality feedback
CN105393158A (en) Shared and private holographic objects
KR20150126938A (en) System and method for augmented and virtual reality
TW201246088A (en) Theme-based augmentation of photorepresentative view
KR20160022922A (en) User interface navigation
EP4248413A1 (en) Multiple device sensor input based avatar
US20160371885A1 (en) Sharing of markup to image data
US11620792B2 (en) Fast hand meshing for dynamic occlusion
CN111176426A (en) Three-dimensional space drawing method based on near-eye display equipment and near-eye display equipment
CN111176427B (en) Three-dimensional space drawing method based on handheld intelligent device and handheld intelligent device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination