CN112419508B - Method for realizing mixed reality based on large-scale space accurate positioning - Google Patents
Method for realizing mixed reality based on large-scale space accurate positioning
- Publication number
- CN112419508B CN112419508B CN202011319591.0A CN202011319591A CN112419508B CN 112419508 B CN112419508 B CN 112419508B CN 202011319591 A CN202011319591 A CN 202011319591A CN 112419508 B CN112419508 B CN 112419508B
- Authority
- CN
- China
- Prior art keywords
- precision
- user
- mixed reality
- real space
- engine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 31
- 238000012545 processing Methods 0.000 claims abstract description 35
- 238000009877 rendering Methods 0.000 claims abstract description 6
- 230000001360 synchronised effect Effects 0.000 claims description 5
- 238000005516 engineering process Methods 0.000 description 5
- 230000004927 fusion Effects 0.000 description 5
- 230000003190 augmentative effect Effects 0.000 description 4
- 230000000694 effects Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000007654 immersion Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 230000000149 penetrating effect Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 210000001747 pupil Anatomy 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000002604 ultrasonography Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
The invention discloses a method for realizing mixed reality based on accurate positioning in a large-scale space, comprising the following steps: establishing multimedia annotations corresponding to the objects that appear in the mixed reality; establishing a virtual space with a Unity3D engine or an Unreal3D engine to simulate the real space for processing occlusion relationships; and keeping the parameters of the real camera and the virtual camera consistent by combining large-scale spatial accurate positioning with gyroscope angle data from the intelligent mobile terminal, thereby realizing the calibration of the picture and the rendering and presentation of the multimedia annotations. With this method, mixed reality can be displayed in conformity with the perspective principle and the occlusion relationships of the physical world, and the user can view mixed-reality annotations through an intelligent mobile terminal in a large-scale space environment.
Description
Technical Field
The invention relates to the field of mixed reality, and in particular to a method for realizing mixed reality based on large-scale spatial accurate positioning.
Background
In the mixed reality field, the most commonly used technologies are HoloLens, SLAM and Magic Leap. Both HoloLens and Magic Leap track the environment with depth-camera measurements, and their scanning range is limited (for HoloLens, about 0.8-3.1 meters), so after the user walks a longer distance from the origin, the spatial map at the original origin position disappears. SLAM acquires data about the environment from reflections off the ground and walls in order to reconstruct the actual environment; for a large-area space it incurs high computational cost, is difficult to run long-term under limited resources, and is prone to perceptual aliasing. All three are therefore better suited to small-scale spaces.
There are few implementations of mixed reality in large spaces; most existing solutions implement augmented reality instead. For example, the marker-less edge-tracking AR (augmented reality) system developed by Jihyun et al. in 2008 is a UMPC integrating a camera, ultrasonic receivers and gyroscopic sensors, described as an augmented-reality museum navigation system on a smart mobile terminal. That system uses ultrasonic localization to achieve marker-less AR. Its technical core is an edge-based tracking method combined with a feature-point-based tracking method: the Canny operator extracts edges from the camera image, after which correspondences are found between those edges and the edges obtained by projecting the 3D model onto the image plane with the initial camera projection matrix. The Street Museum app introduced by the Museum of London in 2010 lets users see virtual-real combined augmented-reality scenes on mobile devices through GPS positioning. However, none of the above work realizes mixed reality that conforms to the perspective principle and follows the occlusion relationships of the real physical world, nor does it solve the problems of signal blocking and positioning accuracy well. Some researchers have used UWB positioning to perform virtual-real fusion on a two-dimensional plane from coordinate information and applied it to stage performance, but there is still no method for accurately displaying mixed reality in a large three-dimensional space.
Disclosure of Invention
Based on the problems in the prior art, the invention aims to provide a method for realizing mixed reality based on large-scale spatial accurate positioning, which solves the problem that existing methods performing virtual-real fusion from coordinate information work only on a two-dimensional plane and cannot realize virtual-real fusion in a large-scale three-dimensional space.
The aim of the invention is achieved by the following technical scheme:
the embodiment of the invention provides a method for realizing mixed reality based on large-range spatial accurate positioning, which comprises the following steps:
step 1) positioning the user through an accurate positioning system arranged in the large-scale space used for mixed reality, and sending the obtained position information of the user in the large-scale space to a data processing server;
step 2) the user's intelligent mobile terminal sends its camera focal-length data and gyroscope angle data to the data processing server;
step 3) the data processing server synchronizes the collected user position information, gyroscope angle data and camera focal-length data to a virtual camera in a 3D engine running on the data processing server, the 3D engine containing a pre-fitted, pre-arranged and pre-rendered low-precision model of the real space, modeled at a 1:1 ratio from the real space of the large-scale space, together with the corresponding multimedia annotations, the low-precision real-space model having been made transparent;
step 4) after the camera of the user's intelligent mobile terminal is synchronized with the virtual camera of the 3D engine, the rendered multimedia annotations and the transparent low-precision real-space model are sent to the user's intelligent mobile terminal for display, so that the multimedia annotations corresponding to the photographed objects appear in the real picture captured by the camera on the screen of the user's intelligent mobile terminal, realizing mixed-reality display on that screen.
The method for realizing mixed reality based on large-scale spatial accurate positioning provided by the embodiment of the invention has the following beneficial effects:
because a UWB positioning system accurately positions the user in the large-scale space, and in combination with the established low-precision model of the real space, mixed reality can be displayed in conformity with the perspective principle and the occlusion relationships of the real physical world, and the user can view mixed-reality annotations in the large-scale space environment through an intelligent mobile terminal.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for implementing mixed reality based on large-scale spatial accurate positioning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a UWB positioning system in the method provided by the embodiments of the present invention;
fig. 3 is a schematic diagram of hardware devices in the method according to the embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will clearly and fully describe the technical solutions of the embodiments of the present invention in conjunction with the specific contents of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention. What is not described in detail in the embodiments of the present invention belongs to the prior art known to those skilled in the art.
Referring to figs. 1 and 3, an embodiment of the present invention provides a method for implementing mixed reality based on accurate positioning in a large-scale space (i.e., a space with a length or width of about 10 to 300 meters). The method is applicable to education based on mixed-reality technology in various large indoor and outdoor environments (for example, science-education scenarios for primary school pupils in a science venue or museum), and comprises:
step 1) positioning the user through an accurate positioning system arranged in the large-scale space used for mixed reality, and sending the obtained position information of the user in the large-scale space to a data processing server;
step 2) the user's intelligent mobile terminal sends its camera focal-length data and gyroscope angle data to the data processing server;
step 3) the data processing server synchronizes the collected user position information, gyroscope angle data and camera focal-length data to a virtual camera in a 3D engine running on the data processing server, the 3D engine containing a pre-fitted, pre-arranged and pre-rendered low-precision model of the real space, modeled at a 1:1 ratio from the real space of the large-scale space, together with the corresponding multimedia annotations, the low-precision real-space model having been made transparent;
step 4) after the camera of the user's intelligent mobile terminal is synchronized with the virtual camera of the 3D engine, the rendered multimedia annotations and the transparent low-precision real-space model are sent to the user's intelligent mobile terminal for display, so that the multimedia annotations corresponding to the photographed objects appear in the real picture captured by the camera on the screen of the user's intelligent mobile terminal, realizing mixed-reality display on that screen.
The accurate positioning system in step 1 of the method adopts either a UWB positioning system or an ultrasonic positioning system.
Referring to fig. 2, the accurate positioning system preferably adopts a UWB positioning system to realize accurate positioning over a large area. The UWB positioning system adopts four base stations 1 (or an integer multiple of four) and one or more positioning tags 2; each positioning tag 2 is fixed on a user or on the user's intelligent mobile terminal and, in cooperation with the base stations 1, transmits the tag's real-time position data to a server 3.
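As an illustration of how such a system can recover a tag position from ranges to its base stations, the sketch below performs a standard least-squares multilateration. The anchor coordinates, tag position and NumPy implementation are hypothetical assumptions for illustration and are not taken from the patent:

```python
import numpy as np

def multilaterate(anchors, distances):
    """Least-squares 3D position from ranges to four (or more) anchors.

    Linearizes ||p - a_i||^2 = d_i^2 by subtracting the first anchor's
    equation, which yields a linear system A p = b in the unknown p.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    a0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - a0)  # one row per additional anchor
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical 20 m x 10 m hall; anchors mounted at slightly different
# heights so that the vertical coordinate is also observable.
anchors = [(0, 0, 3.0), (20, 0, 2.5), (20, 10, 3.0), (0, 10, 2.0)]
tag = np.array([6.0, 4.0, 1.2])  # ground-truth tag position
ranges = [float(np.linalg.norm(tag - np.array(a))) for a in anchors]
print(np.round(multilaterate(anchors, ranges), 3))  # ≈ [6.0, 4.0, 1.2]
```

In practice UWB ranges are noisy, so a real deployment would feed more than four anchors or a filter (e.g. Kalman) into the same linearized system.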
In step 3 of the method, the 3D engine running on the data processing server is a Unity3D engine or an Unreal3D engine.
In the method, the placement into the 3D engine of the pre-fitted and pre-rendered low-precision model of the real space, modeled at a 1:1 ratio from the real space of the large-scale space, together with the corresponding multimedia annotations, and the transparency processing of the low-precision model, proceed as follows:
step 41) modeling the real space of the large-scale space at low precision and at a 1:1 ratio to obtain the low-precision real-space model, and placing it into the 3D engine;
step 42) establishing the multimedia annotations and fitting them to the low-precision real-space model in the 3D engine;
step 43) rendering the low-precision real-space model and the multimedia annotations separately, and making the low-precision real-space model transparent.
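The occlusion idea behind steps 41)-43), an invisible 1:1 proxy model that still hides annotations behind real geometry, can be illustrated with a toy per-pixel depth test. The frame size, depths and NumPy representation below are illustrative assumptions, not the engine's actual rendering code:

```python
import numpy as np

W, H = 8, 4  # tiny illustrative frame

# Depth pre-pass: the transparent 1:1 proxy model writes only depth.
# It draws nothing visible, but it still occludes whatever lies behind it.
proxy_depth = np.full((H, W), np.inf)
proxy_depth[:, :4] = 5.0  # a wall 5 m away covering the left half

# Annotation pass: a billboard 8 m away spanning the whole frame.
annotation_depth = np.full((H, W), 8.0)

# Standard depth test: an annotation pixel survives only where it is
# nearer than the proxy geometry, so nothing shows through the wall.
visible = annotation_depth < proxy_depth
print(visible[0].tolist())  # [False, False, False, False, True, True, True, True]
```

In a real Unity3D or Unreal3D scene the same effect is typically achieved with a depth-only ("occluder" or depth-mask) material on the proxy model, rendered before the annotation pass.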
In the above method, the corresponding multimedia annotations include at least one of video, audio, images and 3D models.
With this method, mixed reality is realized through large-scale spatial accurate positioning (UWB, ultrasonic or other accurate large-scale positioning), and a virtual space is established with a 3D engine (Unity3D or Unreal3D) to simulate the real space for processing occlusion relationships and calibrating the picture; the rendering and presentation of multimedia annotations are thereby realized, and a user can view mixed-reality annotations through an intelligent mobile terminal. The method is simple to implement, convenient to operate, and widely applicable to large-scale spaces; it realizes virtual-real fusion in space with a strong sense of space, interactivity and immersion, high positioning precision and good results in use, and it is convenient to popularize. Compared with other MR equipment it is inexpensive. It is particularly suitable for education based on mixed-reality technology in various large indoor and outdoor environments, such as science venues or museums, where it can enhance teaching effects, raise learning interest, engage children's senses and let them experience the mixture of the virtual and the real in a real environment; it can also be used in other commercial display environments.
Embodiments of the present invention are described in detail below.
Referring to fig. 1, the embodiment of the invention provides a method for realizing mixed reality based on large-range spatial accurate positioning, which comprises the following steps:
step 1) setting up a UWB positioning system in the large-scale space requiring mixed reality, the UWB positioning system having 4 base stations (or an integer multiple of 4) and one or more tags, as shown in fig. 2, so that a user carrying a tag can be accurately positioned over the large area;
step 2) fixing the tag on the user's intelligent mobile terminal (a smartphone or another handheld intelligent electronic device, such as a tablet computer or a dedicated terminal) or on the user's body; the UWB positioning system acquires the user's position information and sends it in real time to the data processing server responsible for processing the data;
step 3) the user's intelligent mobile terminal transmits the camera focal-length data and the gyroscope angle data to the data processing server; the camera focal-length data are used to control the size of the virtual picture, and the gyroscope angle data are used to help align the virtual picture with the real picture;
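One plausible way the camera focal length controls the size of the virtual picture is by setting the virtual camera's field of view to match the real lens. The conversion below is the standard pinhole relation; the phone-camera numbers are hypothetical, since the patent gives no concrete values:

```python
import math

def vertical_fov_deg(focal_length_mm: float, sensor_height_mm: float) -> float:
    """Vertical field of view of a pinhole camera, used here to make the
    virtual camera's frustum match the real lens: fov = 2*atan(h / 2f)."""
    return math.degrees(2.0 * math.atan(sensor_height_mm / (2.0 * focal_length_mm)))

# Hypothetical phone camera: a 4.25 mm lens over a 4.8 mm-tall sensor.
fov = vertical_fov_deg(4.25, 4.8)
print(round(fov, 1))  # ~58.9 degrees, the value the virtual camera would use
```

Zooming in (a longer focal length) narrows the field of view, which is exactly the "size of the virtual picture" effect the step describes.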
step 4) modeling the real space of the large-scale space at low precision and at a 1:1 ratio to obtain the low-precision real-space model, and placing it into a 3D engine (such as Unity3D or Unreal3D); the low-precision real-space model is used to solve the occlusion problem;
step 5) establishing multimedia annotations in the form of at least one of video, audio, images and 3D models, and fitting them to the low-precision real-space model in the 3D engine; this makes it convenient to fit and render the virtual content together with the occlusion-processed low-precision model in the 3D engine;
step 6) rendering the low-precision real-space model and the multimedia annotations separately, and making the low-precision real-space model transparent, so that only the rendered multimedia annotations are visible on the user's intelligent mobile terminal;
step 7) synchronizing the collected user position information, gyroscope angle data and camera focal-length data to the virtual camera in the 3D engine;
step 8) after the camera of the user's intelligent mobile terminal is synchronized with the virtual camera of the 3D engine, sending the rendered multimedia annotations and the transparent low-precision real-space model to the user's intelligent mobile terminal for display, so that the multimedia annotations appear in the real picture captured by the camera on the terminal's screen, realizing virtual-real fused display on the screen.
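The steps above can be sketched end to end as a single server-side frame update. All class names, the stand-in engine objects and the simplified occlusion test are hypothetical illustrations of what the Unity3D/Unreal3D scene would do, not the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple  # metres, from the UWB positioning system
    rotation: tuple  # (pitch, yaw, roll) degrees, from the gyroscope

class ProxyModel:
    """Stand-in for the transparent 1:1 low-precision real-space model."""
    def __init__(self, wall_depth_m: float):
        self.wall_depth_m = wall_depth_m
    def depth_at(self, annotation) -> float:
        # Nearest real geometry along the view ray (single wall here).
        return self.wall_depth_m

@dataclass
class Annotation:
    name: str
    depth_m: float  # distance from the viewer

def render_frame(pose, focal_mm, proxy, annotations):
    """One frame of steps 7)-8): mirror the real camera onto the virtual
    one, then keep only annotations the proxy model does not occlude."""
    virtual_camera = {"position": pose.position,
                      "rotation": pose.rotation,
                      "focal_length_mm": focal_mm}
    visible = [a.name for a in annotations
               if a.depth_m < proxy.depth_at(a)]  # depth test vs. proxy
    return virtual_camera, visible  # streamed to the user's terminal

pose = Pose((6.0, 4.0, 1.2), (-3.5, 128.0, 0.4))
proxy = ProxyModel(wall_depth_m=5.0)
notes = [Annotation("exhibit label", 3.0),
         Annotation("label in next room", 8.0)]
_, visible = render_frame(pose, 4.25, proxy, notes)
print(visible)  # ['exhibit label']
```

The annotation behind the 5 m wall is culled, which is the "no annotation shows through a wall" behaviour described in the embodiment.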
Examples
In this embodiment, in a museum, the length, width and height of the museum and the volume and position data of its exhibits are first measured, a low-precision real-space model of the museum is built at a 1:1 ratio in modeling software, and the model is synchronized into the Unity software.
Specifically, the system shown in figs. 2 and 3 is adopted. The UWB positioning system shown in fig. 2 uses four base stations 1 (or an integer multiple of four) and one or more positioning tags 2; each positioning tag 2 is fixed on a user or on the user's intelligent mobile terminal 3 and, in cooperation with the base stations 1, the tag's real-time position data are transmitted to a server 4;
fig. 3 illustrates a hardware configuration for realizing virtual-real mixing, in which the camera 8 of the user's mobile terminal is kept consistent with the virtual camera 6. In this setup the user's mobile terminal 3 is typically a mobile phone, though another handheld electronic device may serve; it uploads its camera focal length and gyroscope angle data to the data processing server 4, so that the position information and gyroscope angle data of the handheld terminal 3 and the focal length of the real camera 8 can be synchronized to the virtual camera 6 in the Unity3D or Unreal3D engine, and the rendered MR annotation image is transmitted back to the screen of the terminal 3. The occlusion-relationship processing model 5 is a low-precision 1:1 model of the real venue, built in modeling software from measurements and used to handle occlusion. The virtual video clip 7 comprises multimedia annotations designed in forms including text, audio, video, images and 3D models. The virtual content and the occlusion-processing low-precision model are fitted and rendered in the Unity3D or Unreal3D engine.
The multimedia annotations designed for the exhibits in the museum include the following forms: text, images, video, audio, 3D models, etc. The placement positions and occlusion relationships of the multimedia annotations are determined from the low-precision model of the museum's real space, and the annotations are placed at those positions in the Unity3D or Unreal3D engine;
The low-precision real-space model of the museum and the multimedia annotations are rendered separately in the Unity3D or Unreal3D engine, so that, with the occlusion relationships correctly handled, only the multimedia annotations are displayed; to this end the low-precision real-space model is given a degree of transparency;
The virtual camera position data are invoked in the Unity3D or Unreal3D engine;
The UWB positioning base stations are installed in the museum, and a tag is fixed on the user's mobile phone;
The user's mobile phone reads the gyroscope angle and the camera focal-length data (this can be done by a mixed-reality app downloaded in advance on the phone) and transmits them to the data processing server; the data processing server synchronizes the user position information, the gyroscope angle data of the intelligent mobile terminal and the camera focal-length data to the virtual camera of the Unity3D or Unreal3D engine, so that the position and direction of the virtual camera stay consistent with the user's real phone camera;
When the user aims the phone at an exhibit in the museum, the picture actually captured in the real space is seen on the phone screen mixed with the virtual multimedia annotations; because the occlusion relationships have been processed, each annotation appears beside its exhibit, and no annotation shows through a wall or is seen from another room;
therefore, the multimedia annotation of the mixed reality of the museum exhibits is realized, and along with the movement of a user, the three-dimensional multimedia annotation can be watched at multiple angles, so that the mixing effect is good.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.
Claims (5)
1. A method for realizing mixed reality based on large-scale spatial accurate positioning, characterized by comprising the following steps:
step 1) positioning the user through an accurate positioning system arranged in the large-scale space used for mixed reality, and sending the obtained position information of the user in the large-scale space to a data processing server;
step 2) the user's intelligent mobile terminal sends its camera focal-length data and gyroscope angle data to the data processing server;
step 3) the data processing server synchronizes the collected user position information, gyroscope angle data and camera focal-length data to a virtual camera in a 3D engine running on the data processing server, the 3D engine containing a pre-fitted, pre-arranged and pre-rendered low-precision model of the real space, modeled at a 1:1 ratio from the real space of the large-scale space, together with the corresponding multimedia annotations, the low-precision real-space model having been made transparent;
step 4) after the camera of the user's intelligent mobile terminal is synchronized with the virtual camera of the 3D engine, the rendered multimedia annotations and the transparent low-precision real-space model are sent to the user's intelligent mobile terminal for display, so that the multimedia annotations corresponding to the photographed objects appear in the real picture captured by the camera on the screen of the user's intelligent mobile terminal, namely, mixed-reality display is realized on the screen of the user's intelligent mobile terminal;
the 3D engine is provided with a real space low-precision model which is fitted and arranged in advance and rendered according to the real space of the large-scale space and modeled according to the ratio of 1:1 and corresponding multimedia annotation, and the real space low-precision model is subjected to transparentization treatment as follows:
step 41) modeling the real space of the large-scale space at low precision and at a 1:1 ratio to obtain the low-precision real-space model, and placing the low-precision real-space model into the 3D engine;
step 42) establishing the multimedia annotations, and fitting the multimedia annotations to the low-precision real-space model in the Unity software;
step 43) rendering the low-precision real-space model and the multimedia annotations separately, and making the low-precision real-space model transparent.
2. The method for realizing mixed reality based on large-scale spatial precision positioning according to claim 1, wherein the precision positioning system in step 1 adopts any one of a UWB positioning system and an ultrasonic positioning system.
3. The method for realizing mixed reality based on large-scale spatial accurate positioning according to claim 2, wherein the UWB positioning system employs four base stations, or an integer multiple of four base stations, and one or more positioning tags, each positioning tag being fixed on a user or on the user's intelligent mobile terminal.
4. The method for realizing mixed reality based on large-scale spatial accurate positioning according to claim 1, wherein in step 3 the 3D engine running on the data processing server is a Unity3D engine or an Unreal3D engine.
5. The method for implementing mixed reality based on extensive spatially accurate positioning according to claim 1, wherein the corresponding multimedia annotation comprises: at least one of video, audio, image, 3D model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011319591.0A CN112419508B (en) | 2020-11-23 | 2020-11-23 | Method for realizing mixed reality based on large-scale space accurate positioning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011319591.0A CN112419508B (en) | 2020-11-23 | 2020-11-23 | Method for realizing mixed reality based on large-scale space accurate positioning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112419508A CN112419508A (en) | 2021-02-26 |
CN112419508B true CN112419508B (en) | 2024-03-29 |
Family
ID=74778737
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011319591.0A Active CN112419508B (en) | 2020-11-23 | 2020-11-23 | Method for realizing mixed reality based on large-scale space accurate positioning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112419508B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179436A (en) * | 2019-12-26 | 2020-05-19 | 浙江省文化实业发展有限公司 | Mixed reality interaction system based on high-precision positioning technology |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11017601B2 (en) * | 2016-07-09 | 2021-05-25 | Doubleme, Inc. | Mixed-reality space map creation and mapping format compatibility-enhancing method for a three-dimensional mixed-reality space and experience construction sharing system |
-
2020
- 2020-11-23 CN CN202011319591.0A patent/CN112419508B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179436A (en) * | 2019-12-26 | 2020-05-19 | 浙江省文化实业发展有限公司 | Mixed reality interaction system based on high-precision positioning technology |
Non-Patent Citations (2)
Title |
---|
Jiang Lin. A preliminary exploration of augmented reality technology in mobile learning. Digital Technology & Application. 2016, (12), full text. *
Chen Baoquan; Qin Xueying. Virtual-real fusion and human-computer intelligent integration in mixed reality. Scientia Sinica Informationis. 2016, (12), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN112419508A (en) | 2021-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Huang et al. | A 3D GIS-based interactive registration mechanism for outdoor augmented reality system | |
Andersen et al. | Virtual annotations of the surgical field through an augmented reality transparent display | |
CN104484327A (en) | Project environment display method | |
CN104376118A (en) | Panorama-based outdoor movement augmented reality method for accurately marking POI | |
WO2017156949A1 (en) | Transparent display method and transparent display apparatus | |
Honkamaa et al. | Interactive outdoor mobile augmentation using markerless tracking and GPS | |
Gomez-Jauregui et al. | Quantitative evaluation of overlaying discrepancies in mobile augmented reality applications for AEC/FM | |
CN106846237A (en) | A kind of enhancing implementation method based on Unity3D | |
CN107729707B (en) | Engineering construction lofting method based on mobile augmented reality technology and BIM | |
CN108133454B (en) | Space geometric model image switching method, device and system and interaction equipment | |
Wither et al. | Using aerial photographs for improved mobile AR annotation | |
Gee et al. | Augmented crime scenes: virtual annotation of physical environments for forensic investigation | |
Zollmann et al. | VISGIS: Dynamic situated visualization for geographic information systems | |
Selvam et al. | Augmented reality for information retrieval aimed at museum exhibitions using smartphones | |
CN110160529A (en) | A kind of guide system of AR augmented reality | |
CN113253842A (en) | Scene editing method and related device and equipment | |
Afif et al. | Orientation control for indoor virtual landmarks based on hybrid-based markerless augmented reality | |
CN112419508B (en) | Method for realizing mixed reality based on large-scale space accurate positioning | |
CN108615260A (en) | The method and device that shows of augmented reality digital culture content is carried out under a kind of exception actual environment | |
CN112675541A (en) | AR information sharing method and device, electronic equipment and storage medium | |
Andersen et al. | A hand-held, self-contained simulated transparent display | |
Min et al. | Interactive registration for Augmented Reality GIS | |
CN110888530A (en) | 3D visual editor and editing method based on electronic map | |
CN106840167B (en) | Two-dimensional quantity calculation method for geographic position of target object based on street view map | |
Siegl et al. | An augmented reality human–computer interface for object localization in a cognitive vision system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||