CN110837297A - Information processing method and AR device
- Publication number: CN110837297A
- Application number: CN201911049188.8A
- Authority: CN (China)
- Prior art keywords: pose, image, three-dimensional model, coordinate system, perspective
- Legal status: Granted
Classifications
- G06F3/011: Input arrangements or combined input and output arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06T19/006: Mixed reality
- G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
Abstract
The embodiment of the application discloses an information processing method, which includes the following steps: determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system, wherein the second AR device has a function of creating a three-dimensional model; sending the first pose and a first view angle of the first AR device to the second AR device; receiving a first three-dimensional model associated with the first view angle sent by the second AR device, wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first view angle; and displaying the first three-dimensional model. The embodiment of the application also discloses an AR device.
Description
Technical Field
The present application relates to, but is not limited to, the field of computer technologies, and in particular, to an information processing method and an AR device.
Background
In a usage scenario of multiple Augmented Reality (AR) devices in the related art, each AR device is provided with a function module for creating a three-dimensional model. As a result, the usage cost of such a multi-device scenario is increased, and each AR device needs to execute a large number of computing tasks, resulting in a waste of computing resources.
Summary of the Application
The embodiments of the present application provide an information processing method and an AR device, which solve the problems in the related art that, in a usage scenario of multiple AR devices, the usage cost is increased and each AR device needs to execute a large number of computing tasks, resulting in a waste of computing resources; in a usage scenario of multiple AR devices, the usage cost and the waste of computing resources are thereby reduced.
The technical solutions of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides an information processing method, where the method includes:
determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system; wherein the second AR device has a function of creating a three-dimensional model;
sending the first pose and the first view angle of the first AR device to the second AR device;
receiving a first three-dimensional model associated with the first perspective sent by the second AR device; wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first perspective;
displaying the first three-dimensional model.
Optionally, the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system includes:
acquiring a first image; the first image comprises an image acquired by an image acquisition module of the first AR device based on a second visual angle at a first moment;
receiving a second image sent by the second AR device; wherein the second image comprises an image acquired by an image acquisition module of the second AR device at the first time based on the second perspective;
and obtaining the first pose and the second pose under the same coordinate system based on the first image and the second image.
Optionally, the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system includes:
acquiring a third image; the third image comprises an image obtained by shooting a calibration plate by an image acquisition module of the first AR equipment at a second moment;
determining, based on the third image, a third pose of the first AR device relative to the calibration plate;
receiving a fourth image and a fourth pose sent by the second AR device; the fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR device at the second moment; the fourth pose comprises a pose of the second AR device determined by the second AR device relative to the calibration plate based on the fourth image;
and obtaining the first pose and the second pose in the same coordinate system based on the third pose and the fourth pose.
Optionally, the displaying the first three-dimensional model includes:
displaying the first three-dimensional model in a static scene or a dynamic scene; wherein the static scene characterizes that objects in an environment in which the first AR device and the second AR device are located are static; the dynamic scene characterizes that objects in an environment in which the first AR device and the second AR device are located are dynamic.
Optionally, the method further includes:
receiving a second three-dimensional model associated with a third perspective sent by the second AR device; wherein the third perspective is a perspective of the second AR device rotated from the first perspective based on the first pose; the second three-dimensional model is a model created by the second AR device based on the first pose and the third perspective;
determining that the first AR device changes from the first view angle to the third view angle in a static scene, and displaying the second three-dimensional model; wherein the static scene characterizes that objects in an environment in which the first AR device and the second AR device are located are static.
Optionally, after determining that the first pose of the first AR device and the second pose of the second AR device are under the same coordinate system, the method further includes:
and sending the first pose and the second pose in the same coordinate system to the second AR device.
In a second aspect, an embodiment of the present application provides an information processing method, where the method includes:
determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system;
receiving a first view angle of the first AR device sent by the first AR device;
creating a first three-dimensional model based on the first pose and the first perspective;
sending the first three-dimensional model to the first AR device.
Optionally, before determining that the first pose of the first AR device and the second pose of the second AR device are under the same coordinate system, the method further includes:
sending a second image to the first AR device; wherein the second image comprises an image acquired by an image acquisition module of the second AR device at a first time based on a second perspective.
Optionally, before determining that the first pose of the first AR device and the second pose of the second AR device are under the same coordinate system, the method further includes:
sending a fourth image and a fourth pose to the first AR device; the fourth image comprises an image obtained by shooting a calibration plate by an image acquisition module of the second AR equipment at a second moment; the fourth pose includes a pose of the second AR device relative to the calibration plate determined by the second AR device based on the fourth image.
Optionally, the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system includes:
and receiving the first pose and the second pose in the same coordinate system, which are sent by the first AR device.
Optionally, the method further includes:
rotating the first view based on the first pose to obtain a third view;
creating a second three-dimensional model based on the first pose and the third perspective;
sending the second three-dimensional model to the first AR device.
In a third aspect, an embodiment of the present application provides a first AR device, where the first AR device includes:
a first memory for storing executable instructions;
and a first processor for executing the executable instructions stored in the first memory to implement the steps of the information processing method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a second AR device, where the second AR device includes:
a second memory for storing executable instructions;
and a second processor for executing the executable instructions stored in the second memory to implement the steps of the information processing method of the second aspect.
In a fifth aspect, an embodiment of the present application provides a first AR device, where the first AR device includes:
a first determination unit configured to determine that a first pose of the first AR device and a second pose of the second AR device are in the same coordinate system; wherein the second AR device has a function of creating a three-dimensional model;
a first sending unit, configured to send the first pose and the first perspective of the first AR device to the second AR device;
a first receiving unit, configured to receive a first three-dimensional model associated with the first view, sent by the second AR device; wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first perspective;
and the display unit is used for displaying the first three-dimensional model.
In a sixth aspect, an embodiment of the present application provides a second AR device, where the second AR device includes:
a second determining unit, configured to determine that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system;
a second receiving unit, configured to receive the first angle of view of the first AR device sent by the first AR device;
a processing unit for creating a first three-dimensional model based on the first pose and the first perspective;
a second sending unit, configured to send the first three-dimensional model to the first AR device.
According to the information processing method and the AR device provided by the embodiments of the application, it is determined that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, wherein the second AR device has a function of creating a three-dimensional model; the first pose and the first view angle of the first AR device are sent to the second AR device; a first three-dimensional model associated with the first view angle sent by the second AR device is received, wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first view angle; and the first three-dimensional model is displayed. This solves the problems in the related art that, in a usage scenario of multiple AR devices, the usage cost is high and each AR device needs to execute a large number of computing tasks, wasting computing resources. The 3D mesh is shared between the AR device with the 3D mesh reconstruction function and the AR devices without the 3D mesh reconstruction function, so that in a usage scenario of multiple AR devices, the usage cost is reduced and the waste of computing resources is reduced.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another information processing method provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of another information processing method provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of an information processing method according to another embodiment of the present application;
fig. 5 is a schematic flowchart of another information processing method according to another embodiment of the present application;
fig. 6 is a schematic flowchart of another information processing method according to another embodiment of the present application;
fig. 7 is a schematic diagram of a usage scenario of a multi-AR device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a first AR device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a second AR device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another first AR device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another second AR device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms and expressions referred to in the embodiments of the present application are explained as follows.
1) Three-dimensional Reconstruction (3D Reconstruction): building a three-dimensional (3D) model from input data. For example, a three-dimensional model of an object can be reconstructed from RGB images of the object taken at different angles, using related computer graphics and vision techniques.
2) Depth camera: also called a 3D camera; a camera by which the depth (distance) of the captured space can be measured.
In the related art, 3D scanning and reconstruction of a space by an AR device is a core AR function: the surface mesh and planes of the space can be reconstructed. To realize such reconstruction, however, a depth camera and a large amount of computing resources are usually required.
Based on the foregoing, an embodiment of the present application provides an information processing method applied to a first AR device, and as shown in fig. 1, the method includes the following steps:
Step 101, determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system.

Wherein the second AR device has a function of creating a three-dimensional model.
The information processing method provided by the embodiment of the application is applied to a usage scenario of multiple AR devices. The usage scenario includes a second AR device having a function of creating a three-dimensional model, which realizes the 3D mesh reconstruction function; for example, the second AR device is provided with a depth camera. The first AR device is any device in the usage scenario that does not have the 3D mesh reconstruction function.
For any first AR device, before displaying the three-dimensional model reconstructed by the second AR device, it is determined that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system; that is, coordinate alignment is realized. The aim of the alignment is that the positions and angles of the AR devices in space can be mutually converted between different AR devices; that is, the spatial coordinates of all the AR devices are unified under one coordinate system, so that the mesh of different AR devices can be shared. Therefore, compared with the usage scenario of multiple AR devices in the related art, in which each AR device is provided with a depth camera, the cost is reduced, and the waste of computing resources caused by all the AR devices performing 3D mesh reconstruction is avoided.
Step 102, sending the first pose and the first view angle of the first AR device to the second AR device.
Here, the first pose and the first perspective of the first AR device are the basis for the second AR device to reconstruct the three-dimensional space corresponding to the first AR device.
In this embodiment of the application, when the first AR device determines that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, the first pose and the first view angle of the first AR device are sent to the second AR device, that is, a basis for reconstructing a three-dimensional space corresponding to the first AR device is provided to the second AR device.
Step 103, receiving a first three-dimensional model associated with the first view angle sent by the second AR device.
Wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first perspective.
In this embodiment of the application, after providing a basis for reconstructing a three-dimensional space corresponding to a first AR device to a second AR device, the first AR device receives a first three-dimensional model associated with a first view, which is sent by the second AR device. It is to be appreciated that the first three-dimensional model comprises a three-dimensional model as viewed from a first perspective based on a first pose of the first AR device.
Step 104, displaying the first three-dimensional model.
In the embodiment of the application, after the first AR device acquires the first three-dimensional model associated with its first view angle, it displays the first three-dimensional model, so that the 3D mesh is shared between the AR device with the 3D mesh reconstruction function and the AR device without the 3D mesh reconstruction function.
According to the information processing method provided by the embodiment of the application, it is determined that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, wherein the second AR device has a function of creating a three-dimensional model; the first pose and the first view angle of the first AR device are sent to the second AR device; a first three-dimensional model associated with the first view angle sent by the second AR device is received, wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first view angle; and the first three-dimensional model is displayed. This solves the problem in the related art that, in a usage scenario of multiple AR devices, the usage cost is high and each AR device needs to execute a large number of computing tasks, wasting computing resources. The 3D mesh is shared between the AR device with the 3D mesh reconstruction function and the AR devices without the 3D mesh reconstruction function, so that in a usage scenario of multiple AR devices, the usage cost is reduced and the waste of computing resources is reduced.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing method applied to a first AR device, and as shown in fig. 2, the method includes the following steps:
In this embodiment of the application, step 201, determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, may be implemented by the following steps 201a1 to 201a3:
step 201a1, a first image is acquired.
The first image comprises an image acquired by an image acquisition module of the first AR device based on the second visual angle at the first moment.
In the embodiment of the application, the first AR device acquires the image based on the second visual angle at the first moment through the image acquisition module of the first AR device to obtain the first image. Here, the image acquisition module of the first AR device is an acquisition module without a 3D mesh reconstruction function.
Step 201a2, receiving a second image sent by a second AR device.
The second image comprises an image acquired by the image acquisition module of the second AR device based on the second visual angle at the first moment.
In the embodiment of the application, the second AR device performs image acquisition based on the second viewing angle at the first time through the image acquisition module of the second AR device to obtain the second image. Here, the image acquisition module of the second AR device is an acquisition module having a 3D mesh reconstruction function.
Step 201a3, obtaining a first pose and a second pose under the same coordinate system based on the first image and the second image.
In this embodiment of the application, the first AR device obtains the first pose and the second pose in the same coordinate system based on the first image and the second image as follows: the first AR device determines first feature points in the first image and second feature points in the second image, matches the first feature points with the second feature points to obtain a first matching result, determines the relative positional relationship between the shooting points of the first AR device and of the second AR device based on the first matching result, and then obtains the first pose and the second pose in the same coordinate system based on this relative positional relationship, thereby realizing coordinate alignment between the first AR device and the second AR device.
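For illustration, the key-frame matching described above could be sketched as follows, assuming OpenCV (cv2) and NumPy are available and a camera intrinsic matrix K is known; the ORB features and essential-matrix decomposition are illustrative choices, not prescribed by this application.

```python
import cv2
import numpy as np

def relative_pose_from_keyframes(first_image, second_image, K):
    """Estimate the second device's pose relative to the first device from
    two key-frame images captured at the same moment from the same view angle."""
    # Detect and describe feature points in both images (ORB as one choice).
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, desc1 = orb.detectAndCompute(first_image, None)
    kp2, desc2 = orb.detectAndCompute(second_image, None)

    # Match the first feature points against the second feature points.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # The essential matrix encodes the relative rotation R and the
    # (unit-scale) translation t between the two shooting points.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # second device expressed in the first device's coordinates
```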
As can be seen from the foregoing steps 201a1 to 201a3, in the information processing method provided in this embodiment of the present application, alignment between AR devices may be implemented by aligning key frames (a key frame includes images captured by different AR devices at the same time based on the same view angle); that is, the spatial coordinates of different AR devices are unified in one coordinate system, which lays a foundation for 3D mesh sharing.
In another embodiment of the present application, the step 201 of determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system may also be implemented by the following steps 201b1 to 201b4:
step 201b1, acquiring a third image.
And the third image comprises an image obtained by shooting the calibration plate by the image acquisition module of the first AR equipment at the second moment.
In the embodiment of the application, the first AR device shoots the calibration board at the second moment through the image acquisition module of the first AR device to obtain the third image. It should be noted that the calibration board includes preset feature points.
Step 201b2, based on the third image, determines a third pose of the first AR device relative to the calibration plate.
In this embodiment of the application, when the first AR device obtains the third image, the third image may be analyzed, and a third pose of the first AR device with respect to the calibration board, that is, a first positional relationship between the first AR device and the calibration board, may be determined.
Step 201b3, receiving a fourth image and a fourth pose sent by the second AR device.
The fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR equipment at the second moment; the fourth pose includes a pose of the second AR device determined by the second AR device relative to the calibration plate based on the fourth image.
In the embodiment of the application, the second AR device shoots the calibration plate at the second moment through the image acquisition module of the second AR device to obtain the fourth image. In a case where the second AR device obtains the fourth image, the fourth image may be analyzed to determine a fourth pose of the second AR device with respect to the calibration board, that is, a second positional relationship between the second AR device and the calibration board.
Step 201b4, obtaining a first pose and a second pose in the same coordinate system based on the third pose and the fourth pose.
In the embodiment of the application, when the first AR device obtains the third pose and the fourth pose, the first pose and the second pose in the same coordinate system may be obtained based on the third pose and the fourth pose. Here, since the feature points on the calibration board are preset, matching the feature points on the calibration board is simpler and more accurate than the key frame-based approach.
As can be seen from the foregoing steps 201b1 to 201b4, in the information processing method provided in the embodiment of the present application, alignment between AR devices may be implemented in a calibration-board alignment manner, that is, the spatial coordinates of different AR devices are unified in one coordinate system, laying a foundation for 3D mesh sharing.
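For illustration, the calibration-board alignment could be sketched as follows, assuming OpenCV and a chessboard-style calibration board of known geometry; the helper names and board type are assumptions of this sketch only.

```python
import cv2
import numpy as np

def pose_relative_to_board(image, board_size, square_size, K, dist):
    """Return the device pose (R, t) with respect to the calibration board,
    i.e. the transform taking board coordinates into camera coordinates."""
    found, corners = cv2.findChessboardCorners(image, board_size)
    assert found, "calibration board not visible in the image"
    # The board's feature points are preset, so their 3D positions are known.
    obj = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj *= square_size
    _, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

def unify_coordinates(R3, t3, R4, t4):
    """Express the second device's pose in the first device's coordinate
    system from the third pose (R3, t3) and the fourth pose (R4, t4)."""
    T3 = np.eye(4); T3[:3, :3], T3[:3, 3] = R3, t3.ravel()  # board -> first
    T4 = np.eye(4); T4[:3, :3], T4[:3, 3] = R4, t4.ravel()  # board -> second
    return T3 @ np.linalg.inv(T4)  # second device frame -> first device frame
```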
In this embodiment of the application, after step 201 of determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, the following step may further be performed: sending the first pose and the second pose in the same coordinate system to the second AR device.
In this embodiment of the application, after the first AR device aligns the coordinates between itself and the second AR device, the first pose and the second pose in the same coordinate system may be sent to the second AR device, so that the second AR device performs reconstruction of a three-dimensional space based on the coordinate system.
Step 202, sending the first pose and the first view angle of the first AR device to the second AR device.

Wherein the second AR device has a function of creating a three-dimensional model.
Step 203, receiving a first three-dimensional model associated with the first view angle sent by the second AR device.

Wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first view angle.
Step 204, displaying the first three-dimensional model in a static scene or a dynamic scene.
Wherein the static scene represents that objects in the environment in which the first AR device and the second AR device are located are static; the dynamic scene characterizes that objects in the environment in which the first AR device and the second AR device are located are dynamic.
In a practical application scenario, for example a discussion in an indoor conference, the positions of the objects placed in the conference room are fixed and the participants take their seats according to seat numbers; such a scene can be regarded as a static scene. In another practical scenario, for example a square with mobile vendors, jogging children, and stationary greenery, multiple users using AR devices are in a dynamic scene.
Step 205, receiving a second three-dimensional model associated with the third view angle and sent by the second AR device.
Wherein the third view angle is a view angle obtained by the second AR device rotating the first view angle based on the first pose; the second three-dimensional model is a model created by the second AR device based on the first pose and the third view angle. Here, when the second AR device rotates the first view angle to the third view angle based on the first pose, the rotation may be by any angle within 360 degrees.
In this embodiment of the application, after obtaining the first three-dimensional model reconstructed by the second AR device, the first AR device may further receive a second three-dimensional model associated with a third perspective sent by the second AR device.
Step 206, determining that the first AR device is changed from the first view angle to a third view angle in the static scene, and displaying the second three-dimensional model.
In the embodiment of the application, when the first AR device determines in a static scene that its view angle has changed from the first view angle to the third view angle, it displays the second three-dimensional model. That is, after the second AR device creates the first three-dimensional model based on the first pose and the first view angle, the second AR device may also rotate the first view angle based on the first pose to obtain a new view angle, such as the third view angle; the second AR device then creates a second three-dimensional model based on the first pose and the third view angle and sends it to the first AR device, so that when the first AR device changes its view angle to the third view angle, the corresponding second three-dimensional model can be displayed immediately.
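For illustration, the first AR device's fast display of pre-received models on a view-angle change could look like the following sketch; the cache granularity (a quantized yaw key) and the model objects are assumptions of this sketch.

```python
class ModelCache:
    """Cache of three-dimensional models received from the second AR device,
    keyed by a quantized view angle (yaw, in degrees)."""

    def __init__(self, step_deg=15):
        self.step = step_deg
        self.models = {}  # quantized yaw key -> three-dimensional model

    def _key(self, yaw_deg):
        # Quantize the yaw and wrap around at 360 degrees.
        return int(round(yaw_deg / self.step)) % (360 // self.step)

    def store(self, yaw_deg, model):
        """Cache a model received for a given view angle."""
        self.models[self._key(yaw_deg)] = model

    def model_for_view(self, yaw_deg):
        """Return the cached model for the current view angle, if any, so it
        can be displayed immediately without a further round trip."""
        return self.models.get(self._key(yaw_deg))
```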
As can be seen from the above, the second AR device has the function of creating a three-dimensional model, and its computing resources are powerful enough to handle the reconstruction and rendering of the three-dimensional model, while the first AR device does not have this function and may be considered a lightweight device. The first AR device may therefore have lower processor performance requirements than the second AR device, which reduces the configuration cost of the first AR device, and at the same time realizes the function of displaying the created spatial 3D mesh on a lightweight, low-power AR device.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing method applied to a second AR device, and as shown in fig. 3, the method includes the following steps:
Step 301, determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system.

Step 302, receiving a first view angle of the first AR device sent by the first AR device.

Step 303, creating a first three-dimensional model based on the first pose and the first view angle.

Step 304, sending the first three-dimensional model to the first AR device.
In the embodiment of the application, the second AR device determines that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, and creates the first three-dimensional model based on the first pose and the first view angle when receiving the first view angle of the first AR device sent by the first AR device.
Here, the second AR device is provided with a depth camera having the 3D mesh reconstruction function; depth cameras include, but are not limited to, structured-light depth cameras and Time of Flight (TOF) cameras.
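For illustration, the basic step that a depth camera enables, back-projecting a depth map into a 3D point cloud from which a mesh can then be reconstructed, can be sketched as follows, assuming pinhole intrinsics fx, fy, cx, cy; the subsequent meshing (e.g. TSDF fusion) is omitted.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert an HxW depth map (metres) into an Nx3 point cloud in the
    camera coordinate system; zero-depth pixels are discarded."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Invert the pinhole projection for every pixel.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```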
According to the information processing method provided by the embodiment of the application, it is determined that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system; the first view angle of the first AR device sent by the first AR device is received; a first three-dimensional model is created based on the first pose and the first view angle; and the first three-dimensional model is sent to the first AR device. This solves the problems in the related art that, in a usage scenario of multiple AR devices, the usage cost is high and each AR device needs to execute a large number of computing tasks, wasting computing resources. The 3D mesh is shared between the AR device with the 3D mesh reconstruction function and the AR devices without the 3D mesh reconstruction function, so that in a usage scenario of multiple AR devices, the usage cost is reduced and the waste of computing resources is reduced.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing method applied to a second AR device, and as shown in fig. 4, the method includes the following steps:
Step 401, sending a second image to the first AR device.

The second image comprises an image acquired by the image acquisition module of the second AR device based on a second view angle at a first moment.
In the embodiment of the application, the second AR device performs image acquisition based on the second viewing angle at the first time through the image acquisition module of the second AR device to obtain the second image. Further, the second AR device sends the second image to the first AR device, so that the first AR device performs coordinate alignment based on the second image acquired by the second AR device and the first image acquired by the first AR device.
Step 402, receiving a first pose and a second pose which are sent by the first AR device and are in the same coordinate system.
Step 403, receiving a first view angle of the first AR device sent by the first AR device.

Step 404, creating a first three-dimensional model based on the first pose and the first view angle.

Step 405, sending the first three-dimensional model to the first AR device.

In this embodiment of the application, the second AR device creates the first three-dimensional model, in the same coordinate system determined above, based on the first pose and the first view angle.
Step 406, rotating the first view angle based on the first pose to obtain a third view angle.
In this embodiment of the application, the second AR device rotates the obtained first view angle of the first AR device by a preset angle to obtain the third view angle. Illustratively, the preset angle takes a value in the range (0, 360) degrees.
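For illustration, rotating the first view angle by a preset angle could be sketched as follows; the choice of rotating about the vertical (y) axis of the first pose is an assumption of this sketch.

```python
import numpy as np

def rotate_view(direction, preset_deg):
    """Rotate a unit view-direction vector by preset_deg, taken in the open
    range (0, 360), about the y (up) axis, yielding the third view angle."""
    a = np.radians(preset_deg)
    R_y = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return R_y @ np.asarray(direction, dtype=float)
```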
Step 407, creating a second three-dimensional model based on the first pose and the third view angle.
Step 408, sending the second three-dimensional model to the first AR device.
In the embodiment of the application, after the second three-dimensional model is created by the second AR device, the second three-dimensional model is sent to the first AR device, so that the corresponding three-dimensional model can be quickly displayed in a static scene even after the angle of view of the first AR device is changed.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing method applied to a second AR device, and as shown in fig. 5, the method includes the following steps:
Step 501, sending the fourth image and the fourth pose to the first AR device.
The fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR equipment at the second moment; the fourth pose includes a pose of the second AR device determined by the second AR device relative to the calibration plate based on the fourth image.
In the embodiment of the application, the second AR device shoots the calibration board at the second moment through the image acquisition module of the second AR device to obtain the fourth image. Further, the second AR device sends the fourth image and the fourth pose to the first AR device, so that the first AR device performs coordinate alignment based on the fourth image collected by the second AR device and the third image obtained by the first AR device shooting the calibration board.
Step 502, receiving a first pose and a second pose which are sent by the first AR device and are in the same coordinate system.
Step 503, receiving a first view angle of the first AR device sent by the first AR device.

Step 504, creating a first three-dimensional model based on the first pose and the first view angle.

Step 505, sending the first three-dimensional model to the first AR device.

In this embodiment of the application, the second AR device creates the first three-dimensional model, in the same coordinate system determined above, based on the first pose and the first view angle.
Step 506, rotating the first view angle based on the first pose to obtain a third view angle.

Step 507, creating a second three-dimensional model based on the first pose and the third view angle.
Step 508, sending the second three-dimensional model to the first AR device.
Based on the foregoing embodiments, the information processing method provided in the embodiments of the present application is further described below with reference to a usage scenario of multiple AR devices, as shown in fig. 6.
In the embodiment of the present application, the AR device may be AR glasses or a video see-through head-mounted display.
Here, AR_Device_No_3D and AR_Device_3D may be aligned by image matching of key frames or by observing a common calibration board. After the SLAM coordinate systems of the AR devices are aligned, the positions and angles of the respective devices in space can be mutually converted between the different devices.
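For illustration, once the SLAM coordinate systems are aligned, converting positions between the devices reduces to applying a 4x4 alignment transform, as in the following sketch; the transform name is an assumption (it could be obtained by the key-frame or calibration-board alignment sketched earlier).

```python
import numpy as np

def convert_point(T_no3d_from_3d, point_in_3d_frame):
    """Map a 3D point from AR_Device_3D's coordinate frame into
    AR_Device_No_3D's frame, so poses and mesh data can be shared."""
    p = np.append(np.asarray(point_in_3d_frame, dtype=float), 1.0)
    return (T_no3d_from_3d @ p)[:3]
```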
Exemplarily, referring to fig. 7, the first AR device is denoted AR_Device_No_3D, and the second AR device is denoted AR_Device_3D. In the embodiment of the present application, AR_Device_No_3D and AR_Device_3D are considered to be located in the same coordinate system. The amount of spatial mesh data reconstructed by AR_Device_3D is very large, which may cause data transmission delay, high power consumption for data transmission, and a heavy rendering load on the AR_Device_No_3D device. AR_Device_3D can therefore transmit only the mesh data within the current view angle of the AR_Device_No_3D device, which reduces the transmission amount while achieving the same effect. As shown in fig. 7, the current view angle corresponds to the field angle range of the area indicated by 71.
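For illustration, restricting transmission to the mesh data within the current view angle amounts to a simple view-cone test, sketched below; the flat vertex-array mesh layout is an assumption of this sketch.

```python
import numpy as np

def mesh_indices_in_view(vertices, cam_pos, view_dir, fov_deg):
    """Return the indices of mesh vertices inside the view cone defined by
    the device pose and field angle, so only that subset needs to be sent."""
    v = np.asarray(vertices, dtype=float) - np.asarray(cam_pos, dtype=float)
    v /= np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-9)
    d = np.asarray(view_dir, dtype=float)
    d /= np.linalg.norm(d)
    # A vertex is inside the cone if the angle between the view direction
    # and the camera-to-vertex ray is within half the field angle.
    cos_half_fov = np.cos(np.radians(fov_deg) / 2.0)
    return np.nonzero(v @ d >= cos_half_fov)[0]
```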
Of course, AR_Device_3D may instead transmit the entire spatial mesh it reconstructed to the AR_Device_No_3D device as a Full mesh, so that AR_Device_No_3D extracts the corresponding model from it and displays that model when rotating to the corresponding view angle. Here, the Full mesh includes the reconstructed three-dimensional model covering a 360-degree rotation around the current view angle.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
Based on the foregoing embodiments, an embodiment of the present application provides a first AR device, where the first AR device may be applied to an information processing method provided in the embodiments corresponding to fig. 1 to 2, and as shown in fig. 8, the first AR device 8 includes:
a first memory 81 for storing executable instructions;
a first processor 82 for executing the executable instructions stored in the first memory to implement the following steps:
determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system;
sending the first pose and the first visual angle of the first AR device to a second AR device; wherein the second AR device has a function of creating a three-dimensional model;
receiving a first three-dimensional model associated with a first view angle sent by a second AR device; wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first perspective;
displaying the first three-dimensional model.
In other embodiments of the present application, the first processor 82 is configured to execute executable instructions stored in the memory to implement the following steps:
acquiring a first image; the first image comprises an image acquired by an image acquisition module of the first AR equipment based on a second visual angle at a first moment;
receiving a second image sent by a second AR device; the second image comprises an image acquired by an image acquisition module of the second AR equipment based on a second visual angle at the first moment;
and obtaining a first pose and a second pose under the same coordinate system based on the first image and the second image.
In other embodiments of the present application, the first processor 82 is configured to execute executable instructions stored in the memory to implement the following steps:
acquiring a third image; the third image comprises an image obtained by shooting the calibration plate by the image acquisition module of the first AR equipment at the second moment;
determining a third pose of the first AR device relative to the calibration plate based on the third image;
receiving a fourth image and a fourth pose sent by a second AR device; the fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR equipment at the second moment; the fourth pose comprises a pose of the second AR device determined by the second AR device relative to the calibration plate based on the fourth image;
and obtaining a first pose and a second pose under the same coordinate system based on the third pose and the fourth pose.
In other embodiments of the present application, the first processor 82 is configured to execute executable instructions stored in the memory to implement the following steps:
displaying the first three-dimensional model in a static scene or a dynamic scene; wherein the static scene represents that objects in the environment in which the first AR device and the second AR device are located are static; the dynamic scene characterizes that objects in the environment in which the first AR device and the second AR device are located are dynamic.
In other embodiments of the present application, the first processor 82 is configured to execute executable instructions stored in the memory to implement the following steps:
receiving a second three-dimensional model associated with a third perspective sent by a second AR device; wherein the third view is a view that the second AR device rotates the first view based on the first pose; the second three-dimensional model is a model created by the second AR device based on the first pose and the third perspective;
determining that the first AR device is changed from a first view angle to a third view angle in a static scene, and displaying a second three-dimensional model; wherein the static scene characterizes that objects in the environment in which the first AR device and the second AR device are located are static.
In other embodiments of the present application, the first processor 82 is configured to execute executable instructions stored in the memory to implement the following steps:
and sending the first pose and the second pose in the same coordinate system to a second AR device.
It should be noted that, in this embodiment, a specific implementation process of the step executed by the first processor may refer to an implementation process in the information processing method provided in the embodiments corresponding to fig. 1 to 2, and details are not described here.
Based on the foregoing embodiments, an embodiment of the present application provides a second AR device, where the second AR device may be applied to the information processing method provided in the embodiments corresponding to fig. 3 to 5, and as shown in fig. 9, the second AR device 9 includes:
a second memory 91 for storing executable instructions;
a second processor 92 for executing the executable instructions stored in the second memory to implement the following steps:
determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system;
receiving a first visual angle of a first AR device sent by the first AR device;
creating a first three-dimensional model based on the first pose and the first perspective;
sending the first three-dimensional model to the first AR device.
In other embodiments of the present application, the second processor 92 is configured to execute executable instructions stored in the memory to implement the following steps:
sending the second image to the first AR device; the second image comprises an image acquired by the image acquisition module of the second AR device based on the second visual angle at the first moment.
In other embodiments of the present application, the second processor 92 is configured to execute executable instructions stored in the memory to implement the following steps:
sending the fourth image and the fourth pose to the first AR device; the fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR equipment at the second moment; the fourth pose includes a pose of the second AR device determined by the second AR device relative to the calibration plate based on the fourth image.
In other embodiments of the present application, the second processor 92 is configured to execute executable instructions stored in the memory to implement the following steps:
and receiving a first pose and a second pose which are sent by the first AR device and are in the same coordinate system.
In other embodiments of the present application, the second processor 92 is configured to execute executable instructions stored in the memory to implement the following steps:
rotating the first view angle based on the first pose to obtain a third view angle;
creating a second three-dimensional model based on the first pose and the third perspective;
and sending the second three-dimensional model to the first AR device.
It should be noted that, in this embodiment, a specific implementation process of the step executed by the second processor may refer to an implementation process in the information processing method provided in the embodiments corresponding to fig. 3 to 5, and details are not described here.
Based on the foregoing embodiments, an embodiment of the present application provides a first AR device, which may be applied to an information processing method provided in the embodiments corresponding to fig. 1 to 2, and as shown in fig. 10, the first AR device 10 (the first AR device 10 in fig. 10 has a corresponding relationship with the first AR device 8 in fig. 8) includes:
a first determination unit 1001 configured to determine that a first pose of the first AR device and a second pose of the second AR device are in the same coordinate system; wherein the second AR device has a function of creating a three-dimensional model;
a first sending unit 1002, configured to send the first pose and the first perspective of the first AR device to the second AR device;
a first receiving unit 1003, configured to receive a first three-dimensional model associated with the first view angle, where the first three-dimensional model is sent by the second AR device; wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first perspective;
a display unit 1004, configured to display the first three-dimensional model.
In other embodiments of the present application, the first determining unit is further configured to acquire a first image; the first image comprises an image acquired by an image acquisition module of the first AR device based on a second visual angle at a first moment; receiving a second image sent by the second AR device; wherein the second image comprises an image acquired by an image acquisition module of the second AR device at the first time based on the second perspective; and obtaining the first pose and the second pose in the same coordinate system based on the first image and the second image.
In other embodiments of the present application, the first determining unit is further configured to acquire a third image; the third image comprises an image obtained by shooting a calibration plate by an image acquisition module of the first AR equipment at a second moment; determining, based on the third image, a third pose of the first AR device relative to the calibration plate; receiving a fourth image and a fourth pose sent by the second AR device; the fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR device at the second moment; the fourth pose comprises a pose of the second AR device determined by the second AR device relative to the calibration plate based on the fourth image; and obtaining the first pose and the second pose in the same coordinate system based on the third pose and the fourth pose.
In other embodiments of the present application, the display unit is further configured to display the first three-dimensional model in a static scene or a dynamic scene; wherein the static scene characterizes that objects in an environment in which the first AR device and the second AR device are located are static; the dynamic scene characterizes that objects in an environment in which the first AR device and the second AR device are located are dynamic.
In other embodiments of the present application, the first receiving unit is further configured to receive a second three-dimensional model associated with a third perspective sent by the second AR device; wherein the third perspective is a perspective of the second AR device rotated from the first perspective based on the first pose; the second three-dimensional model is a model created by the second AR device based on the first pose and the third perspective;
in other embodiments of the present application, the display unit is further configured to determine that the first AR device changes from the first view angle to the third view angle in a static scene, and display the second three-dimensional model; wherein the static scene characterizes that objects in an environment in which the first AR device and the second AR device are located are static.
In other embodiments of the present application, the first sending unit is further configured to send the first pose and the second pose in the same coordinate system to the second AR device.
Based on the foregoing embodiments, an embodiment of the present application provides a second AR device, where the second AR device may be applied to an information processing method provided in the embodiments corresponding to fig. 3 to 5, and as shown in fig. 11, the second AR device 11 (the second AR device 11 in fig. 11 and the second AR device 9 in fig. 9 have a corresponding relationship) includes:
a second determining unit 1101 configured to determine that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system;
a second receiving unit 1102, configured to receive a first view of the first AR device sent by the first AR device;
a processing unit 1103 for creating a first three-dimensional model based on the first pose and the first perspective;
a second sending unit 1104, configured to send the first three-dimensional model to the first AR device.
In other embodiments of the present application, the second sending unit is further configured to send a second image to the first AR device; wherein the second image comprises an image acquired by an image acquisition module of the second AR device at a first time based on a second perspective.
In other embodiments of the present application, the second sending unit is further configured to send a fourth image and a fourth pose to the first AR device; the fourth image comprises an image obtained by shooting a calibration plate by an image acquisition module of the second AR equipment at a second moment; the fourth pose includes a pose of the second AR device relative to the calibration plate determined by the second AR device based on the fourth image.
In other embodiments of the present application, the second determining unit is configured to receive the first pose and the second pose in the same coordinate system sent by the first AR device.
In other embodiments of the present application, the processing unit is further configured to rotate the first view angle based on the first pose to obtain a third view angle; creating a second three-dimensional model based on the first pose and the third perspective;
in other embodiments of the present application, the second sending unit is further configured to send the second three-dimensional model to the first AR device.
Based on the foregoing embodiments, embodiments of the application provide a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more first processors to implement the steps of:
determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system;
sending the first pose and the first visual angle of the first AR device to a second AR device; wherein the second AR device has a function of creating a three-dimensional model;
receiving a first three-dimensional model associated with a first view angle sent by a second AR device; wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first perspective;
displaying the first three-dimensional model.
In other embodiments of the present application, the one or more programs are executable by the one or more first processors and further implement the steps of:
acquiring a first image; the first image comprises an image acquired by an image acquisition module of the first AR equipment based on a second visual angle at a first moment;
receiving a second image sent by a second AR device; the second image comprises an image acquired by an image acquisition module of the second AR equipment based on a second visual angle at the first moment;
and obtaining a first pose and a second pose under the same coordinate system based on the first image and the second image.
In other embodiments of the present application, the one or more programs are executable by the one or more first processors and further implement the steps of:
acquiring a third image; the third image comprises an image obtained by shooting the calibration plate by the image acquisition module of the first AR equipment at the second moment;
determining a third pose of the first AR device relative to the calibration plate based on the third image;
receiving a fourth image and a fourth pose sent by a second AR device; the fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR equipment at the second moment; the fourth pose comprises a pose of the second AR device determined by the second AR device relative to the calibration plate based on the fourth image;
and obtaining a first pose and a second pose under the same coordinate system based on the third pose and the fourth pose.
In other embodiments of the present application, the one or more programs are executable by the one or more first processors and further implement the steps of:
displaying the first three-dimensional model in a static scene or a dynamic scene; wherein the static scene represents that objects in the environment in which the first AR device and the second AR device are located are static; the dynamic scene characterizes that objects in the environment in which the first AR device and the second AR device are located are dynamic.
In other embodiments of the present application, the one or more programs are further executable by the one or more first processors to implement the steps of:
receiving a second three-dimensional model associated with a third perspective and sent by the second AR device; wherein the third perspective is a perspective obtained by the second AR device rotating the first perspective based on the first pose; the second three-dimensional model is a model created by the second AR device based on the first pose and the third perspective;
determining that the first AR device has changed from the first perspective to the third perspective in a static scene, and displaying the second three-dimensional model; wherein the static scene characterizes that objects in the environment in which the first AR device and the second AR device are located are static.
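As an illustrative sketch only, the displaying condition above amounts to checking whether the first device's current viewing direction has rotated to the third perspective while the scene remains static. Representing a perspective as a unit direction vector, the 5-degree tolerance, and every name below are assumptions, not part of the application.

```python
# Illustrative sketch: display the pre-fetched second model once the
# current perspective matches the third perspective in a static scene.
import numpy as np

def perspective_matches(current_dir, target_dir, tol_deg=5.0):
    """True when two unit viewing directions differ by at most tol_deg."""
    cos_angle = np.clip(np.dot(current_dir, target_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= tol_deg

def maybe_display_second_model(current_dir, third_dir, scene_is_static,
                               display, second_model):
    # Only a static scene guarantees the model built in advance for the
    # third perspective is still consistent with the environment.
    if scene_is_static and perspective_matches(current_dir, third_dir):
        display(second_model)
```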
In other embodiments of the present application, the one or more programs are further executable by the one or more first processors to implement the steps of:
sending the first pose and the second pose in the same coordinate system to the second AR device.
It should be noted that, for a specific implementation process of the steps executed by the first processor in this embodiment, reference may be made to the implementation process of the information processing method provided in the embodiments corresponding to fig. 1 to fig. 2, and details are not repeated here.
Based on the foregoing embodiments, embodiments of the present application provide a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more second processors to implement the steps of:
determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system;
receiving the first perspective of the first AR device sent by the first AR device;
creating a first three-dimensional model based on the first pose and the first perspective;
sending the first three-dimensional model to the first AR device.
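Purely as an illustration of this request/response exchange (the application does not specify any transport or message format), length-prefixed JSON over a TCP connection is one minimal framing. The helper below runs on the second device; build_model_from is a hypothetical stand-in for its model-creation pipeline.

```python
# Illustrative sketch: the second device receives the first pose and the
# first perspective, builds a model, and returns it. The transport and
# message layout are assumptions of this sketch.
import json
import socket
import struct

def recv_exactly(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a TCP stream (recv may return fewer)."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before full message arrived")
        buf += chunk
    return buf

def handle_first_device(conn: socket.socket, build_model_from) -> None:
    # Length-prefixed JSON keeps framing simple on a byte stream.
    (length,) = struct.unpack("!I", recv_exactly(conn, 4))
    request = json.loads(recv_exactly(conn, length).decode("utf-8"))

    first_pose = request["first_pose"]          # e.g. 4x4 matrix, row-major
    first_perspective = request["perspective"]  # e.g. unit direction vector

    model_bytes = build_model_from(first_pose, first_perspective)
    conn.sendall(struct.pack("!I", len(model_bytes)) + model_bytes)
```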
In other embodiments of the present application, the one or more programs are further executable by the one or more second processors to implement the steps of:
sending the second image to the first AR device; wherein the second image comprises an image acquired by the image acquisition module of the second AR device at the first time based on the second perspective.
In other embodiments of the present application, the one or more programs are further executable by the one or more second processors to implement the steps of:
sending the fourth image and the fourth pose to the first AR device; wherein the fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR device at the second time; the fourth pose comprises a pose of the second AR device relative to the calibration plate, determined by the second AR device based on the fourth image.
In other embodiments of the present application, the one or more programs are further executable by the one or more second processors to implement the steps of:
receiving the first pose and the second pose in the same coordinate system sent by the first AR device.
In other embodiments of the present application, the one or more programs are further executable by the one or more second processors to implement the steps of:
rotating the first perspective based on the first pose to obtain a third perspective;
creating a second three-dimensional model based on the first pose and the third perspective;
and sending the second three-dimensional model to the first AR device.
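As an illustrative sketch, rotating the first perspective based on the first pose can be realized with Rodrigues' rotation formula about an axis taken from the pose. Choosing the device's up-axis as the rotation axis and the 30-degree step are assumptions of this sketch, not choices made by the application.

```python
# Illustrative sketch: derive a third perspective by rotating the first
# perspective about the device's up-axis taken from the first pose.
import numpy as np

def rotate_perspective(first_pose, first_perspective, angle_deg=30.0):
    """first_pose: 4x4 device-to-world transform; first_perspective: unit
    viewing direction in world coordinates. Returns the rotated direction."""
    up = first_pose[:3, 1]                  # assumed device up-axis (world)
    up = up / np.linalg.norm(up)
    theta = np.radians(angle_deg)

    # Rodrigues' rotation formula about the unit axis `up`.
    v = np.asarray(first_perspective, dtype=float)
    third = (v * np.cos(theta)
             + np.cross(up, v) * np.sin(theta)
             + up * np.dot(up, v) * (1.0 - np.cos(theta)))
    return third / np.linalg.norm(third)
```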
It should be noted that, for a specific implementation process of the steps executed by the second processor in this embodiment, reference may be made to the implementation process of the information processing method provided in the embodiments corresponding to fig. 3 to fig. 5, and details are not repeated here.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.
Claims (12)
1. An information processing method, characterized in that the method comprises:
determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system; wherein the second AR device has a function of creating a three-dimensional model;
sending the first pose and the first perspective of the first AR device to the second AR device;
receiving a first three-dimensional model associated with the first perspective and sent by the second AR device; wherein the first three-dimensional model is a model created by the second AR device based on the first pose and the first perspective;
displaying the first three-dimensional model.
2. The method of claim 1, wherein the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system comprises:
acquiring a first image; wherein the first image comprises an image acquired by an image acquisition module of the first AR device at a first time based on a second perspective;
receiving a second image sent by the second AR device; wherein the second image comprises an image acquired by an image acquisition module of the second AR device at the first time based on the second perspective;
and obtaining the first pose and the second pose in the same coordinate system based on the first image and the second image.
3. The method of claim 1, wherein the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system comprises:
acquiring a third image; wherein the third image comprises an image obtained by shooting a calibration plate by an image acquisition module of the first AR device at a second time;
determining, based on the third image, a third pose of the first AR device relative to the calibration plate;
receiving a fourth image and a fourth pose sent by the second AR device; wherein the fourth image comprises an image obtained by shooting the calibration plate by the image acquisition module of the second AR device at the second time; the fourth pose comprises a pose of the second AR device relative to the calibration plate, determined by the second AR device based on the fourth image;
and obtaining the first pose and the second pose in the same coordinate system based on the third pose and the fourth pose.
4. The method according to any one of claims 1 to 3, wherein the displaying the first three-dimensional model comprises:
displaying the first three-dimensional model in a static scene or a dynamic scene; wherein the static scene characterizes that objects in an environment in which the first AR device and the second AR device are located are static; the dynamic scene characterizes that objects in an environment in which the first AR device and the second AR device are located are dynamic.
5. The method according to any one of claims 1 to 3, further comprising:
receiving a second three-dimensional model associated with a third perspective and sent by the second AR device; wherein the third perspective is a perspective obtained by the second AR device rotating the first perspective based on the first pose; the second three-dimensional model is a model created by the second AR device based on the first pose and the third perspective;
determining that the first AR device has changed from the first perspective to the third perspective in a static scene, and displaying the second three-dimensional model; wherein the static scene characterizes that objects in an environment in which the first AR device and the second AR device are located are static.
6. The method according to any one of claims 1 to 3, wherein after the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, the method further comprises:
and sending the first pose and the second pose in the same coordinate system to the second AR device.
7. An information processing method, characterized in that the method comprises:
determining that a first pose of a first AR device and a second pose of a second AR device are in the same coordinate system;
receiving a first perspective of the first AR device sent by the first AR device;
creating a first three-dimensional model based on the first pose and the first perspective;
sending the first three-dimensional model to the first AR device.
8. The method of claim 7, wherein prior to the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, the method further comprises:
sending a second image to the first AR device; wherein the second image comprises an image acquired by an image acquisition module of the second AR device at a first time based on a second perspective.
9. The method of claim 7, wherein prior to the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system, the method further comprises:
sending a fourth image and a fourth pose to the first AR device; wherein the fourth image comprises an image obtained by shooting a calibration plate by an image acquisition module of the second AR device at a second time; the fourth pose comprises a pose of the second AR device relative to the calibration plate, determined by the second AR device based on the fourth image.
10. The method of claim 7, wherein the determining that the first pose of the first AR device and the second pose of the second AR device are in the same coordinate system comprises:
receiving the first pose and the second pose in the same coordinate system sent by the first AR device.
11. The method according to any one of claims 7 to 10, further comprising:
rotating the first perspective based on the first pose to obtain a third perspective;
creating a second three-dimensional model based on the first pose and the third perspective;
sending the second three-dimensional model to the first AR device.
12. An AR device, characterized in that the AR device comprises:
a memory for storing executable instructions;
a processor for executing the executable instructions stored in the memory to implement the information processing method of any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911049188.8A | 2019-10-31 | 2019-10-31 | Information processing method and AR equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110837297A | 2020-02-25 |
CN110837297B | 2021-07-16 |
Family
ID=69575988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911049188.8A | Information processing method and AR equipment | 2019-10-31 | 2019-10-31 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110837297B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113315938A (en) * | 2021-04-23 | 2021-08-27 | 杭州易现先进科技有限公司 | Method and system for recording third visual angle of AR experience |
CN114489342A (en) * | 2022-01-29 | 2022-05-13 | 联想(北京)有限公司 | Image processing method and device and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160163107A1 (en) * | 2014-12-09 | 2016-06-09 | Industrial Technology Research Institute | Augmented Reality Method and System, and User Mobile Device Applicable Thereto |
CN106408668A (en) * | 2016-09-09 | 2017-02-15 | 京东方科技集团股份有限公司 | AR equipment and method for AR equipment to carry out AR operation |
CN107274483A (en) * | 2017-06-14 | 2017-10-20 | 广东工业大学 | A kind of object dimensional model building method |
CN108573530A (en) * | 2018-03-29 | 2018-09-25 | 麒麟合盛网络技术股份有限公司 | Augmented reality AR exchange methods and system |
CN109087359A (en) * | 2018-08-30 | 2018-12-25 | 网易(杭州)网络有限公司 | Pose determines method, pose determining device, medium and calculates equipment |
CN109255843A (en) * | 2018-09-26 | 2019-01-22 | 联想(北京)有限公司 | Three-dimensional rebuilding method, device and augmented reality AR equipment |
CN109582687A (en) * | 2017-09-29 | 2019-04-05 | 白欲立 | A kind of data processing method and device based on augmented reality |
CN109992111A (en) * | 2019-03-25 | 2019-07-09 | 联想(北京)有限公司 | Augmented reality extended method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |