CN114373016A - Method for realizing point positioning in an extended reality technology scene - Google Patents

Method for realizing point positioning in an extended reality technology scene

Info

Publication number
CN114373016A
CN114373016A (application CN202111633601.2A)
Authority
CN
China
Prior art keywords
point
dimensional
plane
positioning
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111633601.2A
Other languages
Chinese (zh)
Inventor
Wei Peng (魏鹏)
Sun Qian (孙倩)
Xu Jiaqi (徐佳琦)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202111633601.2A priority Critical patent/CN114373016A/en
Publication of CN114373016A publication Critical patent/CN114373016A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for realizing point positioning in an extended reality technology scene, which comprises the following steps: selecting a point in a two-dimensional plane or curved surface of a coordinate system; and adding a dimension on the basis of the two-dimensional coordinate positioning of that point to form a third dimension, thereby determining the three-dimensional coordinates of a spatial point. The user does not need to move through space or make bodily movements with the limbs. For work of large volume or long duration, carried out in an operating environment where familiar equipment is retained and only functions are added, the cost of learning and adaptation is reduced: the user can complete long, high-intensity work without taking off the head-mounted display or glasses and without walking around the scene. The learning cost is relatively low, compatibility with existing habits is better, and user acceptance is higher.

Description

Method for realizing point positioning in an extended reality technology scene
Technical Field
The invention relates to the field of computer technology, and in particular to a method for realizing point positioning in an extended reality technology scene.
Background
Scene experience equipment for extended reality technologies (VR, AR and MR) provides a brand-new means of human-computer interaction, built from a collection of technologies including simulation, computer graphics, human-machine interfaces, multimedia, sensing and networking, combined with computers and the latest sensor technology. Most of the related external input devices currently on the market take forms such as rings, bracelets, styluses and handles for real-time scene experience.
With the development and use of extended reality technologies (VR, AR and MR), users will carry out long, high-intensity work in virtual or augmented scenes, or in scenes combining the two, and positioning technology must allow spatial positioning and manipulation of the relevant elements in the scene as required. At present, scene experience means such as carrying a handle while wearing head-mounted display equipment or glasses cannot achieve accurate positioning, and are unsuited to work demanding a certain intensity and precision; moreover, such inaccurate positioning depends on novel equipment. As a new field, extended reality technology (VR, AR and MR) remains little explored, and the learning cost for users is high.
In the multi-camera-based multi-target position capturing and positioning system and method (CN106843460B), the glasses serving as the virtual vision provider are only one part of the whole virtual reality framework; how to project the experiencer into the virtual environment and control the movement of a virtual character through the experiencer's own movement is an urgent problem for the next step of virtual reality. The most direct solution is indoor positioning and position capture technology. Existing indoor wireless positioning systems mainly adopt short-range wireless technologies such as mobile base stations, infrared, ultrasound, Bluetooth, Wi-Fi and RFID; the equipment is expensive, installation and configuration are complex, and the precision cannot meet the requirements of virtual reality experience. Computer vision techniques that capture the positional movement of the glasses through cameras are now widely used by manufacturers, but owing to the limitation of a single camera they can only capture simple head movements of a single target within a small range and cannot be extended to a whole room or a larger area.
In this prior art, the multi-camera multi-target position capturing and positioning system comprises a server, at least two cameras, a plurality of VR glasses, clients and light spheres, the numbers of clients, light spheres and VR glasses being the same. The LED light sphere provides the experiencer's position to the camera; the server receives the image data sent by the cameras and processes it to obtain the experiencer's position data; the client reads the data sent by the VR glasses and sends it to the server, receives and processes the experiencer position data sent by the server, converts the position data of all experiencers into virtual-environment coordinates, generates the virtual environment or model in real time, and renders the virtual picture to the VR glasses in real time. This scheme frees the experiencer from the earlier constraints on free movement and provides a virtual reality experience close to the real environment; it also supports multiple people interacting together in the same virtual environment. However, because the scheme involves multiple cameras and image acquisition, lighting is a factor that destabilizes the positioning information, and camera capture carries large errors, so the position data is not accurate enough; the scheme cannot realize spatial positioning of an arbitrary point with a given precision.
In the data processing method and device for a virtual scene (CN112530024A), different mobile terminals in a multi-user interactive virtual scene may differ in type or model (for example, a mobile phone and VR glasses, or an Apple phone and a Huawei phone), and therefore run different spatial positioning engine algorithms. Coordinate system synchronization (content synchronization) between different mobile terminals is the core technology: through it, the virtual content in a multi-user VR, AR or MR application can be shared and interacted with in the same coordinate space. One existing solution implements coordinate system synchronization based on a plane of regular geometric shape (e.g., a square desktop), as follows: the mobile terminal calculates the coordinates of the center point of the horizontal desktop at the current position as the origin of the terminal reference coordinate system, then uses the desktop edge information to calculate the coordinate axes of the reference coordinate system, thereby establishing the terminal reference coordinate system. Since the calculation process is the same for every user's mobile terminal, the same reference coordinate system is established; using the pose output by the terminal's spatial positioning engine, the pose of each terminal relative to the reference coordinate system can be calculated, hence the relative pose between terminals, realizing coordinate system synchronization. With this scheme, mobile terminals of different types can interact in a multi-user VR, AR or MR application on the same desktop. However, because the scheme depends on a plane of regular geometry, the application scenarios are limited; it cannot realize coordinate system synchronization on an arbitrary plane or in a non-planar scene.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method for realizing point positioning in an extended reality technology (VR, AR and MR) scene, which enables the user to achieve accurate three-dimensional positioning of a spatial point in such a scene. The invention adopts a positioning mode of first determining the two-dimensional coordinates of a point (first finding a two-dimensional plane) and then adding a dimension to determine the three-dimensional coordinates of the spatial point; positioning or moving any point in space does not depend on spatial movement of the device, since planar movement of the device realizes three-dimensional positioning and movement of the point. The user need not move through space or make bodily movements with the limbs; for work of large volume or long duration, an operating environment in which familiar equipment is retained and only functions are added reduces the cost of learning and adaptation, and long, high-intensity work can be completed without taking off the head-mounted display or glasses and without walking around the scene. The learning cost is relatively low, compatibility with existing habits is better, and user acceptance is higher.
The invention is realized by at least one of the following technical schemes.
A method for realizing point positioning in an extended reality technology scene comprises the following steps: first, a point is selected in a two-dimensional plane or curved surface of a coordinate system; then a dimension is added on the basis of the two-dimensional coordinate positioning of the point to form a third dimension, so that the three-dimensional coordinates of a spatial point are determined.
Furthermore, the coordinate system is a rectangular coordinate system. In a coordinate plane xOy of the rectangular coordinate system, the planar movement of the external input device on the working desktop is mapped to the two-dimensional coordinates (x, y) of the point of the extended reality scene space in the coordinate plane xOy; the external input device is enabled to control the third dimension, so that the point of the extended reality scene space generates a value in the z direction and the point has three-dimensional coordinates (x, y, z) in the scene space, thereby realizing the spatial positioning of the point.
Further, in a rectangular spatial coordinate system with origin O, let one surface in the global coordinate system be a common plane α and let a point A' lie on the plane α; the external input device controls the point A' to generate two-dimensional coordinates on the plane α. The external input device (such as a function area added to a mouse or a function key of a handle) then controls the point A' to move to a point A on OA', i.e., a value is generated in the third dimension, thereby realizing the positioning of the point from the two-dimensional plane into three-dimensional space.
Further, for the movement of a point: the external input device controls the target point A so that the mapping point of the target point A on the plane α is converted from a point A' to a point B', and the external input device then controls the point B' to move to a point B on OB', thereby realizing the movement of the point from the point A to the point B.
Further, the coordinate system is a spherical coordinate system. In a spherical coordinate system with origin O, the angles (φ, θ) of the point P are determined first: the point Q is the projection of the located point P on the xOy plane, φ is the angle swept counterclockwise from the x-axis to the directed line segment OQ, and θ is the angle between the directed line segment OP and the positive z-axis; positioning is thus carried out in a two-dimensional plane. The external input device then supplies input in the added third dimension, generating a value r in the direction of the directed line segment OP, r being the distance from the origin O to the point P, thereby realizing two-dimensional to three-dimensional positioning of the point.
Further, the mapping point of the point Q in the scene is the point P; dynamically, this is represented as the projection point Q of P on the xOy plane moving in the plane QOz while the line segment OP is given length h, i.e., a value is generated in the direction OP of the third dimension, thereby realizing two-dimensional to three-dimensional positioning of the point.
Further, for the movement of the point: point P1Projection point Q on xOy plane1Determination of the plane Q1Oz, and let line segment OP1With a length l, moving the projection point Q on the xOy plane1Ultimate point Q2Determining the plane Q2Oz, and let line segment OP2Length is k, thereby realizing point by P1Is oriented to P2Is moved.
Further, the coordinate system is a cylindrical coordinate system. In a cylindrical coordinate system, θ = θ0 represents a half-plane through the z-axis, and r = r0 represents a cylindrical surface with the z-axis as its axis. The external input device inputs r = r0 to determine a cylindrical surface; the two-dimensional coordinates (θ, z) of a point are determined on this two-dimensional curved surface; different positions of the cylindrical surface are determined by assigning r; together these determine the three-dimensional coordinates (r, θ, z) of the spatial point.
Furthermore, the spatial point is mapped onto a cylindrical surface with the z-axis as central axis, so that a two-dimensional coordinate is determined on the two-dimensional curved surface; changing the position of the cylindrical surface, i.e., generating different values in the third dimension, realizes two-dimensional to three-dimensional positioning of the point.
Further, for the spatial movement of points: there are two spatial points A and B, mapped onto two respective cylinders r = r1 and r = r2. The projection of the spatial point A on the two-dimensional curved surface r = r2 is A'; the movement of the point A' to the point B within the surface r = r2 maps to the movement of the point A to the point B in space.
Compared with the prior art, the invention has the beneficial effects that:
the user does not need to move in space or perform body movement by limbs, and for the operation with large workload or long working time, the learning and adaptation cost is reduced in the operation environment that the original familiar equipment is used and only the functions are added, which is obviously a more reasonable choice.
The user can complete long, high-intensity work without taking off the head-mounted display or glasses and without walking around the scene. The learning cost is relatively low, compatibility with existing habits is better, and user acceptance is higher.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly described below; obviously, the drawings described below show only some embodiments of the present invention.
Fig. 1 is a flowchart of the method for realizing point positioning in an extended reality scene according to an embodiment;
Fig. 2 is a schematic diagram of positioning a point in a rectangular coordinate system in an extended reality scene according to an embodiment;
Fig. 3 is a schematic diagram of moving and positioning a point in a rectangular coordinate system in an extended reality scene according to an embodiment;
Fig. 4 is a schematic diagram of positioning a point in a spherical coordinate system in an extended reality scene according to an embodiment;
Fig. 5 is a schematic diagram of positioning a point in a cylindrical coordinate system in an extended reality scene according to an embodiment.
Detailed Description
Reference will now be made in detail to the present preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
In the description of the present invention, it should be understood that orientations or positional relationships such as upper, lower, front, rear, left and right are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, and "a plurality" means two or more; "greater than", "less than", "exceeding" and the like are understood as excluding the stated number, while "above", "below", "within" and the like are understood as including the stated number. If "first" and "second" are used, they serve only to distinguish technical features and are not to be understood as indicating or implying relative importance, implicitly indicating the number of technical features indicated, or implicitly indicating the precedence of the technical features indicated.
Example 1
A method for realizing point positioning in an extended reality technology scene, as shown in Fig. 1, comprises the following steps: first, a point is selected in a two-dimensional plane or curved surface of a coordinate system; then a dimension is added on the basis of the two-dimensional coordinate positioning of the point to form a third dimension, thereby determining the three-dimensional coordinates of a spatial point.
As shown in Fig. 2, in rectangular coordinates there is a coordinate plane xOy. The planar movement of the external input device (e.g., a mouse) on the working desktop is mapped to the two-dimensional coordinates (x, y) of the extended reality scene space point in the coordinate plane xOy; enabling the function key or function area of the external input device that controls the third dimension causes the extended reality scene space point to generate a value in the z direction, giving the point three-dimensional coordinates (x, y, z) in the scene space and thereby realizing its spatial positioning.
As shown in Fig. 3, in a rectangular spatial coordinate system with origin O, let one surface in the global coordinate system be a common plane α and let a point A' lie on the plane α; the external input device (e.g., the mouse moving on the work table) controls the point A' to generate two-dimensional coordinates on the plane α. The external input device (such as a function area added to a mouse or a function key of a handle) then controls the point A' to move to the point A on OA', i.e., a value is generated in the third dimension, realizing the positioning of the point from the two-dimensional plane into three-dimensional space. For the movement of the point, the external input device controls the point A so that its mapping point on the plane α is converted from A' to B'; the external input device (such as a function area added to a mouse or a function key of a handle) then controls the point B' to move to the point B on OB', thereby realizing the movement of the point from A to B.
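To make the mapping concrete, the following minimal Python sketch (illustrative only; the patent specifies no software interface, and the handler names, event granularity and scroll-style third-dimension control are assumptions) implements the Fig. 2 style mapping: planar device motion drives (x, y), and the added function key or area drives z.

    from dataclasses import dataclass

    @dataclass
    class Point3D:
        x: float = 0.0
        y: float = 0.0
        z: float = 0.0

    class RectangularPositioner:
        """Hypothetical input handler: planar motion -> (x, y), extra control -> z."""

        def __init__(self) -> None:
            self.point = Point3D()

        def on_planar_move(self, dx: float, dy: float) -> None:
            # Planar movement of the device on the desktop shifts the
            # mapping point A' in the coordinate plane xOy.
            self.point.x += dx
            self.point.y += dy

        def on_third_dimension(self, dz: float) -> None:
            # The added function key/area generates a value in the z
            # direction, lifting the point off the plane.
            self.point.z += dz

    # Positioning A, then moving to B (cf. Fig. 3: A' -> B' in the plane,
    # then the third-dimension control completes the spatial move):
    p = RectangularPositioner()
    p.on_planar_move(3.0, 4.0)     # mapping point A' = (3, 4)
    p.on_third_dimension(5.0)      # spatial point A = (3, 4, 5)
    p.on_planar_move(-1.0, 2.0)    # mapping point B' = (2, 6)
    p.on_third_dimension(-2.0)     # spatial point B = (2, 6, 3)
    print(p.point)                 # Point3D(x=2.0, y=6.0, z=3.0)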
Example 2
As shown in Fig. 4, in a spherical coordinate system with origin O, the values of (φ, θ) of the point P are determined first (the point Q is the projection of the located point P on the xOy plane, φ is the angle swept counterclockwise from the x-axis to the directed line segment OQ, and θ is the angle between the directed line segment OP and the positive z-axis), so that positioning is carried out in a two-dimensional plane. The external input device then supplies input in the added third dimension, generating a value r (the distance from the origin O to the point P) in the direction of the directed line segment OP, realizing two-dimensional to three-dimensional positioning of the point. The mapping point of the point Q in the scene is the point P; dynamically, this is represented as the projection point Q of P on the xOy plane moving in the plane QOz while the line segment OP is given length h, i.e., a value is generated in the direction OP of the third dimension, thereby realizing two-dimensional to three-dimensional positioning of the point.
For the movement of points in the spherical coordinate system: the projection point Q1 of the point P1 on the xOy plane determines the plane Q1Oz, and the line segment OP1 is given length l; moving the projection point on the xOy plane from Q1 to a point Q2 determines the plane Q2Oz, and the line segment OP2 is given length k, thereby realizing the movement of the point from position P1 to position P2.
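The conversion from the spherical triple (r, θ, φ) to Cartesian scene coordinates is the standard one. The following is a hedged sketch (the division of labour follows the description above: planar input selects φ and θ, and the third-dimension control assigns the length of OP; the numeric values are illustrative):

    import math

    def spherical_to_cartesian(r: float, theta: float, phi: float) -> tuple:
        """theta: angle between OP and the positive z-axis;
        phi: counterclockwise angle from the x-axis to OQ,
        where Q is the projection of P on the xOy plane."""
        x = r * math.sin(theta) * math.cos(phi)
        y = r * math.sin(theta) * math.sin(phi)
        z = r * math.cos(theta)
        return (x, y, z)

    # Positioning: planar input fixes (phi, theta); the third-dimension
    # input then gives OP the length h.
    P1 = spherical_to_cartesian(r=2.0, theta=math.pi / 4, phi=math.pi / 3)

    # Movement P1 -> P2: move the projection point Q1 to Q2 (new phi and
    # theta) and give OP2 the new length k.
    P2 = spherical_to_cartesian(r=3.0, theta=math.pi / 6, phi=math.pi / 2)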
Example 3
As shown in Fig. 5, in the cylindrical coordinate system, θ = θ0 represents a half-plane through the z-axis and r = r0 represents a cylindrical surface with the z-axis as its axis. The external input device inputs r = r0 to determine a cylindrical surface; the two-dimensional coordinates (θ, z) of a point are determined on this two-dimensional curved surface; different positions of the cylindrical surface are determined by assigning r; together these determine the three-dimensional coordinates (r, θ, z) of the spatial point. The spatial point is mapped onto a cylindrical surface with the z-axis as central axis, so that a two-dimensional coordinate is determined on the two-dimensional curved surface; changing the position of the cylindrical surface, i.e., generating different values in the third dimension, realizes two-dimensional to three-dimensional positioning of the point.
Spatial movement of points in the cylindrical coordinate system: there are two spatial points A and B, mapped onto two respective cylinders r = r1 and r = r2. The projection of the spatial point A on the two-dimensional curved surface r = r2 is A'; the movement of the point A' to the point B within the surface r = r2 maps to the movement of the point A to the point B in space.
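A corresponding sketch for the cylindrical case (again illustrative; the function names and values are assumptions): the third-dimension input selects the cylinder r = r0, and planar input then picks (θ, z) on that curved surface.

    import math

    def cylindrical_to_cartesian(r: float, theta: float, z: float) -> tuple:
        # Standard cylindrical-to-Cartesian conversion.
        return (r * math.cos(theta), r * math.sin(theta), z)

    # Point A lives on the cylinder r = r1; its projection A' onto the
    # cylinder r = r2 keeps the same (theta, z).
    r1, r2 = 1.0, 2.0
    theta_a, z_a = math.pi / 3, 0.5
    A = cylindrical_to_cartesian(r1, theta_a, z_a)
    A_prime = cylindrical_to_cartesian(r2, theta_a, z_a)

    # Moving A' to B within the surface r = r2 (planar input changes
    # theta and z) maps to the spatial movement of A to B.
    theta_b, z_b = math.pi / 2, 1.5
    B = cylindrical_to_cartesian(r2, theta_b, z_b)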
The positions of the origins of the three coordinate systems can be set as required. In particular, the planar movement of the mouse on the working desktop maps two of the dimensions of the positioning method in each of the three coordinate systems; since a spatial point must be determined by coordinates in three dimensions, the two dimensions supplied by the planar mouse movement can be chosen from the three coordinates of each system as an ordered pair, i.e., in A(3,2) = 3 × 2 = 6 ways. Considering the three types of coordinate system together, there are 18 positioning methods in total.
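The total of 18 can be checked by enumeration; the snippet below is a sanity check only and assumes that the ordered choice of which two coordinates the planar movement supplies distinguishes positioning methods.

    from itertools import permutations

    systems = {
        "rectangular": ("x", "y", "z"),
        "spherical": ("r", "theta", "phi"),
        "cylindrical": ("r", "theta", "z"),
    }

    # For each coordinate system, the planar device movement supplies an
    # ordered pair of two of the three coordinates: A(3,2) = 6 layouts.
    methods = [
        (name, pair)
        for name, axes in systems.items()
        for pair in permutations(axes, 2)
    ]
    print(len(methods))  # 18 = 3 coordinate systems x 6 layouts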
The foregoing is only illustrative of embodiments of the present invention and of the application of its technical principles. The present invention is not limited to the particular embodiments described herein; various obvious changes, rearrangements and substitutions may be made by those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described through the above embodiments, it is not limited to them and may include many other equivalent embodiments without departing from its concept.

Claims (10)

1. A method for realizing point positioning in an extended reality technology scene, characterized by comprising: first selecting a point in a two-dimensional plane or curved surface of a coordinate system, and then adding a dimension on the basis of the two-dimensional coordinate positioning of the point to form a third dimension, thereby determining the three-dimensional coordinates of a spatial point.
2. The method for realizing point positioning in an extended reality technology scene according to claim 1, wherein the coordinate system is a rectangular coordinate system; in a coordinate plane xOy of the rectangular coordinate system, the planar movement of an external input device on the working desktop is mapped to the two-dimensional coordinates (x, y) of the point of the extended reality scene space in the coordinate plane xOy; the external input device is enabled to control the third dimension, so that the point of the extended reality scene space generates a value in the z direction and the point has three-dimensional coordinates (x, y, z) in the scene space, thereby realizing the spatial positioning of the point.
3. The method for realizing point positioning in an extended reality technology scene according to claim 1, wherein, in a rectangular spatial coordinate system with origin O, one surface in the global coordinate system is defined as a common plane α, a point A' is defined on the common plane α, and the external input device controls the point A' to generate two-dimensional coordinates on the common plane α; the external input device (such as a function area added to a mouse or a function key of a handle) controls the point A' to move to a point A on OA', i.e., a value is generated in the third dimension, thereby realizing the positioning of the point from the two-dimensional plane into three-dimensional space.
4. The method for realizing point positioning in an extended reality technology scene according to claim 2, wherein, for the movement of a point: the external input device controls the target point A so that the mapping point of the target point A on the plane α is converted from a point A' to a point B', and the external input device controls the point B' to move to a point B on OB', thereby realizing the movement of the point from the point A to the point B.
5. The method for realizing point positioning in an extended reality technology scene according to claim 1, wherein the coordinate system is a spherical coordinate system; in a spherical coordinate system with origin O, the angles (φ, θ) of the point P are determined first, where the point Q is the projection of the located point P on the xOy plane, φ is the angle swept counterclockwise from the x-axis to the directed line segment OQ, and θ is the angle between the directed line segment OP and the positive z-axis, so that positioning is carried out in a two-dimensional plane; and the external input device supplies input in the added third dimension, generating a value r in the direction of the directed line segment OP, r being the distance from the origin O to the point P, thereby realizing two-dimensional to three-dimensional positioning of the point.
6. The method for realizing point positioning in an extended reality technology scene according to claim 5, wherein the mapping point of the point Q in the scene is the point P; dynamically, this is represented as the projection point Q of the point P on the xOy plane moving in the plane QOz while the line segment OP is given length h, i.e., a value is generated in the direction OP of the third dimension, thereby realizing two-dimensional to three-dimensional positioning of the point.
7. The method for realizing point positioning in an extended reality technology scene according to claim 5, wherein, for the movement of a point: the projection point Q1 of the point P1 on the xOy plane determines the plane Q1Oz, and the line segment OP1 is given length l; moving the projection point on the xOy plane from Q1 to a point Q2 determines the plane Q2Oz, and the line segment OP2 is given length k, thereby realizing the movement of the point from position P1 to position P2.
8. The method for realizing point positioning in an extended reality technology scene according to claim 1, wherein the coordinate system is a cylindrical coordinate system, in which θ = θ0 represents a half-plane through the z-axis and r = r0 represents a cylindrical surface with the z-axis as its axis; the external input device inputs r = r0 to determine a cylindrical surface, the two-dimensional coordinates (θ, z) of a point are determined on the two-dimensional curved surface, different positions of the cylindrical surface are determined by assigning r, and together these determine the three-dimensional coordinates (r, θ, z) of the spatial point.
9. The method for realizing point positioning in an extended reality technology scene according to claim 8, wherein the spatial point is mapped onto a cylindrical surface with the z-axis as the central axis, so that a two-dimensional coordinate is determined on the two-dimensional curved surface, and changing the position of the cylindrical surface, i.e., generating different values in the third dimension, realizes two-dimensional to three-dimensional positioning of the point.
10. The method for realizing point positioning in an extended reality technology scene according to claim 8, wherein, for the spatial movement of points: there are two spatial points A and B, mapped onto two respective cylinders r = r1 and r = r2; the projection of the spatial point A on the two-dimensional curved surface r = r2 is A', and the movement of the point A' to the point B within the surface r = r2 maps to the movement of the point A to the point B in space.
CN202111633601.2A 2021-12-28 2021-12-28 Method for realizing point positioning in an extended reality technology scene Pending CN114373016A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111633601.2A CN114373016A (en) Method for realizing point positioning in an extended reality technology scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111633601.2A CN114373016A (en) Method for realizing point positioning in an extended reality technology scene

Publications (1)

Publication Number Publication Date
CN114373016A 2022-04-19

Family

ID=81143006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111633601.2A Pending CN114373016A (en) Method for realizing point positioning in an extended reality technology scene

Country Status (1)

Country Link
CN (1) CN114373016A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311419A (en) * 2022-10-11 2022-11-08 杭州钛鑫科技有限公司 Three-dimensional scene dynamic configuration method based on coordinate mapping dimension reduction and parametric configuration


Similar Documents

Publication Publication Date Title
US20240054735A1 (en) Real-time shared augmented reality experience
EP2691938B1 (en) Selective hand occlusion over virtual projections onto physical surfaces using skeletal tracking
CN1307510C (en) Single camera system for gesture-based input and target indication
WO2020078250A1 (en) Data processing method and device for virtual scene
JP2012168646A (en) Information processing apparatus, information sharing method, program, and terminal device
JP2022008987A (en) Tracking of position and orientation of virtual controller in virtual reality system
US11722845B2 (en) System for combining sensor data to determine a relative position
US20180113596A1 (en) Interface for positioning an object in three-dimensional graphical space
KR20120010041A (en) Method and system for authoring of augmented reality contents on mobile terminal environment
CN114373016A (en) Method for realizing point positioning in an extended reality technology scene
CN107229055B (en) Mobile equipment positioning method and mobile equipment positioning device
CN113724309A (en) Image generation method, device, equipment and storage medium
CN113190113A (en) Ultra-wideband positioning virtual reality system and positioning method for realizing position and direction
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
JP2018132319A (en) Information processing apparatus, control method of information processing apparatus, computer program, and memory medium
WO2022176450A1 (en) Information processing device, information processing method, and program
CN110471577B (en) 360-degree omnibearing virtual touch control method, system, platform and storage medium
CN108261761B (en) Space positioning method and device and computer readable storage medium
CN111476873A (en) Mobile phone virtual doodling method based on augmented reality
CN111524240A (en) Scene switching method and device and augmented reality equipment
EP4047458A1 (en) Three-dimensional graphics generation system
CN114388056B (en) AR-based protein section generation method
CN110554784B (en) Input method, input device, display device and storage medium
US20230089061A1 (en) Space recognition system, space recognition method, information terminal, and server apparatus
WO2023179264A1 (en) Air input method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination