CN110211239B - Augmented reality method, apparatus, device and medium based on label-free recognition

Augmented reality method, apparatus, device and medium based on label-free recognition

Info

Publication number
CN110211239B
Authority
CN
China
Prior art keywords
information
real
augmented reality
imu
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910466330.2A
Other languages
Chinese (zh)
Other versions
CN110211239A (en)
Inventor
嵇望
陈默
张羽
王哲
赵强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yuanchuan Xinye Technology Co ltd
Original Assignee
Hangzhou Yuanchuan Xinye Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yuanchuan Xinye Technology Co ltd filed Critical Hangzhou Yuanchuan Xinye Technology Co ltd
Priority to CN201910466330.2A priority Critical patent/CN110211239B/en
Publication of CN110211239A publication Critical patent/CN110211239A/en
Application granted granted Critical
Publication of CN110211239B publication Critical patent/CN110211239B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Abstract

The invention discloses an augmented reality method based on label-free recognition, relating to the technical field of indoor augmented reality. The method overcomes the limitations of marker-based augmented reality, realizes label-free augmented reality, and performs stably under large lighting changes and rapid motion. The method comprises the following steps: acquiring visual information and IMU information; performing calibration against the real-time video stream information; tracking and positioning with the calibrated IMU information to obtain the real-time camera pose; and calculating the real-time display viewing angle of the virtual 3D model and drawing the real-time virtual 3D model at that viewing angle on the virtual object placement plane for combined display. The invention also discloses an augmented reality apparatus, an electronic device and a computer storage medium based on label-free recognition. By fusing IMU information with visual information, the invention achieves a better augmented reality effect.

Description

Augmented reality method, apparatus, device and medium based on label-free recognition
Technical Field
The invention relates to the field of indoor augmented reality, in particular to an augmented reality method, device, equipment and medium based on unmarked identification.
Background
Augmented Reality (AR) is a technology that seamlessly integrates real-world information with virtual-world information. Entity information that would otherwise be difficult to experience within a given region of time and space in the real world is simulated and superimposed by computers and related technologies, so that virtual information is applied to the real world, perceived by the human senses, and a sensory experience beyond reality is achieved. In short, augmented reality technology overlays the real environment and virtual objects onto the same picture or space in real time so that they coexist.
At present, artificial-marker methods are commonly used to obtain the camera information, including position information and posture information, required for virtual-real registration in an AR system. Although camera information acquired by an artificial-marker method can be conveniently identified and tracked, the method has the drawback of altering the authenticity of the scene, so the augmented reality effect appears unnatural. The currently popular AR application development framework ARToolKit is a classic representative of such approaches. To address the problem that the marked scene must always remain within the user's field of view, researchers have turned to a new approach that completes virtual-real registration using natural features in the scene, such as planes, feature points and line segments.
However, as AR technology is applied more widely, augmented reality must be constructed in increasingly complex unknown environments and under changing viewing conditions. To increase the flexibility and practicality of AR technology, improvements in visual positioning technology are needed. In the real-time monocular simultaneous localization and mapping (SLAM) work proposed by Dr. Davison, a full-state extended Kalman filter tracks a small number of Harris corners to update the camera pose frame by frame in an indoor environment, an important contribution to the field of visual positioning. However, the algorithm performs unstably under large lighting changes and fast motion, mainly because a monocular vision system cannot recover absolute scale, which limits the algorithm's application in virtual and augmented reality.
Disclosure of Invention
In order to overcome the defects of the prior art, an object of the present invention is to provide an augmented reality method based on label-free recognition, which overcomes the limitations of marker-based augmented reality, implements the function of label-free augmented reality, and performs stably under large lighting changes and fast motion.
One of the purposes of the invention is realized by adopting the following technical scheme:
an augmented reality method based on label-free recognition comprises the following steps:
receiving visual information and IMU information, and preprocessing the visual information and the IMU information;
acquiring real-time video stream information of a camera, and performing angular velocity offset calibration and velocity vector, scale and gravity vector calibration in combination with the preprocessed IMU information to obtain calibrated IMU information comprising angular velocity calibration information, velocity vector calibration information, absolute scale calibration information and gravity vector calibration information;
tracking and positioning the calibrated IMU information to obtain the real-time pose of the camera;
calculating a normal vector of the virtual object placement plane according to the gravity vector calibration information;
and calculating the real-time display viewing angle of the virtual 3D model according to the real-time camera pose, and finally drawing the real-time virtual 3D model at the real-time display viewing angle on the virtual object placement plane for combined display.
Further, the preprocessing of the visual information is: acquiring real-time three-dimensional scene information by using an RGB camera, and extracting feature points in the image; the preprocessing of the IMU information is: acquiring the real-time IMU information of the camera, pre-integrating the IMU information acquired in the interval between two visual frames, and estimating the position information, velocity information and rotation information of the camera.
Further, the method for acquiring the real-time video stream information of the camera comprises the following steps: running the visual SLAM algorithm on the real-time video stream over a number of time periods, while recording the video stream information, including position information, velocity information and rotation information, within those periods;
the method for performing angular velocity offset calibration in combination with the preprocessed IMU information comprises: performing hand-eye calibration between the rotation information over those time periods obtained by the visual SLAM algorithm and the rotation information in the IMU information, so as to correct the rotation offset;
the method for performing velocity vector, scale and gravity vector calibration in combination with the preprocessed IMU information comprises: setting the velocity information and position information obtained by the visual SLAM algorithm over those time periods equal to the velocity information and position information of the IMU information, thereby complementing the velocity, scale and gravity information that the visual SLAM algorithm lacks.
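To make this alignment step concrete, it can be written as a small linear system. The formulation below is a hedged sketch in the style of standard visual-inertial initialization and is consistent with the description above; the exact equations are not given by the patent.

```latex
% Sketch (assumed form): for consecutive keyframes i and i+1 separated by \Delta t_i,
% with up-to-scale visual positions \bar{p}_i, rotations R_i from visual SLAM
% (after the gyroscope-bias correction), and IMU pre-integrated increments
% \Delta p_{i,i+1}, \Delta v_{i,i+1}:
s\,\bar{p}_{i+1} = s\,\bar{p}_i + v_i\,\Delta t_i - \tfrac{1}{2}\,g\,\Delta t_i^{2} + R_i\,\Delta p_{i,i+1},
\qquad
v_{i+1} = v_i - g\,\Delta t_i + R_i\,\Delta v_{i,i+1}.
% Stacking these constraints over the recorded time periods yields an
% over-determined linear system in the scale s, the gravity vector g and the
% frame velocities v_i, solved by least squares; this supplies exactly the
% velocity, absolute-scale and gravity information that monocular visual SLAM lacks.
```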
Further, the specific steps of tracking and positioning the calibrated IMU information by using the visual SLAM algorithm are as follows:
tracking the calibrated IMU information and the preprocessed visual information, calculating the pose of a camera, tracking a local map, generating a key frame according to a set threshold value, and adding the key frame into a key frame sequence;
checking the key frame sequence, processing newly generated key frames, eliminating redundant matching points, generating new matching points, optimizing a local map and eliminating redundant key frames;
and checking the key frame queue again, performing loop detection by using a pre-established visual dictionary, and judging whether to perform closed-loop correction and global map optimization according to a loop detection result.
Further, when the loop detection result shows that loop occurs, closed-loop correction and global map optimization are performed.
Another object of the present invention is to provide an augmented reality apparatus based on label-free recognition, which realizes an augmented reality function by fusing IMU information and visual information.
The second purpose of the invention is realized by adopting the following technical scheme:
An augmented reality apparatus based on label-free recognition, which comprises:
the information acquisition module is used for acquiring visual information and IMU information and preprocessing the visual information and the IMU information;
the vision inertial navigation joint initialization module is used for performing angular velocity offset calibration and velocity vector, absolute scale and gravity vector calibration based on the preprocessed IMU information;
the positioning and tracking module tracks the information calibrated by the vision inertial navigation joint initialization module and acquires the real-time pose of the camera;
and the synthetic display module is used for respectively calculating the display viewing angle for displaying the virtual 3D model and the normal vector of the virtual object placement plane according to the real-time camera pose and the gravity vector calibration, and drawing the virtual 3D model at the display viewing angle for combined display.
Further, the positioning and tracking module comprises a tracking module, a local map module and a loop detection module, wherein the tracking module is used for calculating the pose of the camera, tracking the local map and constructing a key frame sequence; the local map module is used for optimizing the key frame sequence and the local map; and the loop detection module is used for carrying out loop detection on the key frame sequence, and carrying out closed-loop correction and global map optimization.
It is a further object of the present invention to provide an electronic device for carrying out the above method, comprising a processor, a storage medium, and a computer program stored in the storage medium, wherein the computer program, when executed by the processor, implements the above augmented reality method based on label-free recognition.
It is a fourth object of the present invention to provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above augmented reality method based on label-free recognition.
Compared with the prior art, the invention has the beneficial effects that:
the camera pose recognition tracking method based on the visual information has the advantages that the camera pose recognition tracking is carried out by adopting an identification-free method, a scene does not need to be arranged, the reality of the scene is not changed, IMU information is fused on the basis of the visual information, the defect that the traditional AR method is unstable under the conditions of large light change and quick movement is overcome, the absolute scale of a real scene can be directly calculated by adopting a visual and inertial navigation tight coupling mode, the normal vector of a virtual object placing plane is calculated according to gravity vector calibration, the accurate construction of a virtual 3D model is realized, and the better augmented reality effect is achieved.
Drawings
FIG. 1 is a flow chart of an augmented reality method based on label-free recognition according to the present invention;
fig. 2 is a block diagram of the augmented reality device based on label-free recognition according to embodiment 2;
fig. 3 is a block diagram of the electronic apparatus of embodiment 3.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings; the description is given by way of illustration and not of limitation. Various embodiments may be combined with each other to form further embodiments not described below.
Example 1
This embodiment provides an augmented reality method based on label-free recognition, which performs positioning and tracking of the camera pose and estimation of the virtual scene plane by fusing visual SLAM and IMU data, and obtains a better augmented reality effect without changing the original scene.
Based on the above principle, the augmented reality method based on label-free recognition is introduced. As shown in fig. 1, it specifically comprises the following steps:
s1: visual information and IMU information are received, and the visual information and the IMU information are preprocessed;
in S1, the visual information is acquired from a visual system, the IMU information is acquired from an inertial navigation unit, and the preprocessing of the visual information is as follows: acquiring real-time three-dimensional scene information by using an RGB (red, green and blue) camera, and extracting feature points in an image; the IMU information comprises angular velocity and acceleration, namely real-time motion information of the camera, pre-scoring is carried out on IMU information obtained in a time period of an interval between two visual frames, and position information, velocity information and rotation information of the camera are estimated.
S2: acquiring real-time video stream information, and performing angular velocity offset calibration and velocity vector, scale and gravity vector calibration in combination with the preprocessed IMU information to obtain calibrated IMU information comprising angular velocity calibration information, velocity vector calibration information, absolute scale calibration information and gravity vector calibration information;
in S2, the real-time video stream obtained by the camera runs 10 frames according to a visual SLAM algorithm, and SLAM data of the 10 frames are recorded, wherein the SLAM data comprises position information, speed information and rotation information;
performing angular speed offset calibration and speed, scale and gravity calibration based on the IMU information preprocessed in the S1; the method specifically comprises the following steps: performing hand-eye calibration on the rotation information in the SLAM data and the rotation information in the IMU information to correct the rotation offset to obtain angular velocity offset calibration information; and (3) enabling the speed information and the position information in the SLAM data to be equal to the speed information and the position information in the IMU information, and complementing the speed, the scale and the weight information which are lacked in the visual SLAM data to obtain speed vector calibration information, absolute scale calibration information and weight vector calibration information.
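As a rough illustration of the rotation-offset (hand-eye style) part of S2, the sketch below approximates a constant gyroscope bias from the mismatch between the SLAM relative rotations and the IMU pre-integrated rotations over the recorded frames. This is a deliberately simplified assumption; the patent only states that hand-eye calibration is used to correct the rotation offset.

```python
# Hedged sketch: constant gyroscope bias from rotation mismatches (simplified).
import cv2
import numpy as np

def estimate_gyro_bias(slam_rotations, imu_rotations, frame_intervals):
    """slam_rotations / imu_rotations: lists of 3x3 relative rotations for the
    same frame pairs, already expressed in the same body frame;
    frame_intervals: duration of each frame pair in seconds."""
    residuals = []
    for R_slam, R_imu, dt in zip(slam_rotations, imu_rotations, frame_intervals):
        R_err = R_imu.T @ R_slam            # rotation left unexplained by the IMU
        rotvec, _ = cv2.Rodrigues(R_err)    # axis-angle of the mismatch
        residuals.append(rotvec.ravel() / dt)
    return np.mean(residuals, axis=0)       # bias in rad/s, subtracted from later
                                            # gyroscope samples before pre-integration
```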
S3: tracking and positioning the calibrated IMU information by using a visual SLAM algorithm to obtain the real-time pose of the camera;
in S3, the calibrated IMU information includes angular velocity offset calibration information, velocity vector calibration information, absolute scale calibration information, and weight vector calibration information.
The specific steps of tracking and positioning the calibrated IMU information are as follows:
The tracking thread is performed using the calibrated IMU information: the feature points in the visual information and the IMU information (acceleration and angular velocity) are received, the camera pose and the local map are calculated, and whether a key frame is generated is judged according to the set threshold; the generated key frame is added to the key frame sequence;
checking the key frame sequence, processing newly generated key frames, eliminating redundant matching points, generating new matching points, optimizing a local map, and eliminating redundant key frames;
and checking the key frame queue again, performing loop detection by using a pre-established visual dictionary, and judging whether to perform closed-loop correction and global map optimization according to a loop detection result.
And when the loop detection result shows that loop occurs, closed-loop correction and global map optimization are performed to improve the accuracy and robustness of camera pose tracking.
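A minimal sketch of the loop check is given below. It assumes a DBoW-style visual dictionary object exposing `transform()` and `score()` methods, and keyframes carrying their bag-of-words vectors; these names are illustrative and not an API defined by the patent.

```python
# Hedged sketch of loop detection against a pre-built visual dictionary.
def detect_loop(vocabulary, current_kf, keyframe_db, min_score=0.05, skip_recent=30):
    """Return the best loop-candidate keyframe, or None if no loop is found."""
    query_bow = vocabulary.transform(current_kf.descriptors)
    best_kf, best_score = None, min_score
    for kf in keyframe_db[:-skip_recent]:       # ignore the most recent keyframes
        score = vocabulary.score(query_bow, kf.bow)
        if score > best_score:
            best_kf, best_score = kf, score
    return best_kf   # a non-None result triggers closed-loop correction
                     # and global map optimization
```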
S4: calculating a normal vector of the virtual object placement plane from the gravity vector calibration information by using a RANSAC algorithm.
The virtual object placement plane is the plane on which a virtual object is placed; it is obtained from the gravity vector calibration information (i.e., the direction of gravity) in S2.
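A minimal sketch of S4 follows, assuming that because the normal is fixed to the calibrated gravity direction, RANSAC only has to sample the plane offset; the map points, thresholds and iteration count are illustrative.

```python
# Hedged sketch: gravity-constrained RANSAC fit of the placement plane.
import numpy as np

def fit_placement_plane(map_points, gravity, n_iters=200, inlier_thresh=0.02):
    """map_points: Nx3 sparse map points; gravity: 3-vector from the S2 calibration.
    Returns (normal, d) of the plane n.x + d = 0 with the largest inlier set."""
    normal = -gravity / np.linalg.norm(gravity)   # plane normal opposes gravity
    heights = map_points @ normal                 # signed distance along the normal
    best_d, best_inliers = 0.0, -1
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        d = -heights[rng.integers(len(map_points))]   # hypothesis from one sample
        inliers = int(np.sum(np.abs(heights + d) < inlier_thresh))
        if inliers > best_inliers:
            best_d, best_inliers = d, inliers
    return normal, best_d
```

Fixing the normal keeps the search one-dimensional, which suits the sparse point clouds produced by monocular SLAM.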
S5: calculating the real-time display viewing angle of the virtual 3D model according to the real-time camera pose, and finally drawing, using OpenGL rendering, the real-time virtual 3D model at the real-time display viewing angle on the virtual object placement plane for combined display.
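To illustrate S5, the sketch below converts the tracked camera pose into an OpenGL-style view matrix and builds a model matrix that anchors the virtual 3D model on the placement plane. The axis conventions and function names are assumptions; the patent only states that the display viewing angle is computed from the real-time pose and that OpenGL performs the drawing.

```python
# Hedged sketch: camera pose -> OpenGL view matrix, plane -> model matrix.
import numpy as np

def view_matrix_from_pose(R_wc, t_wc):
    """R_wc, t_wc: camera-to-world rotation/translation from the tracker.
    Returns a world-to-camera (view) matrix, with the usual flip of the y and z
    axes between a computer-vision camera frame and the OpenGL camera frame."""
    view = np.eye(4)
    view[:3, :3] = R_wc.T
    view[:3, 3] = -R_wc.T @ t_wc
    cv_to_gl = np.diag([1.0, -1.0, -1.0, 1.0])
    return cv_to_gl @ view

def model_matrix_on_plane(anchor_point, plane_normal):
    """Place the virtual model at anchor_point with its up (y) axis along the
    plane normal computed in S4; the rotation about the normal is arbitrary."""
    y = plane_normal / np.linalg.norm(plane_normal)
    x = np.cross(y, [0.0, 0.0, 1.0])
    if np.linalg.norm(x) < 1e-6:                  # normal parallel to world z
        x = np.array([1.0, 0.0, 0.0])
    x = x / np.linalg.norm(x)
    z = np.cross(x, y)
    model = np.eye(4)
    model[:3, :3] = np.column_stack([x, y, z])
    model[:3, 3] = anchor_point
    return model
```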
The method for judging whether to generate the key frame according to the set threshold comprises the following steps: when at least one of the three conditions for generating the key frame is met, generating the key frame; otherwise, no key frame is generated.
The three conditions for generating a key frame are: 1. the local mapping thread is idle; 2. at least 20 frames have passed since the last key frame; 3. the current frame has extracted more than 50 feature points, and fewer than 90% of them are co-visible with the previous three key frames.
Accordingly, the set threshold can be understood as a feature point count of 50 together with a 90% co-visible feature point ratio with respect to the previous three key frames.
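Written out as a predicate, the keyframe test above amounts to the following (the field names are illustrative, not taken from the patent):

```python
# Hedged transcription of the three keyframe conditions (any one suffices).
def should_insert_keyframe(state):
    cond_idle = state.local_mapping_idle                          # condition 1
    cond_gap = state.frames_since_last_keyframe >= 20             # condition 2
    cond_view = (state.num_feature_points > 50 and                # condition 3
                 state.covisibility_with_last3_keyframes < 0.90)
    return cond_idle or cond_gap or cond_view
```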
Example 2
Embodiment 2 discloses an apparatus corresponding to the label-free recognition-based augmented reality method of embodiment 1. It is a virtual device structure of the above embodiment and, as shown in fig. 2, comprises:
an information obtaining module 210, configured to obtain visual information and IMU information, and preprocess the visual information and IMU information;
the vision inertial navigation joint initialization module 220 is used for performing angular velocity offset calibration and velocity vector, absolute scale and gravity vector calibration based on the preprocessed IMU information;
the positioning tracking module 230 acquires the real-time pose of the camera by using the information calibrated by the vision inertial navigation combined initialization module;
and the synthetic display module 240 calculates a display view angle for displaying the virtual 3D model and a normal vector of the virtual object placement plane according to the real-time pose and gravity vector calibration of the camera, and draws the virtual 3D model under the display view angle by using an OpenGL algorithm for synthetic display.
Preferably, the positioning and tracking module 230 includes a tracking module 231, a local map module 232 and a loop detection module 233, the tracking module 231 is used for calculating the pose of the camera and tracking the local map, and constructing a sequence of key frames; a local map module 232, configured to optimize the sequence of key frames and the local map; and a loop detection module 233, configured to perform loop detection on the key frame sequence, and perform closed-loop correction and global map optimization.
Example 3
Fig. 3 is a schematic structural diagram of an electronic device according to embodiment 3 of the present invention, as shown in fig. 3, the electronic device includes a processor 310, a memory 320, an input device 330, and an output device 340; the number of processors 310 in the computer device may be one or more, and one processor 310 is taken as an example in fig. 3; the processor 310, the memory 320, the input device 330 and the output device 340 in the electronic apparatus may be connected by a bus or other means, and the connection by the bus is exemplified in fig. 3.
The memory 320 is a computer-readable storage medium, which can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the augmented reality method based on the markerless recognition in the embodiment of the present invention (for example, the information acquisition module 210, the visual inertial navigation joint initialization module 220, the positioning and tracking module 230, and the synthesis and display module 240 in the augmented reality device based on the markerless recognition). The processor 310 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the memory 320, that is, implements the augmented reality method based on label-free recognition of embodiment 1.
The memory 320 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 320 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 320 may further include memory located remotely from the processor 310, which may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 330 may be used to receive visual information, IMU information, and the like. The output device 340 is used for outputting the virtual 3D model.
Example 4
Embodiment 4 of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are used for an augmented reality method based on label-free recognition, the method including:
receiving visual information and IMU information, and preprocessing the visual information and the IMU information;
acquiring real-time video stream information, and performing angular velocity offset calibration and velocity vector, scale and gravity vector calibration in combination with the preprocessed IMU information to obtain calibrated IMU information comprising angular velocity calibration information, velocity vector calibration information, absolute scale calibration information and gravity vector calibration information;
tracking and positioning the calibrated IMU information by using a visual SLAM algorithm to obtain the real-time pose of the camera;
calculating a normal vector of the virtual object placement plane from the gravity vector calibration information by using a RANSAC algorithm;
and calculating the real-time display viewing angle of the virtual 3D model according to the real-time camera pose, and finally drawing, using the OpenGL rendering algorithm, the real-time virtual 3D model at the real-time display viewing angle on the virtual object placement plane for combined display.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the augmented reality method based on label-free recognition provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention can be implemented by software together with necessary general-purpose hardware, and can of course also be implemented by hardware alone, although the former is the preferred embodiment in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and which includes instructions for causing an electronic device (which may be a mobile phone, a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the above embodiment of the augmented reality apparatus based on label-free recognition, the included units and modules are divided only according to functional logic, and the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Various other modifications and changes may be made by those skilled in the art based on the above-described technical solutions and concepts, and all such modifications and changes should fall within the scope of the claims of the present invention.

Claims (9)

1. An augmented reality method based on label-free recognition is characterized by comprising the following steps:
receiving visual information and IMU information, and preprocessing the visual information and the IMU information;
acquiring real-time video stream information of a camera, and performing angular velocity offset calibration and velocity vector, scale and gravity vector calibration by combining preprocessed IMU information to obtain calibrated IMU information comprising angular velocity calibration information, velocity vector calibration information, absolute scale calibration information and gravity vector calibration information;
tracking and positioning the calibrated IMU information to obtain the real-time pose of the camera;
calculating a normal vector of a virtual object placing plane according to the gravity vector calibration information;
and calculating a real-time display visual angle of the virtual 3D model according to the real-time pose of the camera, and finally drawing the real-time virtual 3D model under the real-time display visual angle on the virtual object placing plane for combined display.
2. The augmented reality method based on label-free recognition of claim 1, wherein the preprocessing of the visual information is: acquiring real-time three-dimensional scene information by using an RGB camera, and extracting feature points in an image; the preprocessing of the IMU information is: after the real-time IMU information of the camera is acquired, the IMU information acquired in the time between two visual frames is pre-integrated, and the position information, the speed information and the rotation information of the camera are estimated.
3. The augmented reality method based on label-free recognition according to claim 1 or 2, wherein the method for acquiring the real-time video stream information of the camera is as follows: running the real-time video stream for a plurality of times according to a visual SLAM algorithm, and simultaneously recording video stream information including position information, speed information and rotation information in the plurality of times;
the method for calibrating the angular velocity bias by combining the preprocessed IMU information comprises the following steps: performing hand-eye calibration on the rotation information in a plurality of time periods and the rotation information in IMU information obtained by the vision SLAM algorithm to correct the rotation offset;
the method for calibrating the velocity vector, the scale and the gravity vector by combining the preprocessed IMU information comprises the following steps: and enabling the speed information and the position information obtained by the visual SLAM algorithm in a plurality of time periods to be equal to the speed information and the position information of the IMU information, and complementing the speed, the scale and the gravity information which are lacked by the visual SLAM algorithm.
4. The augmented reality method based on label-free recognition of claim 3, wherein the specific steps of tracking and positioning the calibrated IMU information by using the visual SLAM algorithm are as follows:
tracking the calibrated IMU information and the preprocessed visual information, calculating the pose of a camera, tracking a local map, generating a key frame according to a set threshold value, and adding the key frame into a key frame sequence;
checking the key frame sequence, processing newly generated key frames, eliminating redundant matching points, generating new matching points, optimizing a local map and eliminating redundant key frames;
and checking the key frame queue again, performing loop detection by using a pre-established visual dictionary, and judging whether to perform closed-loop correction and global map optimization according to a loop detection result.
5. The label-free recognition-based augmented reality method of claim 4, wherein when the loop detection result indicates that a loop occurs, closed-loop correction and global map optimization are performed.
6. An augmented reality device based on markerless recognition, comprising:
the information acquisition module is used for acquiring visual information and IMU information and preprocessing the visual information and the IMU information;
the vision inertial navigation joint initialization module is used for carrying out angular velocity offset calibration and velocity vector, absolute scale and gravity vector calibration based on the preprocessed IMU information;
the positioning and tracking module tracks the information calibrated by the vision inertial navigation joint initialization module and acquires the real-time pose of the camera;
and the synthetic display module is used for respectively calculating a display visual angle for displaying the virtual 3D model and a normal vector of a virtual article placing plane according to the real-time pose and the gravity vector calibration of the camera, and drawing the virtual 3D model under the display visual angle for synthetic display.
7. The augmented reality device of claim 6, wherein the localization tracking module comprises a tracking module, a local map module and a loop detection module, the tracking module is used for calculating camera pose and tracking local map, and constructing a sequence of key frames; the local map module is used for optimizing the key frame sequence and the local map; and the loop detection module is used for carrying out loop detection on the key frame sequence, and carrying out closed-loop correction and global map optimization.
8. An electronic device comprising a processor, a storage medium, and a computer program stored in the storage medium, wherein the computer program, when executed by the processor, implements the marker-free recognition based augmented reality method of any one of claims 1 to 5.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method for augmented reality based on markerless recognition of any of claims 1 to 5.
CN201910466330.2A 2019-05-30 2019-05-30 Augmented reality method, apparatus, device and medium based on label-free recognition Active CN110211239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910466330.2A CN110211239B (en) 2019-05-30 2019-05-30 Augmented reality method, apparatus, device and medium based on label-free recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910466330.2A CN110211239B (en) 2019-05-30 2019-05-30 Augmented reality method, apparatus, device and medium based on label-free recognition

Publications (2)

Publication Number Publication Date
CN110211239A CN110211239A (en) 2019-09-06
CN110211239B (en) 2022-11-08

Family

ID=67789677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910466330.2A Active CN110211239B (en) 2019-05-30 2019-05-30 Augmented reality method, apparatus, device and medium based on label-free recognition

Country Status (1)

Country Link
CN (1) CN110211239B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110672094B (en) * 2019-10-09 2021-04-06 北京航空航天大学 Distributed POS multi-node multi-parameter instant synchronous calibration method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN106408515A (en) * 2016-08-31 2017-02-15 郑州捷安高科股份有限公司 Augmented reality-based vision synthesis system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150103183A1 (en) * 2013-10-10 2015-04-16 Nvidia Corporation Method and apparatus for device orientation tracking using a visual gyroscope

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN106408515A (en) * 2016-08-31 2017-02-15 郑州捷安高科股份有限公司 Augmented reality-based vision synthesis system

Also Published As

Publication number Publication date
CN110211239A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
Sahu et al. Artificial intelligence (AI) in augmented reality (AR)-assisted manufacturing applications: a review
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US11238606B2 (en) Method and system for performing simultaneous localization and mapping using convolutional image transformation
Tanskanen et al. Live metric 3D reconstruction on mobile phones
Zollmann et al. Augmented reality for construction site monitoring and documentation
CN110246147A (en) Vision inertia odometer method, vision inertia mileage counter device and mobile device
US20130335529A1 (en) Camera pose estimation apparatus and method for augmented reality imaging
CN108447097A (en) Depth camera scaling method, device, electronic equipment and storage medium
CN110660098B (en) Positioning method and device based on monocular vision
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
KR20200014858A (en) Location measurement and simultaneous mapping method and apparatus
WO2021093679A1 (en) Visual positioning method and device
CN115641401A (en) Construction method and related device of three-dimensional live-action model
Fang et al. Multi-sensor based real-time 6-DoF pose tracking for wearable augmented reality
CN115239888B (en) Method, device, electronic equipment and medium for reconstructing three-dimensional face image
Arndt et al. From points to planes-adding planar constraints to monocular SLAM factor graphs
KR20180035359A (en) Three-Dimensional Space Modeling and Data Lightening Method using the Plane Information
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN110211239B (en) Augmented reality method, apparatus, device and medium based on label-free recognition
CN112731503B (en) Pose estimation method and system based on front end tight coupling
US10843068B2 (en) 6DoF inside-out tracking game controller
Zhu et al. PairCon-SLAM: Distributed, online, and real-time RGBD-SLAM in large scenarios
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
WO2023184278A1 (en) Method for semantic map building, server, terminal device and storage medium
Laskar et al. Robust loop closures for scene reconstruction by combining odometry and visual correspondences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 23011, Yuejiang commercial center, No. 857, Xincheng Road, Puyan street, Binjiang District, Hangzhou, Zhejiang 311611

Applicant after: Hangzhou Yuanchuan Xinye Technology Co.,Ltd.

Address before: 23 / F, World Trade Center, 857 Xincheng Road, Binjiang District, Hangzhou City, Zhejiang Province, 310051

Applicant before: Hangzhou Yuanchuan New Technology Co.,Ltd.

GR01 Patent grant