CN114092526B - Augmented reality method and device based on object 3D pose visual tracking - Google Patents

Augmented reality method and device based on object 3D pose visual tracking

Info

Publication number
CN114092526B
CN114092526B (application number CN202210073191.9A)
Authority
CN
China
Prior art keywords
pose
energy function
camera
coordinate system
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210073191.9A
Other languages
Chinese (zh)
Other versions
CN114092526A (en)
Inventor
李特
王彬
顾建军
曹昕
李佳宸
秦学英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202210073191.9A priority Critical patent/CN114092526B/en
Publication of CN114092526A publication Critical patent/CN114092526A/en
Application granted granted Critical
Publication of CN114092526B publication Critical patent/CN114092526B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T 7/215: Motion-based segmentation
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an augmented reality method and device based on object 3D pose visual tracking. The method comprises the following steps: acquiring a first energy function at a first camera view angle and a second energy function at a second camera view angle, and calculating the pose transformation increment between the current frame and the previous frame of a target object in the object center coordinate system; updating the pose of the target object at the first camera view angle according to the pose transformation increment to obtain a first pose; sending the pose transformation increment to the second processing unit, so that the second processing unit updates the pose of the target object at the second camera view angle according to the pose transformation increment to obtain a second pose; and sending the first pose to the display device, so that the display device acquires a third pose of itself in the world coordinate system and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system.

Description

Augmented reality method and device based on object 3D pose visual tracking
Technical Field
The application relates to the technical field of augmented reality, in particular to an augmented reality method and device based on object 3D pose visual tracking.
Background
Three-dimensional object pose tracking is one of the core augmented reality (AR) techniques: it solves for the pose of an object in real time by estimating the relative positional relationship between the camera and the three-dimensional object. Three-dimensional object tracking has wide application, for example in AR games, in AR navigation with mobile devices in environments such as shopping malls, and in electronic instruction manuals for instrument maintenance, where the instrument is tracked so that the steps to perform or the parts to be handled are rendered on the screen in real time. Real-time, high-precision three-dimensional object pose tracking has long been a goal of researchers.
However, in the process of implementing the present invention, the inventors found that the following problems exist in the prior art:
on the one hand, existing tracking methods lack precision mainly along the camera's line-of-sight direction and thus cannot reach the accuracy required by augmented reality; alternatively, depth information is used as an aid, but tracking methods that use depth information run at a very low frame rate, because a large amount of extra time is needed to register the depth information with the image information, so tracking efficiency is low. On the other hand, many display devices are expected to integrate both object tracking and presentation in one apparatus, which not only places high demands on the configuration of the apparatus but also increases its complexity.
Disclosure of Invention
The embodiment of the application aims to provide an augmented reality method and device based on object 3D pose visual tracking, so as to solve the technical problems of low tracking efficiency, high device configuration requirement and high device complexity in the related technology.
According to a first aspect of the embodiments of the present application, an augmented reality method based on object 3D pose visual tracking is provided, and applied to a first processing unit, the method includes:
acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle;
calculating pose transformation increment of the current frame and the previous frame of the target object under an object center coordinate system according to the first energy function and the second energy function;
updating the pose of the target object under the first camera view angle according to the pose transformation increment to obtain a first pose;
sending the pose transformation increment to a second processing unit so that the second processing unit updates the pose of the target object at a second camera view angle according to the pose transformation increment to obtain a second pose;
and sending the first pose to display equipment so that the display equipment can acquire a third pose of the display equipment under a world coordinate system and calculate a fifth pose of the current frame of the target object under the coordinate system of the display equipment according to the first pose, the third pose and the fourth pose, wherein the fourth pose is the pose of the first camera under the world coordinate system.
Further, calculating the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function includes:
calculating a comprehensive energy function for dual-view joint optimization according to the first energy function and the second energy function;
minimizing the comprehensive energy function to obtain a preliminary pose transformation increment;
updating a sixth pose according to the preliminary pose transformation increment, wherein the sixth pose is the pose of the previous frame of the target object in the camera coordinate system of the first camera;
updating the first energy function according to the first image at the first camera view angle and the sixth pose;
sending the preliminary pose transformation increment to the second camera, so that the second camera updates a seventh pose according to the preliminary pose transformation increment, updates the second energy function according to the second image at the second camera view angle and the seventh pose, and sends the updated second energy function to the first camera, wherein the seventh pose is the pose of the previous frame of the target object in the camera coordinate system of the second camera;
receiving the updated second energy function sent by the second camera;
and repeating, a preset number of times, the steps from calculating the comprehensive energy function for dual-view joint optimization according to the first energy function and the second energy function through receiving the updated second energy function sent by the second camera, and obtaining the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system from the preliminary pose transformation increments obtained in each repetition.
According to a second aspect of the embodiments of the present application, an augmented reality apparatus based on object 3D pose visual tracking is provided, and is applied to a first processing unit, and includes:
the first acquisition module is used for acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle;
the first calculation module is used for calculating pose transformation increments of a current frame and a previous frame of the target object under an object center coordinate system according to the first energy function and the second energy function;
the first updating module is used for updating the pose of the target object under the first camera view angle according to the pose transformation increment to obtain a first pose;
the first sending module is used for sending the pose transformation increment to a second processing unit so that the second processing unit can update the pose of the target object under a second camera view angle according to the pose transformation increment to obtain a second pose;
And the second sending module is used for sending the first pose to display equipment so that the display equipment acquires a third pose of the display equipment under a world coordinate system and calculates a fifth pose of the current frame of the target object under the coordinate system of the display equipment according to the first pose, the third pose and the fourth pose, wherein the fourth pose is the pose of the first camera under the world coordinate system.
According to a third aspect of the embodiments of the present application, an augmented reality method based on object 3D pose visual tracking is provided, and applied to a second processing unit, the method includes:
sending a second energy function at a second camera view angle to a first processing unit, so that the first processing unit acquires a first energy function at a first camera view angle and the second energy function at the second camera view angle, calculates the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function, updates the pose of the target object at the first camera view angle according to the pose transformation increment to obtain a first pose, and sends the first pose to a display device, so that the display device acquires a third pose of itself in the world coordinate system and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system;
Receiving the pose transformation increment sent by the first processing unit;
and updating the pose of the target object under the second camera view angle according to the pose transformation increment to obtain a second pose.
According to a fourth aspect of the embodiments of the present application, there is provided an augmented reality apparatus based on object 3D pose visual tracking, applied to a second processing unit, including:
a third sending module, configured to send the second energy function at the second camera view angle to the first processing unit, so that the first processing unit acquires the first energy function at the first camera view angle and the second energy function at the second camera view angle, calculates the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function, updates the pose of the target object at the first camera view angle according to the pose transformation increment to obtain the first pose, and sends the first pose to the display device, so that the display device acquires a third pose of itself in the world coordinate system and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system;
The first receiving module is used for receiving the pose transformation increment sent by the first processing unit;
and the second updating module is used for updating the pose of the target object under the second camera view angle according to the pose transformation increment to obtain a second pose.
According to a fifth aspect of the embodiments of the present application, an augmented reality method based on object 3D pose visual tracking is provided, which is applied to a display device, and includes:
receiving a first pose sent by a first processing unit, wherein the first pose is obtained by acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle by the first processing unit, calculating pose transformation increments of a current frame and a previous frame of a target object under an object center coordinate system according to the first energy function and the second energy function, and updating the pose of the target object under the first camera view angle according to the pose transformation increments;
and acquiring a third pose of the display device itself in the world coordinate system, and calculating a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system.
Further, the display device obtains a third pose of the display device itself in a world coordinate system, and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and the fourth pose, including:
acquiring a third image under the visual angle of the display equipment;
extracting the features of the third image and matching the third image with a feature map to obtain a third pose of the display equipment under a world coordinate system;
calculating a conversion relation between a camera coordinate system of the first camera and a display equipment coordinate system according to the third pose and the fourth pose;
and calculating a fifth pose of the current frame of the target object under a coordinate system of display equipment according to the first pose and the conversion relation.
According to a sixth aspect of the embodiments of the present application, there is provided an augmented reality apparatus based on object 3D pose visual tracking, applied to a display device, comprising:
the second receiving module is used for receiving the first pose sent by the first processing unit, wherein the first pose is obtained by the first processing unit acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle, calculating pose transformation increments of a current frame and a previous frame of a target object under an object center coordinate system according to the first energy function and the second energy function, and updating the pose of the target object under the first camera view angle according to the pose transformation increments;
And the second calculation module is used for acquiring a third pose of the display device itself in the world coordinate system and calculating a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system.
According to a seventh aspect of embodiments herein, there is provided an electronic apparatus comprising:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of the first, third or fifth aspects.
According to an eighth aspect of embodiments herein, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to any one of the first, third or fifth aspects.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the above embodiments, the first processing unit updates the pose of the target object at the first camera view angle and the second processing unit updates the pose of the target object at the second camera view angle, each according to the pose transformation increment; the computation is thus distributed over multiple processing units, which improves tracking efficiency and reduces the configuration requirements and the complexity of the device. The first pose is sent to the display device, which calculates the fifth pose of the current frame of the target object in the display device coordinate system and presents the visual tracking effect of the target object, so that augmented reality is realized while object tracking and presentation are separated, further reducing the configuration requirements and the complexity of the device.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart illustrating an augmented reality method (applied to a first processing unit) based on object 3D pose visual tracking according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating step S12, according to an exemplary embodiment.
Fig. 3 is a block diagram of an augmented reality apparatus (applied to a first processing unit) based on object 3D pose visual tracking according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating an augmented reality method (applied to the second processing unit) based on object 3D pose visual tracking according to an exemplary embodiment.
Fig. 5 is a block diagram of an augmented reality apparatus (applied to a second processing unit) based on object 3D pose visual tracking according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating an augmented reality method (applied to a display device) based on object 3D pose visual tracking according to an exemplary embodiment.
Fig. 7 is a flowchart illustrating step S52, according to an exemplary embodiment.
Fig. 8 is a block diagram of an augmented reality apparatus (applied to a display device) based on object 3D pose visual tracking according to an exemplary embodiment.
FIG. 9 is a schematic diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when" or "in response to determining", depending on the context.
Explanation of terms:
object center coordinate system: a coordinate system with the center of the target object as the origin of coordinates.
First camera coordinate system: the center of the camera lens is the origin; the line-of-sight direction of the camera is the positive Z-axis direction; with the camera placed horizontally, vertically downward is the positive Y-axis direction; and since the coordinate system is right-handed, the positive X-axis direction follows.
Second camera coordinate system: the same way as the first camera coordinate system is established.
Display device coordinate system: established in the same way as the first camera coordinate system.
It should be noted that the world coordinate system in the present invention is established as follows: markers are placed at suitable positions in the scene, and the world coordinate system is established with the center of the most suitable marker as the origin, where the most suitable marker is the one closest to the centers of the images captured by the two cameras; more markers can be placed to improve precision.
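As a purely illustrative sketch (not part of the patent), marker-based world-frame initialization could look as follows. It assumes ArUco markers and the classic cv2.aruco API from opencv-contrib; the marker dictionary, marker size and function name are hypothetical choices:

```python
import cv2
import numpy as np

# Hypothetical sketch: establish the world frame from the marker closest to
# the image center, assuming ArUco markers and a calibrated camera (K, dist).
def world_pose_from_markers(gray, K, dist, marker_len_m=0.05):
    aruco = cv2.aruco
    dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
    corners, ids, _ = aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    # Pick the marker whose center is closest to the image center.
    h, w = gray.shape
    image_center = np.array([w / 2.0, h / 2.0])
    dists = [np.linalg.norm(c.reshape(4, 2).mean(axis=0) - image_center)
             for c in corners]
    best = int(np.argmin(dists))
    # Pose of the chosen marker (= world origin) in the camera frame.
    rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(
        [corners[best]], marker_len_m, K, dist)
    R, _ = cv2.Rodrigues(rvecs[0])
    T_cam_world = np.eye(4)
    T_cam_world[:3, :3] = R
    T_cam_world[:3, 3] = tvecs[0].reshape(3)
    return T_cam_world  # world frame expressed in the camera frame
```

Inverting T_cam_world then gives the camera's pose in the world coordinate system (the fourth pose for the first camera, the third pose for the display device).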
It should be noted that the first processing unit is a processing unit connected to the first camera that receives the pictures captured by the first camera; the second processing unit is connected to the second camera and receives the pictures captured by the second camera; and the display device comprises a third camera, a processing unit and a display unit. The display device can be used in two forms. One is a fixed screen with several cameras set up in the scene: by switching among the cameras, the object can be observed from different viewing angles or in the coordinate system of the fixed screen, which enriches what augmented reality can express. The other is a free camera that can localize its own pose relative to the scene: by separating object tracking from display positioning, accuracy is greatly improved while the configuration requirements on the head-mounted display device are reduced; the free camera may be a head-mounted display or a mobile device, such as a mobile phone or a tablet computer.
Example 1:
fig. 1 is a flowchart illustrating an augmented reality method based on object 3D pose visual tracking according to an exemplary embodiment, where the method is applied to a first processing unit, as shown in fig. 1, and may include the following steps:
Step S11: acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle;
step S12: calculating pose transformation increments of the current frame and the previous frame of the target object under an object center coordinate system according to the first energy function and the second energy function;
step S13: updating the pose of the target object under the first camera view angle according to the pose transformation increment to obtain a first pose;
step S14: sending the pose transformation increment to a second processing unit so that the second processing unit updates the pose of the target object under a second camera view angle according to the pose transformation increment to obtain a second pose;
step S15: and sending the first pose to display equipment so that the display equipment acquires a third pose of the display equipment under a world coordinate system and calculates a fifth pose of the current frame of the target object under the coordinate system of the display equipment according to the first pose, the third pose and the fourth pose, wherein the fourth pose is the pose of the first camera under the world coordinate system.
As can be seen from the foregoing embodiments, in the present application the first processing unit updates the pose of the target object at the first camera view angle and the second processing unit updates the pose of the target object at the second camera view angle, each according to the pose transformation increment, so that the computation is distributed over multiple processing units, reducing the configuration requirements and the complexity of the device. The first pose is sent to the display device, which calculates the fifth pose of the current frame of the target object in the display device coordinate system and presents the visual tracking effect of the target object, so that augmented reality is realized while object tracking and presentation are separated, further reducing the configuration requirements and the complexity of the device.
In a specific implementation of step S11, a first energy function at a first camera view angle and a second energy function at a second camera view angle are obtained;
specifically, the first energy function is an energy function of the target object in the target object coordinate system with respect to a first image at a first camera view angle, and the second energy function is an energy function of the target object in the target object coordinate system with respect to a second image at a second camera view angle.
Specifically, the first energy function is calculated by the first processing unit from the first image at the first camera view angle and the sixth pose, i.e., the pose of the previous frame of the target object in the first camera coordinate system. The energy function can be calculated in several ways, for example by a region-based or an edge-based monocular three-dimensional tracking method; the general form is:

$E_i = E(\mathbf{I}_i, \boldsymbol{\xi}_i, \mathbf{T}_i, \Delta\boldsymbol{\xi})$

wherein $E_i$ denotes the energy function of the region-based monocular three-dimensional tracking method at the $i$-th camera view angle, $\mathbf{I}_i$ denotes the image at the $i$-th camera view angle, $\boldsymbol{\xi}_i$ denotes the previous-frame pose of the target object in the camera-$i$ coordinate system expressed in the Lie algebra, $\mathbf{T}_i$ denotes the previous-frame pose of the target object in the camera-$i$ coordinate system expressed in the Lie group, and $\Delta\boldsymbol{\xi}$ denotes the pose transformation between the current frame and the previous frame in the object center coordinate system.
Specifically, the second energy function is calculated by the second processing unit, using the same calculation method as for the first energy function, from the second image at the second camera view angle and the seventh pose, i.e., the pose of the previous frame of the target object in the second camera coordinate system, and is then sent to the first processing unit, which thereby obtains the second energy function.
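In practice, "sending the second energy function" between processing units means shipping enough of it for the joint optimization. The following is a minimal sketch under the assumption (ours, not the patent's) that each unit transmits the linearized quantities of its energy term, namely the Jacobian and the Hessian, rather than the function itself; all names are illustrative:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EnergyTerm:
    """Linearization of one camera's energy function E_i about the current pose."""
    J: np.ndarray  # (6,)  gradient of E_i w.r.t. the se(3) pose increment
    H: np.ndarray  # (6,6) Gauss-Newton approximation of the Hessian of E_i

def serialize(term: EnergyTerm) -> bytes:
    # 6 + 36 doubles: a fixed-size payload that is easy to stream between units.
    return np.concatenate([term.J, term.H.ravel()]).astype(np.float64).tobytes()

def deserialize(buf: bytes) -> EnergyTerm:
    v = np.frombuffer(buf, dtype=np.float64)
    return EnergyTerm(J=v[:6].copy(), H=v[6:].reshape(6, 6).copy())
```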
In a specific implementation of step S12, calculating pose transformation increments of the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function;
specifically, as shown in fig. 2, this step may include the following sub-steps:
step S21: calculating a comprehensive energy function of the double-view joint optimization according to the first energy function and the second energy function;
In particular, the first energy function is directly added to the second energy function; direct addition is chosen mainly because the two cameras contribute comparably to the adjustment of the object pose. Moreover, because the image information from the two camera view angles is combined, the optimized object pose is more accurate.
Step S22: minimizing the comprehensive energy function to obtain a preliminary pose transformation increment;
Specifically, either the Gauss-Newton iteration method or the Levenberg-Marquardt (LM) method can be chosen to minimize the comprehensive energy function. In this embodiment the Gauss-Newton iteration method is used: the LM method must compute an optimal gradient step in every iteration, and the time consumed by this step cannot be bounded in advance, which would greatly affect the stability of the program. The Gauss-Newton method is therefore chosen, which guarantees real-time performance while achieving precision competitive with the LM method.
In the Gauss-Newton iteration method, the preliminary pose transformation increment $\Delta\boldsymbol{\xi}$ is calculated as follows:

$\Delta\boldsymbol{\xi} = -\Big(\sum_{i=1}^{2} \mathbf{H}_i\Big)^{-1} \sum_{i=1}^{2} \mathbf{J}_i^{\mathsf{T}}$

$\mathbf{J}_i = \frac{\partial E_i(\mathbf{I}_i)}{\partial \Delta\boldsymbol{\xi}}, \qquad \mathbf{H}_i = \frac{\partial^2 E_i(\mathbf{I}_i)}{\partial \Delta\boldsymbol{\xi}^2}$

wherein $\mathbf{J}_i$ and $\mathbf{H}_i$ respectively denote the Jacobian matrix and the Hessian matrix of the energy function of camera $i$, $\mathbf{H}^{-1}$ denotes the inverse of the matrix $\mathbf{H}$, $\mathbf{J}^{\mathsf{T}}$ denotes the transpose of the matrix $\mathbf{J}$, and $\mathbf{I}_i$ is the image at the $i$-th camera view angle.
Step S23: updating a sixth pose according to the preliminary pose transformation increment, wherein the sixth pose is the pose of a frame on the target object under the camera coordinate system of the first camera;
step S24: updating the first energy function according to the first image and the sixth pose under the first camera view angle;
step S25: sending the preliminary pose transformation increment to a second camera, so that the second camera updates a seventh pose according to the preliminary pose transformation increment, updates a second energy function according to a second image under a second camera view angle and the seventh pose, and sends the updated second energy function to the first camera, wherein the seventh pose is a pose of a frame on the target object under a camera coordinate system of the second camera;
Step S26: receiving an updated second energy function transmitted by the second camera;
step S27: repeating, a preset number of times, the steps from calculating the comprehensive energy function for dual-view joint optimization according to the first energy function and the second energy function through receiving the updated second energy function sent by the second camera, and obtaining the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system from the preliminary pose transformation increments obtained in each repetition.
In the implementation of steps S23 to S27, assuming the preset number of iterations is N, there are N pose increments between each pair of consecutive frames. After each pose increment is obtained, the current pose (a 4 x 4 matrix) is multiplied by the pose increment (also a 4 x 4 matrix), and the result is taken as the previous-frame pose input for the next iteration. The advantage of this iterative design is that it avoids an excessive or insufficient pose increment caused by a single optimization pass: no truly optimal step size is computed; instead, the Hessian matrix acts as the step size.
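The dual-view iteration described above can be sketched as follows; this is an illustration rather than the patent's implementation, and `linearize_energy` (which supplies each view's Jacobian and Hessian) and `se3_exp` (a sketch of which follows the first-pose update formula in step S13 below) are assumed helpers:

```python
import numpy as np

def joint_pose_update(img1, img2, T1_prev, T2_prev, linearize_energy,
                      se3_exp, n_iters=7):
    """Dual-view Gauss-Newton sketch covering steps S21 to S27."""
    for _ in range(n_iters):  # the preset number of repetitions
        # Each processing unit linearizes its own energy function.
        J1, H1 = linearize_energy(img1, T1_prev)  # first camera view
        J2, H2 = linearize_energy(img2, T2_prev)  # received from the second unit
        # Minimize the comprehensive (summed) energy: (H1+H2) dxi = -(J1+J2).
        dxi = np.linalg.solve(H1 + H2, -(J1 + J2))  # preliminary increment
        dT = se3_exp(dxi)  # map the se(3) increment to a 4x4 matrix
        # Multiply the current poses by the increment; the results become the
        # previous-frame poses for the next iteration.
        T1_prev = T1_prev @ dT
        T2_prev = T2_prev @ dT
    return T1_prev, T2_prev
```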
In a specific implementation of step S13, updating the pose of the target object at the first camera view angle according to the pose transformation increment to obtain a first pose;
Specifically, the first pose $\mathbf{T}_1$ is updated as follows:

$\mathbf{T}_1 \leftarrow \mathbf{T}_1 \cdot \exp(\Delta\boldsymbol{\xi})$

wherein $\exp(\cdot)$ denotes the mapping of the pose transformation increment from the Lie algebra space to the Lie group space (i.e., matrix space), and $\mathbf{T}_1$ denotes the pose of the target object at the first camera view angle.
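The exponential map used here is standard SE(3) machinery rather than anything patent-specific; a sketch follows (with the twist ordered translation-first, a convention chosen for this illustration):

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Map a twist xi = (v, w) in se(3) to a 4x4 transform in SE(3)."""
    v, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    W = skew(w)
    if theta < 1e-10:
        R = np.eye(3) + W  # first-order approximation near zero rotation
        V = np.eye(3)
    else:
        a = np.sin(theta) / theta
        b = (1.0 - np.cos(theta)) / theta**2
        c = (theta - np.sin(theta)) / theta**3
        R = np.eye(3) + a * W + b * (W @ W)  # Rodrigues' formula
        V = np.eye(3) + b * W + c * (W @ W)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T
```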
In a specific implementation of step S14, the pose transformation increment is sent to a second processing unit, so that the second processing unit updates the pose of the target object at a second camera viewing angle according to the pose transformation increment to obtain a second pose;
specifically, the process of updating the pose of the target object at the second camera viewing angle by the second processing unit according to the pose transformation increment to obtain the second pose is the same as the process of updating the pose of the target object at the first camera viewing angle by the first processing unit according to the pose transformation increment in step S13 to obtain the first pose, which is not repeated here.
In a specific implementation of step S15, the first pose is sent to a display device, so that the display device obtains a third pose of the display device in a world coordinate system, and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose, and the fourth pose, where the fourth pose is a pose of the first camera in the world coordinate system.
Specifically, this step implements the augmented reality presentation on the display device side.
Corresponding to the embodiment of the augmented reality method based on the object 3D pose visual tracking, the application also provides an embodiment of an augmented reality device based on the object 3D pose visual tracking.
Fig. 3 is a block diagram illustrating an augmented reality device based on object 3D pose visual tracking according to an example embodiment. Referring to fig. 3, the apparatus applied to the first processing unit may include:
a first obtaining module 21, configured to obtain a first energy function at a first camera view angle and a second energy function at a second camera view angle;
the first calculating module 22 is configured to calculate pose transformation increments of the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function;
the first updating module 23 is configured to update the pose of the target object under the first camera viewing angle according to the pose transformation increment to obtain a first pose;
the first sending module 24 is configured to send the pose transformation increment to a second processing unit, so that the second processing unit updates the pose of the target object at a second camera viewing angle according to the pose transformation increment to obtain a second pose;
A second sending module 25, configured to send the first pose to a display device, so that the display device obtains a third pose of the display device itself in a world coordinate system, and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose, and the fourth pose, where the fourth pose is a pose of the first camera in the world coordinate system.
Example 2:
fig. 4 is a flowchart illustrating an augmented reality method based on object 3D pose visual tracking according to an exemplary embodiment, where the method is applied in a second processing unit, as shown in fig. 4, and may include the following steps:
step S31: sending a second energy function at a second camera view angle to a first processing unit, so that the first processing unit acquires a first energy function at a first camera view angle and the second energy function at the second camera view angle, calculates the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function, updates the pose of the target object at the first camera view angle according to the pose transformation increment to obtain a first pose, and sends the first pose to a display device, so that the display device acquires a third pose of itself in the world coordinate system and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system;
Step S32: receiving the pose transformation increment sent by the first processing unit;
step S33: and updating the pose of the target object under the second camera view angle according to the pose transformation increment to obtain a second pose.
Specifically, the specific implementation of steps S31-S33 is described in embodiment 1, and will not be described herein.
Corresponding to the embodiment of the augmented reality method based on the object 3D pose visual tracking, the application also provides an embodiment of an augmented reality device based on the object 3D pose visual tracking.
FIG. 5 is a block diagram of an augmented reality apparatus for visual tracking based on 3D pose of an object, according to an example embodiment. Referring to fig. 5, the apparatus is applied to a second processing unit, and may include
A third sending module 31, configured to send the second energy function at the second camera view angle to the first processing unit, so that the first processing unit acquires the first energy function at the first camera view angle and the second energy function at the second camera view angle, calculates the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function, updates the pose of the target object at the first camera view angle according to the pose transformation increment to obtain the first pose, and sends the first pose to the display device, so that the display device acquires a third pose of itself in the world coordinate system and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system;
A first receiving module 32, configured to receive the pose transformation increment sent by the first processing unit;
and the second updating module 33 is configured to update the pose of the target object at the second camera viewing angle according to the pose transformation increment to obtain a second pose.
Example 3:
fig. 6 is a flowchart illustrating an augmented reality method based on object 3D pose visual tracking according to an exemplary embodiment, where the method is applied to a display device, as shown in fig. 6, and may include the following steps:
step S41: receiving a first pose sent by a first processing unit, wherein the first pose is obtained by acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle by the first processing unit, calculating pose transformation increments of a current frame and a previous frame of a target object under an object center coordinate system according to the first energy function and the second energy function, and updating the pose of the target object under the first camera view angle according to the pose transformation increments;
step S42: acquiring a third pose of the display device itself in the world coordinate system, and calculating a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system.
Specifically, the specific implementation of step S41 is described in embodiment 1, and is not described herein again.
Specifically, as shown in fig. 7, step S42 includes the following sub-steps:
step S51: the display equipment acquires a third image under the visual angle of the display equipment;
step S52: extracting features of the third image and matching the third image with a feature map to obtain a third pose of the display device under a world coordinate system;
specifically, we use SLAM algorithms, such as PATM, ORB-SLAM, etc., to obtain the third pose. In the embodiment, the ORB-SLAM algorithm is used, because the features extracted by the ORB-SLAM method are richer and more robust, and a loop detection strategy is added, the ORB-SLAM method can better adapt to relatively larger scenes. Firstly, using a feature extraction function in an ORB-SLAM algorithm to extract ORB features (namely FAST feature points and BRIEF feature descriptors) of an image, and then using a feature matching function to reposition the pose of the camera in a scene, wherein the feature map is constructed by using the ORB features, and the ORB features are composed of the FAST feature points and the BRIEF feature descriptors together.
Step S53: calculating a conversion relation between a camera coordinate system of the first camera and a display equipment coordinate system according to the third pose and the fourth pose;
Specifically, the conversion relationship $\mathbf{T}_{dc}$ between the camera coordinate system of the first camera and the display device coordinate system is calculated as follows:

$\mathbf{T}_{dc} = \mathbf{T}_{wd}^{-1}\,\mathbf{T}_{wc}$

wherein $\mathbf{T}_{wc}$ denotes the pose of the first camera in the world coordinate system, i.e., the fourth pose, and $\mathbf{T}_{wd}$ denotes the pose of the display device in the world coordinate system, i.e., the third pose.
Step S54: and calculating a fifth pose of the current frame of the target object under a coordinate system of display equipment according to the first pose and the conversion relation.
Specifically, the fifth pose $\mathbf{T}_{do}$ of the current frame of the target object in the display device coordinate system is obtained from the first pose $\mathbf{T}_{co}$ and the conversion relationship calculated in step S53:

$\mathbf{T}_{do} = \mathbf{T}_{dc}\,\mathbf{T}_{co}$
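A minimal numeric sketch of this transform chain (the names are illustrative):

```python
import numpy as np

def fifth_pose(T_wc, T_wd, T_co):
    """Steps S53 and S54: object pose in the display device frame.
    T_wc: first camera in the world frame (fourth pose).
    T_wd: display device in the world frame (third pose).
    T_co: object in the first camera frame (first pose)."""
    T_dc = np.linalg.inv(T_wd) @ T_wc  # first camera in the device frame
    return T_dc @ T_co                 # object in the device frame
```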
Corresponding to the embodiment of the augmented reality method based on the object 3D pose visual tracking, the application also provides an embodiment of an augmented reality device based on the object 3D pose visual tracking.
Fig. 8 is a block diagram illustrating an augmented reality device based on object 3D pose visual tracking according to an example embodiment. Referring to fig. 8, the apparatus is applied to a display device, and may include:
a second receiving module 41, configured to receive the first pose sent by the first processing unit, where the first processing unit obtains a first energy function at a first camera view angle and a second energy function at a second camera view angle, calculates a pose transformation increment of a current frame and a previous frame of the target object in an object center coordinate system according to the first energy function and the second energy function, and updates the pose of the target object at the first camera view angle according to the pose transformation increment to obtain the first pose; sending the pose transformation increment to a second processing unit so that the second processing unit updates the pose of the target object under a second camera view angle according to the pose transformation increment to obtain a second pose;
And a second calculating module 42, configured to acquire a third pose of the display device itself in the world coordinate system and calculate a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, where the fourth pose is the pose of the first camera in the world coordinate system.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
For the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement without inventive effort.
Example 4:
correspondingly, the present application further provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the augmented reality method based on object 3D pose visual tracking as described above. Fig. 9 is a hardware structure diagram of a device with data processing capability in which the augmented reality method based on object 3D pose visual tracking according to an embodiment of the present invention resides; in addition to the processor and memory shown in fig. 9, such a device may also include other hardware according to its actual function, which is not described again.
Example 5:
accordingly, the present application further provides a computer readable storage medium, on which computer instructions are stored, and the instructions, when executed by a processor, implement the augmented reality method based on object 3D pose visual tracking as described above. The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any device with data processing capability described in any of the previous embodiments. The computer readable storage medium may also be an external storage device of such a device, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card, or a Flash memory card (Flash Card) provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of any device with data processing capability. The computer readable storage medium is used for storing the computer program and the other programs and data required by that device, and may also be used for temporarily storing data that has been output or is to be output.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the present disclosure. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. An augmented reality method based on object 3D pose visual tracking is applied to a first processing unit and comprises the following steps:
acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle;
calculating pose transformation increments of the current frame and the previous frame of the target object under an object center coordinate system according to the first energy function and the second energy function;
Updating the pose of the target object under the first camera view angle according to the pose transformation increment to obtain a first pose;
sending the pose transformation increment to a second processing unit so that the second processing unit updates the pose of the target object at a second camera view angle according to the pose transformation increment to obtain a second pose;
sending the first pose to display equipment so that the display equipment can acquire a third pose of the display equipment under a world coordinate system and calculate a fifth pose of the current frame of the target object under the coordinate system of the display equipment according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera under the world coordinate system;
wherein calculating the pose transformation increment of the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function comprises the following steps:
calculating a comprehensive energy function of the double-view joint optimization according to the first energy function and the second energy function;
minimizing the comprehensive energy function to obtain a preliminary pose transformation increment;
updating a sixth pose according to the preliminary pose transformation increment, wherein the sixth pose is the pose of the previous frame of the target object in the camera coordinate system of the first camera;
Updating the first energy function according to the first image and the sixth pose under the first camera view angle;
sending the preliminary pose transformation increment to a second camera so that the second camera updates a seventh pose according to the preliminary pose transformation increment, updates the second energy function according to a second image at the second camera view angle and the seventh pose, and sends the updated second energy function to the first camera, wherein the seventh pose is the pose of the previous frame of the target object in the camera coordinate system of the second camera;
receiving an updated second energy function sent by the second camera;
and repeating, a preset number of times, the steps from calculating the comprehensive energy function for dual-view joint optimization according to the first energy function and the second energy function through receiving the updated second energy function sent by the second camera, and obtaining the pose transformation increment of the current frame and the previous frame of the target object in the object center coordinate system from the preliminary pose transformation increments obtained in each repetition.
2. An augmented reality apparatus based on object 3D pose visual tracking, characterized in that it is applied to a first processing unit and comprises:
The first acquisition module is used for acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle;
the first calculation module is used for calculating pose transformation increments of a current frame and a previous frame of the target object under an object center coordinate system according to the first energy function and the second energy function;
the first updating module is used for updating the pose of the target object under the first camera view angle according to the pose transformation increment to obtain a first pose;
the first sending module is used for sending the pose transformation increment to a second processing unit so that the second processing unit updates the pose of the target object under a second camera view angle according to the pose transformation increment to obtain a second pose;
the second sending module is used for sending the first pose to display equipment so that the display equipment acquires a third pose of the display equipment under a world coordinate system and calculates a fifth pose of the current frame of the target object under the coordinate system of the display equipment according to the first pose, the third pose and the fourth pose, wherein the fourth pose is a pose of the first camera under the world coordinate system;
wherein calculating the pose transformation increment of the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function comprises the following steps:
calculating a comprehensive energy function of the double-view joint optimization according to the first energy function and the second energy function;
minimizing the comprehensive energy function to obtain a preliminary pose transformation increment;
updating a sixth pose according to the preliminary pose transformation increment, wherein the sixth pose is the pose of the previous frame of the target object in the camera coordinate system of the first camera;
updating the first energy function according to the first image and the sixth pose under the first camera view angle;
sending the preliminary pose transformation increment to a second camera so that the second camera updates a seventh pose according to the preliminary pose transformation increment, updates a second energy function according to a second image under a second camera view angle and the seventh pose, and sends the updated second energy function to the first camera, wherein the seventh pose is a pose of a frame on the target object under a camera coordinate system of the second camera;
Receiving an updated second energy function sent by the second camera;
and repeating the step of calculating the comprehensive energy function of the double-view joint optimization to the step of receiving the updated second energy function sent by the second camera for a preset number of times according to the first energy function and the second energy function, and obtaining the pose transformation increment of the current frame and the previous frame of the target object in the object center coordinate system according to the primary pose transformation increment obtained in the repeated process each time.
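For orientation, the functional modules of the device claim above can be pictured as methods on a class, as in the sketch below; the ToySolver and the queue-based channels are assumptions, since the claims specify neither the optimizer nor the transport between the processing units and the display device.

```python
# A hypothetical mapping of claim 2's modules onto a class; everything not
# named in the claim (ToySolver, Queue channels) is an assumption.
from queue import Queue

class ToySolver:
    def minimize(self, e1, e2):        # joint minimization of both energies
        return (e1 + e2) / 2.0
    def apply(self, pose, increment):  # additive pose update, for illustration
        return pose + increment

class FirstProcessingUnit:
    def __init__(self, solver, to_second_unit, to_display):
        self.solver = solver
        self.to_second_unit = to_second_unit
        self.to_display = to_display

    def acquire(self, energy_view1, energy_view2):
        # first acquisition module: energies from both camera view angles
        self.e1, self.e2 = energy_view1, energy_view2

    def step(self, pose_view1):
        # first calculation module: increment in the object center frame
        increment = self.solver.minimize(self.e1, self.e2)
        # first updating module: the first pose under the first view angle
        first_pose = self.solver.apply(pose_view1, increment)
        # first sending module: the second unit derives the second pose from this
        self.to_second_unit.put(increment)
        # second sending module: the display device goes on to compute the fifth pose
        self.to_display.put(first_pose)
        return first_pose

unit = FirstProcessingUnit(ToySolver(), Queue(), Queue())
unit.acquire(0.9, 1.1)
print(unit.step(0.0))   # prints 1.0, the updated first pose
```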
3. An augmented reality method based on object 3D pose visual tracking, characterized in that the method is applied to a second processing unit and comprises the following steps:
sending a second energy function under a second camera view angle to a first processing unit, so that the first processing unit acquires a first energy function under a first camera view angle and the second energy function under the second camera view angle, calculates the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function, updates the pose of the target object under the first camera view angle according to the pose transformation increment to obtain a first pose, and sends the first pose to a display device, so that the display device acquires a third pose of itself in the world coordinate system and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system;
receiving the pose transformation increment sent by the first processing unit;
updating the pose of the target object under the second camera view angle according to the pose transformation increment to obtain a second pose;
wherein calculating the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function comprises the following steps:
calculating a comprehensive energy function of the double-view joint optimization according to the first energy function and the second energy function;
minimizing the comprehensive energy function to obtain a preliminary pose transformation increment;
updating a sixth pose according to the preliminary pose transformation increment, wherein the sixth pose is the pose of the previous frame of the target object in the camera coordinate system of the first camera;
updating the first energy function according to the first image under the first camera view angle and the sixth pose;
sending the preliminary pose transformation increment to a second camera, so that the second camera updates a seventh pose according to the preliminary pose transformation increment, updates a second energy function according to a second image under a second camera view angle and the seventh pose, and sends the updated second energy function to the first camera, wherein the seventh pose is the pose of the previous frame of the target object in the camera coordinate system of the second camera;
receiving an updated second energy function sent by the second camera;
and repeating, a preset number of times, the steps from calculating the comprehensive energy function of the double-view joint optimization through receiving the updated second energy function sent by the second camera, and obtaining the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system from the preliminary pose transformation increments obtained in each repetition.
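The division of labor between the two processing units in claims 1 to 4 can likewise be sketched as a message exchange. The snippet below is schematic and assumption-laden: in-process queues stand in for the unspecified transport, and the per-view energy functions are toy scalar residuals whose joint minimization reduces to an average.

```python
# A schematic sketch of the two-unit exchange (claims 1-4). Queues and
# threads are stand-ins for the unspecified transport between units.
from queue import Queue
from threading import Thread

def residual(observation, pose):
    # Toy per-view energy term: discrepancy between the view and the pose.
    return observation - pose

def first_unit(obs1, pose1, from_second, to_second, out):
    e1 = residual(obs1, pose1)               # first energy function
    e2 = from_second.get()                   # second energy function arrives
    increment = (e1 + e2) / 2.0              # minimize the comprehensive energy
    to_second.put(increment)                 # share the pose increment
    out["first_pose"] = pose1 + increment    # updated pose, first view angle

def second_unit(obs2, pose2, to_first, from_first, out):
    to_first.put(residual(obs2, pose2))      # send the second energy function
    increment = from_first.get()             # receive the computed increment
    out["second_pose"] = pose2 + increment   # updated pose, second view angle

to_first, to_second, out = Queue(), Queue(), {}
t1 = Thread(target=first_unit, args=(1.1, 0.0, to_first, to_second, out))
t2 = Thread(target=second_unit, args=(0.9, 0.0, to_first, to_second, out))
t1.start(); t2.start(); t1.join(); t2.join()
print(out)   # both views agree on the consensus pose 1.0
```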
4. An augmented reality device based on object 3D pose visual tracking, characterized in that it is applied to a second processing unit and comprises:
a third sending module, configured to send a second energy function under the second camera view angle to the first processing unit, so that the first processing unit acquires a first energy function under a first camera view angle and the second energy function under the second camera view angle, calculates the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function, updates the pose of the target object under the first camera view angle according to the pose transformation increment to obtain a first pose, and sends the first pose to a display device, so that the display device acquires a third pose of itself in the world coordinate system and calculates a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system;
a first receiving module, configured to receive the pose transformation increment sent by the first processing unit;
a second updating module, configured to update the pose of the target object under the second camera view angle according to the pose transformation increment to obtain a second pose;
wherein calculating the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function comprises the following steps:
calculating a comprehensive energy function of the double-view joint optimization according to the first energy function and the second energy function;
minimizing the comprehensive energy function to obtain a preliminary pose transformation increment;
updating a sixth pose according to the preliminary pose transformation increment, wherein the sixth pose is the pose of the previous frame of the target object in the camera coordinate system of the first camera;
updating the first energy function according to the first image under the first camera view angle and the sixth pose;
sending the preliminary pose transformation increment to a second camera, so that the second camera updates a seventh pose according to the preliminary pose transformation increment, updates a second energy function according to a second image under a second camera view angle and the seventh pose, and sends the updated second energy function to the first camera, wherein the seventh pose is the pose of the previous frame of the target object in the camera coordinate system of the second camera;
receiving an updated second energy function sent by the second camera;
and repeating, a preset number of times, the steps from calculating the comprehensive energy function of the double-view joint optimization through receiving the updated second energy function sent by the second camera, and obtaining the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system from the preliminary pose transformation increments obtained in each repetition.
5. An augmented reality method based on object 3D pose visual tracking, characterized in that the method is applied to a display device and comprises the following steps:
receiving a first pose sent by a first processing unit, wherein the first pose is obtained by the first processing unit acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle, calculating the pose transformation increment between the current frame and the previous frame of a target object in the object center coordinate system according to the first energy function and the second energy function, and updating the pose of the target object under the first camera view angle according to the pose transformation increment;
acquiring a third pose of the display device itself in the world coordinate system, and calculating a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system;
wherein calculating the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function comprises the following steps:
calculating a comprehensive energy function of the double-view joint optimization according to the first energy function and the second energy function;
minimizing the comprehensive energy function to obtain a preliminary pose transformation increment;
updating a sixth pose according to the preliminary pose transformation increment, wherein the sixth pose is the pose of the previous frame of the target object in the camera coordinate system of the first camera;
updating the first energy function according to the first image under the first camera view angle and the sixth pose;
sending the preliminary pose transformation increment to a second camera, so that the second camera updates a seventh pose according to the preliminary pose transformation increment, updates a second energy function according to a second image under a second camera view angle and the seventh pose, and sends the updated second energy function to the first camera, wherein the seventh pose is the pose of the previous frame of the target object in the camera coordinate system of the second camera;
receiving an updated second energy function sent by the second camera;
and repeating, a preset number of times, the steps from calculating the comprehensive energy function of the double-view joint optimization through receiving the updated second energy function sent by the second camera, and obtaining the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system from the preliminary pose transformation increments obtained in each repetition.
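In homogeneous-matrix notation, assuming the convention that a pose ${}^{a}T_{b}$ maps frame-$b$ coordinates into frame $a$ (the claims do not fix a convention), the fifth pose computed above follows from the chain

$${}^{d}T_{o} = \left({}^{w}T_{d}\right)^{-1}\,{}^{w}T_{c}\,{}^{c}T_{o},$$

where ${}^{c}T_{o}$ is the first pose (object in the first camera frame), ${}^{w}T_{d}$ is the third pose (display device in the world frame), and ${}^{w}T_{c}$ is the fourth pose (first camera in the world frame); the product $\left({}^{w}T_{d}\right)^{-1}\,{}^{w}T_{c}$ is the conversion relation made explicit in claim 6 below.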
6. The method of claim 5, wherein acquiring the third pose of the display device itself in the world coordinate system and calculating the fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and the fourth pose comprises:
acquiring a third image under the view angle of the display device;
extracting features of the third image and matching them against a feature map to obtain the third pose of the display device in the world coordinate system;
calculating a conversion relation between the camera coordinate system of the first camera and the display device coordinate system according to the third pose and the fourth pose;
and calculating the fifth pose of the current frame of the target object in the display device coordinate system according to the first pose and the conversion relation.
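A minimal numeric sketch of that chain with 4x4 homogeneous matrices follows; the example translations are invented for illustration, and only the order of composition reflects the claim.

```python
# A minimal sketch of claim 6's pose chain. The poses are invented; the
# composition order (conversion relation from the third and fourth poses,
# then the fifth pose from the first pose) follows the claim.
import numpy as np

def make_pose(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# first pose: target object in the first camera's coordinate system
T_cam_obj = make_pose(np.eye(3), [0.0, 0.0, 2.0])
# third pose: display device in the world coordinate system (obtained by
# matching features of the third image against the feature map)
T_world_disp = make_pose(np.eye(3), [0.5, 0.0, 0.0])
# fourth pose: first camera in the world coordinate system
T_world_cam = make_pose(np.eye(3), [-0.5, 0.0, 0.0])

# conversion relation: camera coordinate system -> display device coordinate
# system, computed from the third and fourth poses
T_disp_cam = np.linalg.inv(T_world_disp) @ T_world_cam

# fifth pose: current frame of the target object in the display device's
# coordinate system, from the first pose and the conversion relation
T_disp_obj = T_disp_cam @ T_cam_obj
print(T_disp_obj[:3, 3])   # -> [-1.  0.  2.] in display coordinates
```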
7. An augmented reality device based on object 3D pose visual tracking, characterized in that it is applied to a display device and comprises:
a second receiving module, configured to receive the first pose sent by the first processing unit, wherein the first pose is obtained by the first processing unit acquiring a first energy function under a first camera view angle and a second energy function under a second camera view angle, calculating the pose transformation increment between the current frame and the previous frame of a target object in the object center coordinate system according to the first energy function and the second energy function, and updating the pose of the target object under the first camera view angle according to the pose transformation increment;
a second calculation module, configured to acquire a third pose of the display device itself in the world coordinate system and to calculate a fifth pose of the current frame of the target object in the display device coordinate system according to the first pose, the third pose and a fourth pose, wherein the fourth pose is the pose of the first camera in the world coordinate system;
wherein calculating the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system according to the first energy function and the second energy function comprises the following steps:
calculating a comprehensive energy function of the double-view joint optimization according to the first energy function and the second energy function;
minimizing the comprehensive energy function to obtain a preliminary pose transformation increment;
updating a sixth pose according to the preliminary pose transformation increment, wherein the sixth pose is the pose of the previous frame of the target object in the camera coordinate system of the first camera;
updating the first energy function according to the first image under the first camera view angle and the sixth pose;
sending the preliminary pose transformation increment to a second camera, so that the second camera updates a seventh pose according to the preliminary pose transformation increment, updates a second energy function according to a second image under a second camera view angle and the seventh pose, and sends the updated second energy function to the first camera, wherein the seventh pose is the pose of the previous frame of the target object in the camera coordinate system of the second camera;
receiving an updated second energy function sent by the second camera;
and repeating, a preset number of times, the steps from calculating the comprehensive energy function of the double-view joint optimization through receiving the updated second energy function sent by the second camera, and obtaining the pose transformation increment between the current frame and the previous frame of the target object in the object center coordinate system from the preliminary pose transformation increments obtained in each repetition.
8. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of claim 1, claim 3, or any one of claims 5 to 6.
9. A computer-readable storage medium having computer instructions stored thereon which, when executed by a processor, implement the steps of the method according to claim 1, claim 3, or any one of claims 5 to 6.
CN202210073191.9A 2022-01-21 2022-01-21 Augmented reality method and device based on object 3D pose visual tracking Active CN114092526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210073191.9A CN114092526B (en) 2022-01-21 2022-01-21 Augmented reality method and device based on object 3D pose visual tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210073191.9A CN114092526B (en) 2022-01-21 2022-01-21 Augmented reality method and device based on object 3D pose visual tracking

Publications (2)

Publication Number Publication Date
CN114092526A (en) 2022-02-25
CN114092526B (en) 2022-06-28

Family

ID=80309059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210073191.9A Active CN114092526B (en) 2022-01-21 2022-01-21 Augmented reality method and device based on object 3D pose visual tracking

Country Status (1)

Country Link
CN (1) CN114092526B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113029128A * 2021-03-25 2021-06-25 Zhejiang SenseTime Technology Development Co., Ltd. Visual navigation method and related device, mobile terminal and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9264702B2 (en) * 2013-08-19 2016-02-16 Qualcomm Incorporated Automatic calibration of scene camera for optical see-through head mounted display
US10580148B2 (en) * 2017-12-21 2020-03-03 Microsoft Technology Licensing, Llc Graphical coordinate system transform for video frames
CN110505468B * 2018-05-18 2021-02-05 Beijing LLVision Technology Co., Ltd. Test calibration and deviation correction method for augmented reality display equipment
CN111427447B * 2020-03-04 2023-08-29 Qingdao Pico Technology Co., Ltd. Virtual keyboard display method, head-mounted display device and system
CN111897429A * 2020-07-30 2020-11-06 Tencent Technology (Shenzhen) Co., Ltd. Image display method, image display device, computer equipment and storage medium
CN112132940A * 2020-09-16 2020-12-25 Beijing SenseTime Technology Development Co., Ltd. Display method, display device and storage medium
CN112367426B * 2020-11-09 2021-06-04 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Virtual object display method and device, storage medium and electronic equipment
CN112700505B * 2020-12-31 2022-11-22 Shandong University Binocular three-dimensional tracking-based hand and eye calibration method and device and storage medium
CN113793389B * 2021-08-24 2024-01-26 State Grid Gansu Electric Power Company Virtual-real fusion calibration method and device for augmented reality system

Also Published As

Publication number Publication date
CN114092526A (en) 2022-02-25

Similar Documents

Publication Publication Date Title
WO2019205852A1 (en) Method and apparatus for determining pose of image capture device, and storage medium therefor
Kawai et al. Diminished reality based on image inpainting considering background geometry
WO2019242262A1 (en) Augmented reality-based remote guidance method and device, terminal, and storage medium
JP2020507850A (en) Method, apparatus, equipment, and storage medium for determining the shape of an object in an image
US8928736B2 (en) Three-dimensional modeling apparatus, three-dimensional modeling method and computer-readable recording medium storing three-dimensional modeling program
CN106875431B (en) Image tracking method with movement prediction and augmented reality implementation method
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN107358633A (en) Join scaling method inside and outside a kind of polyphaser based on 3 points of demarcation things
JP2013141049A (en) Server and terminal utilizing world coordinate system database
KR20150013709A (en) A system for mixing or compositing in real-time, computer generated 3d objects and a video feed from a film camera
JP2002259992A (en) Image processor and its method as well as program code and storage medium
CN104156998A (en) Implementation method and system based on fusion of virtual image contents and real scene
CN112819892B (en) Image processing method and device
CN109785373A (en) A kind of six-freedom degree pose estimating system and method based on speckle
CN110580720A (en) camera pose estimation method based on panorama
US8509522B2 (en) Camera translation using rotation from device
JP2016066187A (en) Image processor
CN108205820B (en) Plane reconstruction method, fusion method, device, equipment and storage medium
CN114092526B (en) Augmented reality method and device based on object 3D pose visual tracking
WO2023160445A1 (en) Simultaneous localization and mapping method and apparatus, electronic device, and readable storage medium
CN112017242A (en) Display method and device, equipment and storage medium
CN111915739A (en) Real-time three-dimensional panoramic information interactive information system
WO2023098737A1 (en) Three-dimensional reconstruction method, electronic device, and computer-readable storage medium
CN115278049A (en) Shooting method and device thereof
JP6341540B2 (en) Information terminal device, method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant