CN114445451A - Planar image tracking method, terminal and storage medium - Google Patents

Planar image tracking method, terminal and storage medium

Info

Publication number
CN114445451A
Authority
CN
China
Prior art keywords: images, pose, image, frames, global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111552723.9A
Other languages
Chinese (zh)
Inventor
熊友谊
张文金
熊爱武
王勇
张孝文
Current Assignee
Guangzhou Okay Information Technology Co ltd
Original Assignee
Guangzhou Okay Information Technology Co ltd
Application filed by Guangzhou Okay Information Technology Co ltd
Priority to CN202111552723.9A
Publication of CN114445451A

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a planar image tracking method, a terminal and a storage medium, wherein the method comprises the following steps: S101: acquiring the global pose of the first three frames of images and the relative pose between two adjacent frames of images according to the correspondence of the feature points of the target image in the first three frames of images, and acquiring the 3D coordinates of the feature points; S102: calculating the global pose of each frame of image after the first three frames of images and the relative pose between two adjacent frames of images according to the 3D coordinates, and optimizing the global pose and the 3D coordinates; S103: calculating relative poses respectively through the feature points and through the global poses of two adjacent frames of images, optimizing the relative pose according to the error between them, solving the optimal global pose through the optimized relative pose, and tracking in real time by using the global pose. The invention avoids the need for a large number of parameters and the resulting increase in computation, places low requirements on hardware, realizes efficient and stable tracking of the image, and reduces the cost of motion tracking.

Description

Planar image tracking method, terminal and storage medium
Technical Field
The present invention relates to the field of image tracking, and in particular, to a planar image tracking method, a terminal, and a storage medium.
Background
With the development of computer vision technology, the requirements of people on the visual experience are higher and higher, and the rising of AR is undoubtedly well meeting the requirements of people, wherein the motion tracking based on the plane image is one of the core technologies and applications of AR.
Motion tracking of a planar image mainly comprises the steps of detection and identification, feature point matching, and pose calculation and optimization, and can be carried out by means of deep learning. In recent years many good models have emerged in the deep learning direction, with good results on semantic understanding, semantic segmentation and the like. However, a deep neural network model has a huge number of parameters, needs many high-quality training and test sets, involves a large amount of computation and places high requirements on hardware, which increases the cost of using motion tracking and limits its use and development.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a plane image tracking method, a terminal and a storage medium, after an image of a target image is shot, 3D coordinates of feature points are obtained according to the feature points in the first three frames of images of the target image, the global pose and the relative pose of the subsequently shot image are calculated according to the 3D coordinates, the global pose is optimized through the error of the relative pose, and the image is subjected to motion tracking according to the optimized global pose, so that the problems that a large number of parameters are needed to be used for calculation and the calculation amount is increased are solved, the requirement on hardware is low, the image is efficiently and stably tracked, the cost of motion tracking is reduced, and the use range and the development direction of the motion tracking are expanded.
In order to solve the above problems, the present invention adopts a technical solution as follows: a planar image tracking method, the planar image tracking method comprising: s101: selecting a target image, acquiring the global pose of the first three frames of images and the relative pose between two adjacent frames of images according to the corresponding relation of the feature points of the target image in the first three frames of images of the target image, and acquiring the 3D coordinates of the feature points through the global pose and the adjacent pose; s102: calculating the global pose of each frame of image after the first three frames of images and the relative pose between the two adjacent frames of images according to the 3D coordinates, and optimizing the global pose and the 3D coordinates through a back projection error function; s103: and respectively calculating relative poses through the feature points and the global pose of two adjacent frames of images, optimizing the relative poses according to errors of the relative poses, calculating the global pose of the images through the optimized relative poses, and tracking and optimizing the pose of each frame of image in real time by using the global pose.
Further, the step of selecting the target image specifically includes: acquiring selection conditions of a target image, selecting the target image according to the selection conditions, and extracting and storing image coordinates and feature vectors of feature points in the target image.
Further, the step of obtaining the global pose of the first three frames of images and the relative pose between the two adjacent frames of images according to the corresponding relationship of the feature points of the target images in the first three frames of images of the target image specifically includes: determining a coordinate system where the shooting equipment is located when the first frame of image is shot as a world coordinate system, acquiring the corresponding relation of the feature points of the target image in the first three frames of images, and calculating the relative pose between the two adjacent frames of images and the global pose of the second frame of image and the third frame of image relative to the world coordinate system according to the corresponding relation.
Further, the step of obtaining the corresponding relationship between the feature points of the target image in the first three frames of images specifically includes: and extracting the characteristic points and the characteristic vectors in the target image, matching the characteristic points and the characteristic vectors with the characteristic points and the characteristic vectors of the target image to calculate a homography matrix, and acquiring the corresponding relation of the characteristic points in the first three frames of images according to the homography matrix.
Further, the step of acquiring the 3D coordinates of the feature points by the global pose and the adjacent pose further includes: and optimizing the global pose and the 3D coordinate through a back projection error function.
Further, the step of calculating the global pose of each frame of image after the first three frames of images and the relative pose between two adjacent frames of images according to the 3D coordinates specifically includes: and generating a 2D-3D point pair according to the corresponding relation of the feature points of the current frame and the first frame and the 3D coordinates of the feature points, calculating the global pose of the current frame through the 2D-3D point pair, and calculating the relative pose of the current frame and the previous frame through the corresponding relation of the feature points of the current frame and the previous frame.
Further, the step of optimizing the global pose and the 3D coordinates by a back-projection error function further includes: putting the current frame into a sliding window, and removing the image with the smallest sequence number from the sliding window when the number of frames in the sliding window is greater than a preset value.
Further, the step of optimizing the relative pose according to the error of the relative pose specifically includes: and acquiring the total error of the images in the sliding window according to the error, and optimizing the relative pose of two adjacent frames of images in the sliding window according to the total error.
Based on the same inventive concept, the invention also provides an intelligent terminal, which comprises a processor and a memory, wherein the processor is connected with the memory in a communication way, and the memory stores a computer program, and the computer program is used for executing the plane image tracking method.
Based on the same inventive concept, the present invention further proposes a computer-readable storage medium storing program data for executing the planar image tracking method as described above.
Compared with the prior art, the invention has the beneficial effects that: after the image of the target image is shot, the 3D coordinates of the feature points are obtained according to the feature points in the first three frames of images of the target image, the global pose and the relative pose of the subsequently shot image are calculated according to the 3D coordinates, the global pose is optimized through the error of the relative pose, and the motion tracking of the image is carried out according to the optimized global pose, so that the problems that a large number of parameters are needed to be used for calculation and the calculated amount is increased are solved, the requirement on hardware is low, the efficient and stable tracking of the image is realized, the cost of the motion tracking is reduced, and the use range and the development direction of the motion tracking are expanded.
Drawings
FIG. 1 is a flowchart illustrating a planar image tracking method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a planar image tracking method according to another embodiment of the present invention;
FIG. 3 is a block diagram of an embodiment of an intelligent terminal according to the present invention;
fig. 4 is a block diagram of an embodiment of a computer-readable storage medium of the present invention.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It should be noted that the various embodiments of the present disclosure, described and illustrated in the figures herein generally, may be combined with one another without conflict, and that the structural components or functional modules therein may be arranged and designed in a variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Referring to fig. 1 to 2, fig. 1 is a flowchart illustrating a planar image tracking method according to an embodiment of the present invention; FIG. 2 is a flowchart illustrating a planar image tracking method according to another embodiment of the present invention. The planar image tracking method of the present invention will be described in detail with reference to fig. 1 to 2.
In other embodiments, the device executing the planar image tracking method may be a mobile phone, a notebook computer, a desktop computer, a smart wristband, smart glasses, or any other device capable of calculating the relative pose and the global pose.
The planar image tracking method comprises the following steps:
S101: selecting a target image, acquiring the global pose of the first three frames of images and the relative pose between two adjacent frames of images according to the correspondence of the feature points of the target image in the first three frames of images of the target image, and acquiring the 3D coordinates of the feature points through the global pose and the adjacent pose.
In this embodiment, the target image is a planar image, and the step of selecting the target image specifically includes: and acquiring the selection condition of the target image, selecting the target image according to the selection condition, and extracting and storing the image coordinates and the feature vectors of the feature points in the target image. And ensuring that the target image meets the requirement of feature detection and the final motion tracking effect by setting a selection condition.
In one embodiment, the selection conditions are: the target image must have sufficient texture, so the area of solid (only one color) regions cannot exceed 10% of the total area; the texture should follow a natural distribution rather than simple repetition; the texture should fill the entire image, covering at least 90% of it; and the image cannot be too small, with a suggested minimum side length of 300 pixels. An image meeting these conditions is taken as the target image, SIFT feature detection is performed on it, its feature points and feature vectors are extracted, and the image coordinates of the feature points and the feature vectors are stored.
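The selection check above can be sketched as a quick screening routine. The function name, the gradient threshold, and the proxy of counting near-zero-gradient pixels as solid regions are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def passes_selection(gray, min_side=300, max_flat_ratio=0.10, flat_thresh=4.0):
    """Screen a candidate target image against the selection conditions.

    gray: 2-D uint8 array. min_side enforces the suggested minimum of
    300 pixels; max_flat_ratio enforces "solid regions no more than 10%
    of the area". Pixels whose local gradient magnitude falls below
    flat_thresh are counted as belonging to a solid (single-color)
    region; this proxy and its threshold are illustrative choices.
    """
    h, w = gray.shape
    if min(h, w) < min_side:                       # size cannot be too small
        return False
    gy, gx = np.gradient(gray.astype(np.float64))  # local intensity change
    flat = (np.abs(gx) + np.abs(gy)) < flat_thresh
    return flat.mean() <= max_flat_ratio           # solid-area limit
```

The natural-distribution and 90%-coverage conditions would need further checks; this sketch covers only the two numeric criteria.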
In this embodiment, a camera on a mobile phone may be used to capture multiple frames of the target image, with motion tracking performed on the images captured by that camera; alternatively, the mobile phone may be connected to an external camera, and the images it captures are used for motion tracking.
The steps of acquiring the global position and posture of the first three frames of images and the relative position and posture between the two adjacent frames of images according to the corresponding relation of the feature points of the target images in the first three frames of images of the target image specifically comprise: determining a coordinate system where the shooting equipment is located when the first frame of image is shot as a world coordinate system, acquiring the corresponding relation of the feature points of the target image in the first three frames of images, and calculating the relative pose between the two adjacent frames of images and the global pose of the second frame of image and the third frame of image relative to the world coordinate system according to the corresponding relation.
Specifically, the step of obtaining the corresponding relationship between the feature points of the target image in the first three frames of images specifically includes: extracting the characteristic points and the characteristic vectors in the target image, matching the characteristic points and the characteristic vectors with the characteristic points and the characteristic vectors of the target image to calculate a homography matrix, and acquiring the corresponding relation of the characteristic points in the first three frames of images according to the homography matrix.
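The homography computation just described can be sketched with a plain direct linear transform (DLT) under the assumption of noiseless matches; in practice it is paired with RANSAC as the text describes, for which OpenCV's `cv2.findHomography(src, dst, cv2.RANSAC)` is the usual off-the-shelf routine. The function name and coordinate layout here are illustrative:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of H such that dst ~ H @ src in homogeneous coordinates.

    src, dst: (N, 2) arrays of matched feature-point image coordinates,
    N >= 4. Each correspondence contributes two linear constraints on
    the nine entries of H; the solution is the SVD null vector.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]        # fix the projective scale
```

Feature-point correspondences between frames then follow by mapping each stored target-image feature through the per-frame homographies.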
In this embodiment, after the global pose and the adjacent pose are obtained, a triangulation algorithm is adopted to calculate the 3D coordinates of the feature points of the target image based on the global pose and the adjacent pose, wherein the 3D coordinates are the coordinates of the feature points in the world coordinate system.
In a specific embodiment, the coordinate system of the camera corresponding to the first frame image is taken as the world coordinate system. SIFT feature detection is performed in turn on the first three frames and feature points and feature vectors are extracted; RANSAC matching against the features of the target image is performed for each frame, homography matrices are calculated, and the feature point correspondences between the three frames are obtained from the homography matrices. From these correspondences the relative pose between two adjacent frames and the poses (global poses) of the second and third frame images relative to the world coordinate system (the coordinate system of the first frame image) are calculated. The 3D coordinates of the feature points of the target image are then calculated with a triangulation algorithm.
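The triangulation step can be sketched as a linear (DLT) two-view triangulation; the helper name and the use of plain 3x4 projection matrices P = K[R|t] are assumptions for illustration:

```python
import numpy as np

def triangulate_point(P1, P2, u1, u2):
    """Linear triangulation of one feature point seen in two frames.

    P1, P2: 3x4 projection matrices K[R|t] of the two views.
    u1, u2: (x, y) image coordinates of the matched feature point.
    Returns the 3D point in the world (first-frame) coordinate system.
    """
    A = np.array([
        u1[0] * P1[2] - P1[0],   # x1 * p3 - p1 = 0
        u1[1] * P1[2] - P1[1],   # y1 * p3 - p2 = 0
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # homogeneous solution is the null vector
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize
```

With noisy poses the linear solution is only an initial value, which is why the text follows it with back-projection-error optimization.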
Because noise exists in the data of the global pose and the 3D coordinates, the step of acquiring the 3D coordinates of the feature points through the global pose and the adjacent pose further comprises the following steps: and optimizing the global pose and the 3D coordinate through a back projection error function.
In this embodiment, the back projection error function is:

$$E^{*},\,X_i^{*} \;=\; \underset{E\in SE(3),\;X_i\in\mathbb{R}^{3\times 1}}{\arg\min}\;\sum_i \left\| u_i - \pi\!\left(K\,E\,X_i\right) \right\|^{2} \tag{1}$$

where K is the camera intrinsic matrix, E ∈ SE(3) is the global pose of the current frame to be optimized, u_i is the image coordinate of the feature point projected onto the current frame, X_i is the 3D coordinate of the feature point just calculated, X_i^* is the optimized 3D coordinate, π(·) denotes perspective division (dehomogenization), SE(3) denotes the three-dimensional Euclidean transformation group, and R^{i×j} denotes a matrix of i rows and j columns with real elements. Because the feature-point data are much smaller than the data of the whole image, computation speed and real-time performance are improved.
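A minimal numeric sketch of evaluating this back-projection cost (evaluation only; the patent minimizes it over E and the X_i, which in practice is done with an iterative solver such as Gauss-Newton). The function name and array shapes are illustrative:

```python
import numpy as np

def backprojection_error(K, E, X, u):
    """Evaluate sum_i || u_i - pi(K E X_i) ||^2.

    K: 3x3 camera intrinsic matrix.
    E: 4x4 global pose (world-to-camera rigid transform).
    X: (N, 3) 3D feature-point coordinates in the world frame.
    u: (N, 2) observed image coordinates on the current frame.
    """
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous 3D points
    cam = (E @ Xh.T)[:3]                       # transform into camera frame
    proj = K @ cam                             # pinhole projection
    uv = (proj[:2] / proj[2]).T                # perspective division pi(.)
    return float(np.sum((uv - u) ** 2))
```

The cost depends only on the feature points, not on whole images, matching the text's point about computation speed.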
S102: and calculating the global pose of each frame of image after the first three frames of images and the relative pose between the two adjacent frames of images according to the 3D coordinates, and optimizing the global pose and the 3D coordinates through a back projection error function.
The step of calculating the global pose of each frame of image after the previous three frames of images and the relative pose between the two adjacent frames of images according to the 3D coordinates specifically comprises the following steps: and generating a 2D-3D point pair according to the corresponding relation of the feature points of the current frame and the first frame and the 3D coordinates of the feature points, calculating the global pose of the current frame through the 2D-3D point pair, and calculating the relative pose of the current frame and the previous frame through the corresponding relation of the feature points of the current frame and the previous frame.
In one embodiment, after the initial pose values of the first three frames and the 3D coordinates of the feature points are obtained, SIFT feature detection is performed on each subsequent frame, RANSAC feature matching against the target image is performed, a homography matrix is calculated, and the feature point correspondences of the image pair are obtained from the homography matrix. First, 2D-3D point pairs (the 3D feature points and the 2D points projected onto the current frame) are obtained from the feature point correspondence between the current frame and the first frame, and the pose of the frame relative to the first frame is calculated with the EPnP algorithm; then the essential matrix is obtained from the feature point correspondence between the current frame and the previous frame, and the relative pose is solved from it.
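The essential-matrix relation used in the last step can be sketched from its definition E = [t]_x R together with the epipolar constraint it must satisfy; the helper names are illustrative, and in practice routines such as OpenCV's `cv2.findEssentialMat` / `cv2.recoverPose` (and `cv2.solvePnP` with the `SOLVEPNP_EPNP` flag for the 2D-3D step) would be used:

```python
import numpy as np

def essential_from_pose(R, t):
    """Essential matrix E = [t]_x R for the relative pose (R, t)."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])   # skew-symmetric matrix [t]_x
    return tx @ R

def epipolar_residual(E, x1, x2):
    """Epipolar constraint x2^T E x1, zero for a true correspondence.

    x1, x2: normalized homogeneous image coordinates (K^-1 applied)
    of the same feature point in the previous and current frame.
    """
    return float(x2 @ E @ x1)
```

Estimating E in the reverse direction, from eight or more such constraints, and decomposing it back into (R, t) is what the matching step actually performs.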
In this embodiment, the back projection error function is the same as the one used in the previous step:

$$E^{*},\,X_i^{*} \;=\; \underset{E\in SE(3),\;X_i\in\mathbb{R}^{3\times 1}}{\arg\min}\;\sum_i \left\| u_i - \pi\!\left(K\,E\,X_i\right) \right\|^{2} \tag{1}$$

where K is the camera intrinsic matrix, E ∈ SE(3) is the global pose of the current frame to be optimized, u_i is the image coordinate of the feature point projected onto the current frame, X_i is the 3D coordinate of the feature point (using the value optimized last time), X_i^* is the optimized 3D coordinate, SE(3) denotes the three-dimensional Euclidean transformation group, and R^{i×j} denotes a matrix of i rows and j columns with real elements.
Wherein, the step of optimizing the global pose and the 3D coordinates by a back projection error function further comprises: putting the current frame into a sliding window, and removing the image with the smallest sequence number from the sliding window when the number of frames in the sliding window is greater than a preset value.
In this embodiment, the size of the sliding window is 6 frames, the step size is one frame, and the preset value is 6.
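This sliding-window bookkeeping maps directly onto a bounded deque, which evicts the frame with the smallest sequence number exactly as described; `WINDOW_SIZE` mirrors the 6-frame preset of this embodiment:

```python
from collections import deque

WINDOW_SIZE = 6  # preset value from this embodiment; step size is one frame

# A deque with maxlen implements the removal rule: appending a new frame
# when the window already holds WINDOW_SIZE frames drops the frame with
# the smallest sequence number.
window = deque(maxlen=WINDOW_SIZE)
for frame_id in range(10):   # frames 0..9 arrive in order
    window.append(frame_id)
```

After ten frames the window holds only frames 4 through 9, the most recent six.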
S103: and respectively calculating relative poses through the feature points and the global poses of two adjacent frames of images, optimizing the relative poses according to errors of the relative poses, calculating the global poses of the images through the optimized relative poses, and tracking and optimizing the poses of each frame of image in real time by using the global poses.
And calculating the relative pose according to the corresponding relation of the feature points in the two adjacent frames of images, acquiring the global pose of the two adjacent frames of images, calculating the relative pose between the two adjacent frames of images by using the global pose, and acquiring the error between the two calculated relative poses.
In this embodiment, the step of optimizing the relative pose according to the error of the relative pose specifically includes: and acquiring the total error of the images in the sliding window according to the error, and optimizing the relative pose of two adjacent frames of images in the sliding window through the total error.
In one embodiment, the relative pose calculated from the feature points between adjacent frames is denoted $\hat{T}_{ij}$, and the relative pose derived from the global poses of the adjacent frames is denoted $T_{ij} = T_j T_i^{-1}$. The error $e_{ij}$ between the two (representing the degree of deviation between the relative pose $T_{ij}$ calculated from the global poses and the relative pose $\hat{T}_{ij}$ calculated from the feature points) is defined as:

$$e_{ij} = \ln\!\left(\hat{T}_{ij}^{-1}\, T_j\, T_i^{-1}\right)^{\vee} \tag{2}$$

The total error is denoted $E_{\mathrm{total}}$:

$$E_{\mathrm{total}} = \sum_{(i,j)\in N} e_{ij}^{\top} e_{ij} \tag{3}$$

where N denotes the set of adjacent frame pairs within the sliding window, $T_i \in SE(3)$ represents the global pose of the i-th frame, $T_j$ the global pose of the j-th frame, and SE(3) the three-dimensional Euclidean transformation group. Formula (3) is then minimized using the C++ open-source library g2o to obtain the optimal relative poses, the optimal global pose of each frame in the sliding window is solved from the optimal relative poses, and each subsequent frame is processed in the same loop for continuous real-time motion tracking and pose optimization (solving the optimized 3D coordinates of the feature points).
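A numeric sketch of the windowed consistency cost: deviation of the discrepancy transform from the identity is used here as a simplified stand-in for the SE(3) log-map residual of formula (2), and the data layout is an assumption (in practice the cost is handed to g2o as a pose graph and minimized):

```python
import numpy as np

def total_window_error(T_hat, T_global, pairs):
    """Sum the pose-consistency error over adjacent frame pairs.

    T_hat[(i, j)]: 4x4 relative pose measured from feature points.
    T_global[i]:   4x4 global pose of frame i.
    pairs:         adjacent (i, j) index pairs inside the window.
    The Frobenius distance of T_hat_ij^-1 (T_j T_i^-1) from the
    identity replaces the SE(3) log-map error for this sketch.
    """
    total = 0.0
    for i, j in pairs:
        T_ij = T_global[j] @ np.linalg.inv(T_global[i])  # pose-derived
        D = np.linalg.inv(T_hat[(i, j)]) @ T_ij          # discrepancy
        total += np.linalg.norm(D - np.eye(4)) ** 2
    return total
```

When the feature-point measurements agree with the global poses the cost vanishes, and any inconsistency makes it strictly positive, which is the quantity the optimizer drives down.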
Has the beneficial effects that: after the image of the target image is shot by the planar image tracking method, the 3D coordinates of the feature points are obtained according to the feature points in the first three frames of images of the target image, the global pose and the relative pose of the subsequently shot images are calculated according to the 3D coordinates, the global pose is optimized through the error of the relative pose, and the motion tracking of the image is carried out according to the optimized global pose, so that the problems that a large number of parameters are needed to be used for calculation and the calculated amount is increased are solved, the requirement on hardware is low, the efficient and stable tracking of the image is realized, the cost of the motion tracking is reduced, and the use range and the development direction of the motion tracking are expanded.
Based on the same inventive concept, the present invention further provides an intelligent terminal, please refer to fig. 3, fig. 3 is a structural diagram of an embodiment of the intelligent terminal of the present invention, and the intelligent terminal of the present invention is described with reference to fig. 3.
In this embodiment, the intelligent terminal includes a processor and a memory, the processor is connected to the memory in communication, and the memory stores a computer program, and the computer program is used to execute the plane image tracking method according to the above embodiment.
In some embodiments, the memory may include, but is not limited to, high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Based on the same inventive concept, the present invention further provides a computer-readable storage medium, please refer to fig. 4, fig. 4 is a structural diagram of an embodiment of the computer-readable storage medium of the present invention, and the computer-readable storage medium of the present invention is described with reference to fig. 4.
In the present embodiment, a computer-readable storage medium stores program data used for executing the planar image tracking method as described in the above embodiments.
The computer-readable storage medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The computer-readable storage medium may be a stand-alone product that has not been installed in a computer device, or a component already in use by a computer device.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A planar image tracking method, comprising:
s101: selecting a target image, acquiring the global pose of the first three frames of images and the relative pose between two adjacent frames of images according to the corresponding relation of the feature points of the target image in the first three frames of images of the target image, and acquiring the 3D coordinates of the feature points through the global pose and the adjacent pose;
s102: calculating the global pose of each frame of image after the first three frames of images and the relative pose between the two adjacent frames of images according to the 3D coordinates, and optimizing the global pose and the 3D coordinates through a back projection error function;
s103: and respectively calculating relative poses through the feature points and the global pose of two adjacent frames of images, optimizing the relative poses according to errors of the relative poses, calculating the global pose of the images through the optimized relative poses, and tracking and optimizing the pose of each frame of image in real time by using the global pose.
2. The method for tracking a planar image according to claim 1, wherein the step of selecting the target image specifically comprises:
and acquiring the selection condition of the target image, selecting the target image according to the selection condition, and extracting and storing the image coordinates and the feature vectors of the feature points in the target image.
3. The planar image tracking method according to claim 1, wherein the step of obtaining the global poses of the first three frames of images and the relative poses between two adjacent frames of images according to the corresponding relationship of the feature points of the target image in the first three frames of images of the target image specifically comprises:
determining a coordinate system where the shooting equipment is located when the first frame of image is shot as a world coordinate system, acquiring the corresponding relation of the feature points of the target image in the first three frames of images, and calculating the relative pose between the two adjacent frames of images and the global pose of the second frame of image and the third frame of image relative to the world coordinate system according to the corresponding relation.
4. The planar image tracking method according to claim 3, wherein the step of obtaining the correspondence of the target image's feature points in the first three frames specifically comprises:
extracting the feature points and feature vectors of each frame, matching them against the stored feature points and feature vectors of the target image to compute a homography matrix, and obtaining the correspondence of the feature points in the first three frames from the homography matrix.
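The homography in claim 4 can be estimated from matched point pairs with the standard Direct Linear Transform (DLT); the sketch below is a generic reconstruction of that technique, not the patent's specific implementation (a practical system would add RANSAC for outlier rejection):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H @ src.

    src, dst: (N, 2) arrays of matched image points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        # Each correspondence gives two linear constraints on the nine
        # entries of H: h1.p - u*(h3.p) = 0 and h2.p - v*(h3.p) = 0.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity so that H[2, 2] == 1
```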
5. The planar image tracking method according to claim 4, wherein, after the step of acquiring the 3D coordinates of the feature points from the global pose and the relative pose between adjacent frames, the method further comprises:
optimizing the global pose and the 3D coordinates through a back-projection error function.
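A back-projection (reprojection) error function of the kind claim 5 minimizes might look like the following; the intrinsic matrix K and the (R, t) pose parameterization are illustrative assumptions. The optimization itself would typically be a nonlinear least-squares solver (e.g. Gauss-Newton) driven by this cost:

```python
import numpy as np

def back_projection_error(K, R, t, pts3d, pts2d):
    """Sum of squared pixel residuals between observed feature points
    and the projections of their 3D coordinates under pose (R, t)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])            # 3x4 projection matrix
    X = np.hstack([pts3d, np.ones((len(pts3d), 1))])   # homogeneous 3D points
    proj = (P @ X.T).T
    proj = proj[:, :2] / proj[:, 2:3]                  # perspective division
    return float(np.sum((proj - pts2d) ** 2))
```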
6. The planar image tracking method according to claim 5, wherein the step of calculating, from the 3D coordinates, the global pose of each frame after the first three frames and the relative pose between adjacent frames comprises:
generating 2D-3D point pairs from the correspondence between the feature points of the current frame and the first frame together with the 3D coordinates of those feature points, calculating the global pose of the current frame from the 2D-3D point pairs, and calculating the relative pose between the current frame and the previous frame from the correspondence of their feature points.
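Recovering a camera pose from 2D-3D point pairs is a PnP problem. One standard linear solution — not necessarily the one the patent uses — estimates the 3x4 projection matrix by DLT; the pose can then be factored out given the intrinsics:

```python
import numpy as np

def projection_from_point_pairs(pts3d, pts2d):
    """Linear (DLT) estimate of the 3x4 projection matrix from >= 6
    non-coplanar 2D-3D point pairs; the matrix encodes the global pose
    (combined with the intrinsics, and defined only up to scale)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(np.asarray(pts3d, float),
                                 np.asarray(pts2d, float)):
        # Two linear constraints per 2D-3D pair on the 12 entries of P.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.array(rows))
    P = vt[-1].reshape(3, 4)
    return P / np.linalg.norm(P)  # normalize away the scale ambiguity
```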
7. The planar image tracking method according to claim 1, wherein, after the step of optimizing the global pose and the 3D coordinates through a back-projection error function, the method further comprises:
placing the current frame into a sliding window and, when the number of frames in the sliding window exceeds a preset value, removing the image with the smallest sequence number from the sliding window.
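The sliding-window bookkeeping of claim 7 is straightforward; a minimal sketch keyed by frame sequence number (the dict-based store is an implementation choice, not the patent's):

```python
class SlidingWindow:
    """Fixed-capacity frame window: once the frame count exceeds the
    preset value, the frame with the smallest sequence number is evicted."""

    def __init__(self, max_frames):
        self.max_frames = max_frames
        self.frames = {}  # sequence number -> per-frame data (pose, features, ...)

    def push(self, seq, frame_data):
        self.frames[seq] = frame_data
        if len(self.frames) > self.max_frames:
            del self.frames[min(self.frames)]  # drop the oldest frame
```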
8. The planar image tracking method according to claim 7, wherein the step of optimizing the relative pose according to its error specifically comprises:
obtaining the total error of the images in the sliding window from the individual errors, and optimizing the relative poses of adjacent frames in the sliding window according to the total error.
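The total error that claim 8 optimizes over can be sketched as a sum of per-pair discrepancies between the currently estimated relative pose of each adjacent frame pair and the one measured from their feature matches; the Frobenius-norm metric is an illustrative assumption, and a real system would feed this cost to a nonlinear solver:

```python
import numpy as np

def window_total_error(pose_pairs):
    """Total error over a sliding window: sum over adjacent frame pairs
    of the discrepancy between the estimated and the measured relative
    pose (each given as a 4x4 transform)."""
    return float(sum(np.linalg.norm(estimated - measured)
                     for estimated, measured in pose_pairs))
```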
9. An intelligent terminal, characterized in that it comprises a processor and a memory, the processor being communicatively connected to the memory, and the memory storing a computer program for executing the planar image tracking method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that it stores program data for executing the planar image tracking method according to any one of claims 1 to 8.
CN202111552723.9A 2021-12-17 2021-12-17 Planar image tracking method, terminal and storage medium Pending CN114445451A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111552723.9A CN114445451A (en) 2021-12-17 2021-12-17 Planar image tracking method, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111552723.9A CN114445451A (en) 2021-12-17 2021-12-17 Planar image tracking method, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN114445451A true CN114445451A (en) 2022-05-06

Family

ID=81364172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111552723.9A Pending CN114445451A (en) 2021-12-17 2021-12-17 Planar image tracking method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114445451A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342800A (en) * 2023-02-21 2023-06-27 中国航天员科研训练中心 Semantic three-dimensional reconstruction method and system for multi-mode pose optimization
CN116342800B (en) * 2023-02-21 2023-10-24 中国航天员科研训练中心 Semantic three-dimensional reconstruction method and system for multi-mode pose optimization

Similar Documents

Publication Publication Date Title
WO2020238560A1 (en) Video target tracking method and apparatus, computer device and storage medium
Wang et al. Detect globally, refine locally: A novel approach to saliency detection
CN114782691B (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
Zhao et al. Alike: Accurate and lightweight keypoint detection and descriptor extraction
WO2021043168A1 (en) Person re-identification network training method and person re-identification method and apparatus
Wang et al. 360sd-net: 360 stereo depth estimation with learnable cost volume
WO2020119527A1 (en) Human action recognition method and apparatus, and terminal device and storage medium
Zhong et al. High-resolution depth maps imaging via attention-based hierarchical multi-modal fusion
CN111161395B (en) Facial expression tracking method and device and electronic equipment
CN112215050A (en) Nonlinear 3DMM face reconstruction and posture normalization method, device, medium and equipment
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
CN112232134B (en) Human body posture estimation method based on hourglass network and attention mechanism
CN110825900A (en) Training method of feature reconstruction layer, reconstruction method of image features and related device
CN106447762A (en) Three-dimensional reconstruction method based on light field information and system
CN112084849A (en) Image recognition method and device
CN110378250B (en) Training method and device for neural network for scene cognition and terminal equipment
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN115410030A (en) Target detection method, target detection device, computer equipment and storage medium
CN113850136A (en) Yolov5 and BCNN-based vehicle orientation identification method and system
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN114140623A (en) Image feature point extraction method and system
CN116030498A (en) Virtual garment running and showing oriented three-dimensional human body posture estimation method
Dong et al. A colorization framework for monochrome-color dual-lens systems using a deep convolutional network
CN114445451A (en) Planar image tracking method, terminal and storage medium
CN114511682A (en) Three-dimensional scene reconstruction method and device based on laser radar and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination