CN110992400A - Dynamic projection mapping object tracking method and device based on edge - Google Patents
- Publication number
- CN110992400A (application CN201911108272.2A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- edge
- contour
- target
- target object
- Prior art date
- Legal status: Pending (the status is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
All under G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T7/246 — Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/13 — Image analysis; segmentation; edge detection
- G06T7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10016 — Indexing scheme for image analysis or enhancement; image acquisition modality: video; image sequence
Abstract
The application discloses an edge-based dynamic projection mapping object tracking method and device, relating to the field of object tracking. The method comprises the following steps: detecting the contour of a known three-dimensional target model; acquiring a current frame including a target object; and tracking the target object in the current frame in real time, extracting a two-dimensional edge of the target object, putting the contour of the target model into correspondence with the two-dimensional edge, and estimating the three-dimensional pose of the target object from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edge. The device comprises a detection module, an acquisition module, and an estimation module. The method and device realize dynamic projection mapping and improve the real-time performance and robustness of edge-based tracking while retaining the high speed of edge detection.
Description
Technical Field
The present application relates to the field of object tracking, and in particular to an edge-based object tracking method and apparatus for dynamic projection mapping.
Background
In recent years, dynamic projection mapping, which projects an image onto a freely moving real target object, has received much attention as a novel appearance-editing technique. Projecting images onto moving objects requires real-time, high-precision estimation of the object's three-dimensional pose.
Some classical methods, such as the RAPiD method and the line-segment method, obtain edges at high speed by extracting the linear parts of an image. More recently, some researchers have estimated the three-dimensional pose of a target object from the three-dimensional shape obtained with an RGB-D camera. Christoph proposed obtaining the three-dimensional shape of an object by extracting its three-dimensional feature points with a feature-point method, and estimating the object's pose from them.
However, the classical approaches limit the target to objects with a linear shape and have poor accuracy on objects with blurred edges. Although an RGB-D camera readily yields a three-dimensional shape from depth information, the camera's instability and time delay degrade estimation accuracy and misalign the projected image with respect to a moving object. Three-dimensional feature-point extraction and alignment in the feature-point method require substantial processing time, making real-time processing very difficult.
Disclosure of Invention
It is an object of the present application to overcome the above problems, or at least to partially solve or mitigate them.
According to an aspect of the present application, there is provided an edge-based dynamic projection mapping object tracking method, including:
detecting the contour of a known three-dimensional target model;
acquiring a current frame including a target object;
tracking the target object in the current frame in real time, extracting a two-dimensional edge of the target object, putting the contour of the target model into correspondence with the two-dimensional edge, and estimating the three-dimensional pose of the target object from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edge.
Optionally, detecting the contour of the known three-dimensional object model comprises:
uniformly generating sampling points on the known three-dimensional target model, detecting the contour by selecting the sampling points whose brightness gradient exceeds a specified threshold, and mapping the detected three-dimensional contour points to two-dimensional contour points in the two-dimensional edge space.
Optionally, extracting a two-dimensional edge of the target object, and corresponding the contour of the target model to the two-dimensional edge of the target object, includes:
extracting the two-dimensional edge of the target object using a Canny edge detector, where the extracted edges include spurious additional edges arising from non-uniformity of the target object's surface and from objects around its periphery, and each edge carries candidate points;
projecting the contour of the target model near the two-dimensional edge of the target object in the current frame, selecting from the candidate points of the two-dimensional edge those lying on the same edge as the contour of the target model, and establishing the correspondence between the two-dimensional edge and the contour of the target model.
Optionally, selecting the candidate points on the same edge as the contour of the target model includes:
treating the edge whose tangential direction is closest to that of the model contour as the same edge, and selecting the candidate points on that edge.
Optionally, detecting the contour of the known three-dimensional object model comprises:
the contour of the known three-dimensional target model is detected on a GPU, in parallel with the extraction of the two-dimensional edges.
According to another aspect of the present application, there is provided an edge-based dynamic projection mapping object tracking apparatus, comprising:
a detection module configured to detect a contour of a known three-dimensional object model;
an acquisition module configured to acquire a current frame including a target object;
an estimation module configured to track the target object in the current frame in real time, extract a two-dimensional edge of the target object, put the contour of the target model into correspondence with the two-dimensional edge, and estimate the three-dimensional pose of the target object from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edge.
Optionally, the detection module is specifically configured to:
uniformly generating sampling points on the known three-dimensional target model, detecting the contour by selecting the sampling points whose brightness gradient exceeds a specified threshold, and mapping the detected three-dimensional contour points to two-dimensional contour points in the two-dimensional edge space.
Optionally, the estimation module is specifically configured to:
extracting the two-dimensional edge of the target object using a Canny edge detector, where the extracted edges include spurious additional edges arising from non-uniformity of the target object's surface and from objects around its periphery, and each edge carries candidate points;
projecting the contour of the target model near the two-dimensional edge of the target object in the current frame, selecting from the candidate points of the two-dimensional edge those lying on the same edge as the contour of the target model, and establishing the correspondence between the two-dimensional edge and the contour of the target model.
Optionally, the estimation module is specifically configured to:
treating the edge whose tangential direction is closest to that of the model contour as the same edge, and selecting the candidate points on that edge.
Optionally, the detection module is specifically configured to:
the contour of the known three-dimensional target model is detected on a GPU, in parallel with the extraction of the two-dimensional edges.
According to yet another aspect of the application, there is provided a computing device comprising a memory, a processor and a computer program stored in the memory and executable by the processor, wherein the processor implements the method as described above when executing the computer program.
According to yet another aspect of the application, a computer-readable storage medium, preferably a non-volatile readable storage medium, is provided, having stored therein a computer program which, when executed by a processor, implements a method as described above.
According to yet another aspect of the application, there is provided a computer program product comprising computer readable code which, when executed by a computer device, causes the computer device to perform the method described above.
According to the technical scheme of the present application, the contour of a known three-dimensional target model is detected; a current frame including a target object is acquired; the target object in the current frame is tracked in real time; the two-dimensional edge of the target object is extracted; the contour of the target model is put into correspondence with the two-dimensional edge; and the three-dimensional pose of the target object is estimated from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edge. This realizes dynamic projection mapping and improves the real-time performance and robustness of edge detection while maintaining its high speed. The method and device can be applied to a wide range of target objects and can serve as a realistic image-expression technique.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flowchart of a method for edge-based dynamic projection mapping object tracking according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for edge-based dynamic projection mapping object tracking according to another embodiment of the present application;
FIG. 3 is a block diagram of an edge-based dynamic projection mapping object tracking device according to another embodiment of the present application;
FIG. 4 is a block diagram of a computing device according to another embodiment of the present application;
FIG. 5 is a structural diagram of a computer-readable storage medium according to another embodiment of the present application.
Detailed Description
FIG. 1 is a flowchart of a method for edge-based dynamic projection mapping object tracking according to an embodiment of the present application. Referring to fig. 1, the method includes:
101: detecting the contour of a known three-dimensional target model;
102: acquiring a current frame including a target object;
103: the method comprises the steps of tracking a target object in a current frame in real time, extracting a two-dimensional edge of the target object, enabling a contour of a target model to correspond to the two-dimensional edge of the target object, and estimating a three-dimensional posture of the target object from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edge.
In this embodiment, optionally, the detecting the contour of the known three-dimensional object model includes:
uniformly generating sampling points on the known three-dimensional target model, detecting the contour by selecting the sampling points whose brightness gradient exceeds a specified threshold, and mapping the detected three-dimensional contour points to two-dimensional contour points in the two-dimensional edge space.
In this embodiment, optionally, extracting a two-dimensional edge of the target object, and corresponding the contour of the target model to the two-dimensional edge of the target object, includes:
extracting the two-dimensional edge of the target object using a Canny edge detector, where the extracted edges include spurious additional edges arising from non-uniformity of the target object's surface and from objects around its periphery, and each edge carries candidate points;
projecting the contour of the target model near the two-dimensional edge of the target object in the current frame, selecting from the candidate points of the two-dimensional edge those lying on the same edge as the contour of the target model, and establishing the correspondence between the two-dimensional edge and the contour of the target model.
In this embodiment, optionally, selecting the candidate points on the same edge as the contour of the target model includes:
treating the edge whose tangential direction is closest to that of the model contour as the same edge, and selecting the candidate points on that edge.
In this embodiment, optionally, the detecting the contour of the known three-dimensional object model includes:
the contour of the known three-dimensional target model is detected on a GPU, in parallel with the extraction of the two-dimensional edges.
The method provided by this embodiment detects the contour of a known three-dimensional target model, acquires a current frame including a target object, tracks the target object in the current frame in real time, extracts the target object's two-dimensional edge, puts the contour of the target model into correspondence with that edge, and estimates the three-dimensional pose of the target object from the current frame by minimizing the distance between the model contour and the two-dimensional edge. This realizes dynamic projection mapping and improves the real-time performance and robustness of edge detection while maintaining its high speed; the method can be applied to a wide range of target objects and can serve as a realistic image-expression technique.
FIG. 2 is a flowchart of a method for edge-based dynamic projection mapping object tracking according to another embodiment of the present application. Referring to fig. 2, the method includes:
201: uniformly generating sampling points on the known three-dimensional target model, detecting the contour by selecting the sampling points whose brightness gradient exceeds a specified threshold, and mapping the detected three-dimensional contour points to two-dimensional contour points in the two-dimensional edge space;
in this embodiment, since the brightness gradient is generally higher at the edges of the rendered three-dimensional model, the contour can be detected by keeping the sampling points whose brightness gradient exceeds the specified threshold.
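As a rough illustration of step 201 (not the patent's implementation), the gradient-threshold sampling could be sketched as follows; the image representation, threshold value, and function name are assumptions:

```python
import numpy as np

def detect_contour_points(render, sample_pts, grad_thresh=30.0):
    """Keep uniformly sampled model points whose local brightness
    gradient in a rendering of the 3-D target model exceeds a
    threshold (illustrative sketch of step 201).

    render      : 2-D array, rendered image of the target model
    sample_pts  : (N, 2) integer pixel coordinates (x, y) of samples
    grad_thresh : assumed threshold value
    """
    gy, gx = np.gradient(render.astype(float))   # row and column gradients
    mag = np.hypot(gx, gy)                       # brightness-gradient magnitude
    keep = mag[sample_pts[:, 1], sample_pts[:, 0]] > grad_thresh
    return sample_pts[keep]                      # 2-D contour points
```

The surviving points are the two-dimensional contour points that later get matched against image edges.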
202: acquiring a current frame including a target object;
in this embodiment, the target object may have clear edges or blurred edges.
203: tracking the target object in the current frame in real time, and extracting its two-dimensional edge using a Canny edge detector, where the extracted edges include spurious additional edges arising from non-uniformity of the target object's surface and from objects around its periphery, and each edge carries candidate points;
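A minimal stand-in for the edge-candidate extraction in step 203 might look like this. The patent specifies a Canny detector; to stay self-contained, this sketch thresholds the gradient magnitude only (no non-maximum suppression or hysteresis), and all names and values are illustrative:

```python
import numpy as np

def extract_edge_candidates(frame, thresh=40.0):
    """Return edge candidate points and their tangent angles.

    Simplified stand-in for the Canny step: a pixel is a candidate
    when its brightness-gradient magnitude exceeds `thresh`.  The
    tangent (gradient direction rotated by 90 degrees) is kept for
    the later same-edge matching step.
    """
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh)
    tangent = np.arctan2(gy[ys, xs], gx[ys, xs]) + np.pi / 2.0
    return np.column_stack([xs, ys]), tangent    # (x, y) points, angles
```

In a real system the thresholding above would be replaced by a full Canny pass; only the output format (candidate points plus edge direction) matters for the following steps.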
204: projecting the contour of the target model near the two-dimensional edge of the target object in the current frame, treating the edge whose tangential direction is closest as the same edge, selecting from the candidate points of the two-dimensional edge those lying on the same edge as the model contour, and establishing the correspondence between the two-dimensional edge and the contour of the target model;
treating the edge whose tangential direction is closest as the same edge, and selecting the candidate points on that edge, reduces the computation needed for edge-direction analysis and improves real-time processing capacity.
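The "closest tangential direction" rule could be sketched like this; the search radius, signature, and angle folding are assumptions, not part of the patent:

```python
import numpy as np

def match_candidate(proj_pt, proj_tangent, cand_pts, cand_tangents, radius=5.0):
    """For one projected contour point, pick the nearby edge candidate
    whose tangent direction is closest to the contour tangent (sketch
    of step 204).  Candidates farther than `radius` pixels are ignored.
    Returns the index of the chosen candidate, or None.
    """
    d = np.linalg.norm(cand_pts - proj_pt, axis=1)
    near = d < radius
    if not near.any():
        return None
    # Tangents are undirected: fold the angle difference into [0, pi/2]
    diff = np.abs(np.angle(np.exp(2j * (cand_tangents[near] - proj_tangent)))) / 2.0
    return int(np.flatnonzero(near)[np.argmin(diff)])
```

Choosing by tangent similarity rather than raw distance is what lets the method skip candidates that belong to spurious edges crossing near the true contour.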
205: the three-dimensional pose of the target object is estimated from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edges.
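Step 205 minimizes point-to-edge distances over the three-dimensional pose. As a heavily simplified illustration of the least-squares structure only, the following Gauss-Newton sketch estimates a two-dimensional rigid transform between matched point sets; the patent's actual optimization is over a full three-dimensional pose, and everything here is an assumption:

```python
import numpy as np

def estimate_pose_2d(model_pts, edge_pts, iters=20):
    """Gauss-Newton fit of a rigid 2-D transform (theta, tx, ty)
    minimising the summed squared distance between matched model
    contour points and image edge points (toy analogue of step 205)."""
    theta, t = 0.0, np.zeros(2)
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        pred = model_pts @ R.T + t
        r = (edge_pts - pred).ravel()            # stacked residuals
        # Jacobian of each predicted point w.r.t. (theta, tx, ty)
        dR = np.array([[-s, -c], [c, -s]])
        J = np.zeros((r.size, 3))
        J[:, 0] = (model_pts @ dR.T).ravel()
        J[0::2, 1] = 1.0                         # d pred_x / d tx
        J[1::2, 2] = 1.0                         # d pred_y / d ty
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta += step[0]
        t += step[1:]
    return theta, t
```

The three-dimensional case has the same shape: residuals between projected model contour points and their matched edge candidates, and a Jacobian with respect to the six pose parameters.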
In this embodiment, the detection of the contour of the known three-dimensional target model may be performed on a GPU, in parallel with the extraction of the two-dimensional edges, which accelerates processing.
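The parallel layout described here could be sketched with two CPU threads standing in for the GPU/CPU split; the helper functions are illustrative stand-ins, not the patent's kernels:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def detect_model_contour(model_img):
    # Stand-in for GPU contour detection: gradient-threshold mask
    gy, gx = np.gradient(model_img.astype(float))
    return np.hypot(gx, gy) > 30.0

def extract_frame_edges(frame):
    # Stand-in for Canny edge extraction on the camera frame
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy) > 30.0

def process_frame(model_img, frame):
    """Run model-contour detection concurrently with 2-D edge
    extraction, mirroring the patent's GPU/CPU parallelism."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        contour_f = pool.submit(detect_model_contour, model_img)
        edges_f = pool.submit(extract_frame_edges, frame)
        return contour_f.result(), edges_f.result()
```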
The method provided by this embodiment detects the contour of a known three-dimensional target model, acquires a current frame including a target object, tracks the target object in the current frame in real time, extracts the target object's two-dimensional edge, puts the contour of the target model into correspondence with that edge, and estimates the three-dimensional pose of the target object from the current frame by minimizing the distance between the model contour and the two-dimensional edge. This realizes dynamic projection mapping and improves the real-time performance and robustness of edge detection while maintaining its high speed; the method can be applied to a wide range of target objects and can serve as a realistic image-expression technique.
FIG. 3 is a block diagram of an edge-based dynamic projection mapping object tracking device according to another embodiment of the present application. Referring to fig. 3, the apparatus includes:
a detection module 301 configured to detect a contour of a known three-dimensional object model;
an acquisition module 302 configured to acquire a current frame including a target object;
an estimation module 303 configured to track the target object in the current frame in real time, extract a two-dimensional edge of the target object, put the contour of the target model into correspondence with the two-dimensional edge, and estimate the three-dimensional pose of the target object from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edge.
In this embodiment, optionally, the detection module is specifically configured to:
uniformly generating sampling points on the known three-dimensional target model, detecting the contour by selecting the sampling points whose brightness gradient exceeds a specified threshold, and mapping the detected three-dimensional contour points to two-dimensional contour points in the two-dimensional edge space.
In this embodiment, optionally, the estimation module is specifically configured to:
extracting the two-dimensional edge of the target object using a Canny edge detector, where the extracted edges include spurious additional edges arising from non-uniformity of the target object's surface and from objects around its periphery, and each edge carries candidate points;
projecting the contour of the target model near the two-dimensional edge of the target object in the current frame, selecting from the candidate points of the two-dimensional edge those lying on the same edge as the contour of the target model, and establishing the correspondence between the two-dimensional edge and the contour of the target model.
In this embodiment, optionally, the estimation module is specifically configured to:
treating the edge whose tangential direction is closest to that of the model contour as the same edge, and selecting the candidate points on that edge.
In this embodiment, optionally, the detection module is specifically configured to:
the contour of the known three-dimensional target model is detected on a GPU, in parallel with the extraction of the two-dimensional edges.
The apparatus provided in this embodiment may perform the method provided in any of the above method embodiments, and details of the process are described in the method embodiments and are not described herein again.
The apparatus provided in this embodiment detects the contour of a known three-dimensional target model, acquires a current frame including a target object, tracks the target object in the current frame in real time, extracts the target object's two-dimensional edge, puts the contour of the target model into correspondence with that edge, and estimates the three-dimensional pose of the target object from the current frame by minimizing the distance between the model contour and the two-dimensional edge. This realizes dynamic projection mapping and improves the real-time performance and robustness of edge detection while maintaining its high speed; the apparatus can be applied to a wide range of target objects and can serve as a realistic image-expression technique.
An embodiment of the present application also provides a computing device. Referring to FIG. 4, the computing device comprises a memory 1120, a processor 1110, and a computer program stored in the memory 1120 and executable by the processor 1110; the computer program is stored in a space 1130 for program code in the memory 1120 and, when executed by the processor 1110, implements method steps 1131 for performing any of the methods described above.
An embodiment of the present application also provides a computer-readable storage medium. Referring to FIG. 5, the computer-readable storage medium comprises a storage unit for program code, provided with a program 1131' for performing the method steps described above; the program is executed by a processor.
An embodiment of the present application also provides a computer program product containing instructions which, when run on a computer, cause the computer to carry out the steps of the method described above.
In the above embodiments, the implementation may be realized wholly or partially in software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part as a computer program product comprising one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of skill in the art will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both; to clearly illustrate this interchangeability, the components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by a program, and the program may be stored in a computer-readable storage medium, where the storage medium is a non-transitory medium, such as a random access memory, a read only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape (magnetic tape), a floppy disk (floppy disk), an optical disk (optical disk), and any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. An edge-based dynamic projection mapping object tracking method, comprising:
detecting the contour of a known three-dimensional target model;
acquiring a current frame including a target object;
tracking the target object in the current frame in real time, extracting a two-dimensional edge of the target object, putting the contour of the target model into correspondence with the two-dimensional edge, and estimating the three-dimensional pose of the target object from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edge.
2. The method of claim 1, wherein detecting the contour of the known three-dimensional object model comprises:
uniformly generating sampling points on the known three-dimensional target model, detecting the contour by selecting the sampling points whose brightness gradient exceeds a specified threshold, and mapping the detected three-dimensional contour points to two-dimensional contour points in the two-dimensional edge space.
3. The method of claim 1, wherein extracting the two-dimensional edge of the target object and corresponding the contour of the target model to the two-dimensional edge of the target object comprises:
extracting the two-dimensional edge of the target object using a Canny edge detector, where the extracted edges include spurious additional edges arising from non-uniformity of the target object's surface and from objects around its periphery, and each edge carries candidate points;
projecting the contour of the target model near the two-dimensional edge of the target object in the current frame, selecting from the candidate points of the two-dimensional edge those lying on the same edge as the contour of the target model, and establishing the correspondence between the two-dimensional edge and the contour of the target model.
4. The method of claim 3, wherein selecting the candidate points on the same edge as the contour of the target model comprises:
treating the edge whose tangential direction is closest to that of the model contour as the same edge, and selecting the candidate points on that edge.
5. The method of claim 1, wherein detecting the contour of the known three-dimensional object model comprises:
the contour of the known three-dimensional target model is detected on a GPU, in parallel with the extraction of the two-dimensional edges.
6. An edge-based dynamic projection mapping object tracking apparatus, comprising:
a detection module configured to detect a contour of a known three-dimensional object model;
an acquisition module configured to acquire a current frame including a target object;
an estimation module configured to track the target object in the current frame in real time, extract a two-dimensional edge of the target object, put the contour of the target model into correspondence with the two-dimensional edge, and estimate the three-dimensional pose of the target object from the current frame by minimizing the distance between the contour of the target model and the two-dimensional edge.
7. The apparatus of claim 6, wherein the detection module is specifically configured to:
uniformly generating sampling points on the known three-dimensional target model, detecting the contour by retaining the sampling points whose brightness gradient exceeds a specified threshold, and mapping the detected three-dimensional contour points to two-dimensional contour points in the two-dimensional edge space.
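The gradient-threshold test in claim 7 can be sketched with central differences. This is an illustrative reduction: in the patent the sampling points lie on the rendered three-dimensional model, whereas here they lie on a plain 2D brightness grid, and the threshold value is arbitrary.

```python
def contour_samples(image, threshold):
    """Keep sample positions whose central-difference brightness-gradient
    magnitude exceeds `threshold`. In the patent these samples are taken
    on the rendered 3D model; here a 2D grid stands in for illustration."""
    h, w = len(image), len(image[0])
    kept = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
            gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                kept.append((x, y))
    return kept

# A vertical step edge between columns 1 and 2: every interior pixel
# straddles the step, so all four are kept as contour samples.
img = [[0, 0, 255, 255] for _ in range(4)]
print(contour_samples(img, 50.0))   # → [(1, 1), (2, 1), (1, 2), (2, 2)]
```

Each retained 3D sample would then be projected (as in claim 1's pinhole mapping) to obtain the two-dimensional contour points used for edge correspondence.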
8. The apparatus of claim 6, wherein the estimation module is specifically configured to:
extracting the two-dimensional edges of the target object using a Canny edge detector, wherein the detected edges include, in addition to the edges of the target object itself, spurious edges caused by non-uniformity of the object surface and by foreign objects around the target object, each edge carrying candidate points; and
projecting the contour of the target model into the vicinity of the two-dimensional edges of the target object in the current frame, selecting, from the candidate points on the two-dimensional edges, those lying on the same edge as the contour of the target model, and establishing a correspondence between the two-dimensional edges and the contour of the target model.
9. The apparatus of claim 8, wherein the estimation module is specifically configured to:
treating the edge whose direction is closest to the tangential direction of the contour as the same edge, and selecting the candidate points on that edge.
10. The apparatus of claim 6, wherein the detection module is specifically configured to:
the contour of the known three-dimensional object model is detected on a graphics processing unit (GPU), in parallel with the extraction of the two-dimensional edges.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911108272.2A CN110992400A (en) | 2019-11-13 | 2019-11-13 | Dynamic projection mapping object tracking method and device based on edge |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110992400A true CN110992400A (en) | 2020-04-10 |
Family
ID=70084171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911108272.2A Pending CN110992400A (en) | 2019-11-13 | 2019-11-13 | Dynamic projection mapping object tracking method and device based on edge |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110992400A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103733227A (en) * | 2012-06-14 | 2014-04-16 | Softkinetic Software | Three-dimensional object modelling fitting & tracking |
Non-Patent Citations (1)
Title |
---|
NAOKI HASHIMOTO et al.: "Dynamic Projection Mapping with a Single IR Camera", INTERNATIONAL JOURNAL OF COMPUTER GAMES TECHNOLOGY * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112435294A (en) * | 2020-11-02 | 2021-03-02 | 中国科学院深圳先进技术研究院 | Six-degree-of-freedom attitude tracking method of target object and terminal equipment |
CN112435294B (en) * | 2020-11-02 | 2023-12-08 | 中国科学院深圳先进技术研究院 | Six-degree-of-freedom gesture tracking method of target object and terminal equipment |
CN112614161A (en) * | 2020-12-28 | 2021-04-06 | 之江实验室 | Three-dimensional object tracking method based on edge confidence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6095018B2 (en) | Detection and tracking of moving objects | |
KR101749099B1 (en) | Method and apparatus for tracking object in image data, and storage medium storing the same | |
CN109711304B (en) | Face feature point positioning method and device | |
KR101532864B1 (en) | Planar mapping and tracking for mobile devices | |
US9582707B2 (en) | Head pose estimation using RGBD camera | |
US9165211B2 (en) | Image processing apparatus and method | |
US9727800B2 (en) | Optimized object detection | |
KR20150027291A (en) | Optical flow tracking method and apparatus | |
KR20130025944A (en) | Method, apparatus and computer program product for providing object tracking using template switching and feature adaptation | |
KR101272448B1 (en) | Apparatus and method for detecting region of interest, and the recording media storing the program performing the said method | |
KR102458242B1 (en) | Apparatus and method for processing image pair obtained from a stereo camera | |
EP2915142B1 (en) | Method for initializing and solving the local geometry or surface normals of surfels using images in a parallelizable architecture | |
CN113763466B (en) | Loop detection method and device, electronic equipment and storage medium | |
KR20170015299A (en) | Method and apparatus for object tracking and segmentation via background tracking | |
CN111192308A (en) | Image processing method and device, electronic equipment and computer storage medium | |
CN110738667A (en) | RGB-D SLAM method and system based on dynamic scene | |
CN110992400A (en) | Dynamic projection mapping object tracking method and device based on edge | |
CN108537868A (en) | Information processing equipment and information processing method | |
CN108764343B (en) | Method for positioning tracking target frame in tracking algorithm | |
Gutev et al. | Exploiting depth information to increase object tracking robustness | |
CN111126101A (en) | Method and device for determining key point position, electronic equipment and storage medium | |
Morerio et al. | Optimizing superpixel clustering for real-time egocentric-vision applications | |
CN110991267A (en) | Density map generation method and device based on image or video crowd counting | |
US11798280B2 (en) | Method and system for augmenting point of interest in augmented-reality video | |
CN119006566A (en) | Spacing measurement method, spacing measurement device and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200410 ||