CN114972414A - Method, equipment and storage medium for acquiring 6DoF data of target object - Google Patents


Info

Publication number
CN114972414A
Authority
CN
China
Prior art keywords
target object
determining
acquiring
position information
dimensional model
Prior art date
Legal status
Pending
Application number
CN202111417225.3A
Other languages
Chinese (zh)
Inventor
王润瑀
杜珊
李中亮
彭特
Current Assignee
Guangzhou Guangli Intelligent Technology Co ltd
Original Assignee
Guangzhou Guangli Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Guangli Intelligent Technology Co ltd filed Critical Guangzhou Guangli Intelligent Technology Co ltd
Priority to CN202111417225.3A priority Critical patent/CN114972414A/en
Publication of CN114972414A publication Critical patent/CN114972414A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/269 Analysis of motion using gradient-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a device and a storage medium for acquiring 6DoF data of a target object. The method comprises: displaying an initial frame image that contains a target object in a first pose; displaying a three-dimensional model matched with the target object; acquiring the result of a user's adjustment of the three-dimensional model's pose, and determining the three-axis attitude information of the target object from that result; determining a corner detection area in the initial frame image; performing corner detection in that area to obtain tracking points; determining a first set of position information based on the tracking points and an optical flow method; and acquiring a second set of position information, which is depth distance information. With this method and device no pose calculation or target recognition process is needed; only the target needs to be modeled. The area to be detected can be specified precisely, operation is more convenient, and computational overhead is reduced, so the method has good generality.

Description

Method, equipment and storage medium for acquiring 6DoF data of target object
Technical Field
The present invention relates to the field of target tracking technologies, and in particular to a method, a device and a storage medium for acquiring 6DoF data of a target object.
Background
In the industrial field, industrial robots need to track target objects for positioning operations and non-contact remote control.
Target tracking establishes the positional relationship of a tracked object across a continuous video sequence so as to obtain its complete motion trajectory. Conventionally, a monocular or binocular camera is used to track and detect the target. Target tracking requires the object's position in space; an object has six degrees of freedom in space, and all six must be known to completely determine its position and orientation.
A target tracking method based on a monocular camera requires building a sample data set of the object in order to recognize the target; the three-dimensional attitude of the object in space, expressed as Euler angles, is then obtained by analyzing the object's coordinates in image pixels.
Disclosure of Invention
The invention aims to provide a method, a computer device and a storage medium for acquiring 6DoF data of a target object, able to compute the 6DoF data of an unspecified target in a video picture. No pose calculation or target recognition process is needed; the target only needs to be modeled, which reduces computational overhead and gives the method good generality.
A method of acquiring 6DoF data of a target object, the method comprising: displaying an initial frame image, wherein the initial frame image comprises a target object in a first pose; displaying a three-dimensional model matched with the target object; acquiring the result of a user's adjustment of the three-dimensional model's pose, and determining the three-axis attitude information of the target object according to the adjustment result; determining a corner detection area in the initial frame image; performing corner detection on the corner detection area to obtain tracking points; determining a first set of position information based on the tracking points and an optical flow method; and acquiring a second set of position information, wherein the second set of position information is depth distance information.
A computer device comprising a memory storing a computer program and a processor that implements the steps of the above method of acquiring 6DoF data of a target object when executing the computer program.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the above method of acquiring 6DoF data of a target object.
Compared with the prior art, the invention has the following beneficial effects: it can acquire the 6DoF data of a target object in a video picture without a pose calculation process or a target recognition process; only the target needs to be modeled, the area to be detected can be specified precisely, operation is more convenient, and computational overhead is reduced, so the method has good generality.
Drawings
The invention is further illustrated with reference to the following figures and examples:
fig. 1 is a schematic flowchart of a method for acquiring 6DoF data of a target object according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram showing a three-dimensional model matched with a target object in an embodiment of the present application;
FIG. 3 is a schematic diagram of a detection frame covering a target object according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a computer device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
Detailed Description
To facilitate understanding of the invention by those skilled in the art, a brief description of terms related to embodiments of the invention follows:
(1) Optical flow method: optical flow uses the instantaneous velocity of a moving object's pixels on the imaging plane. From the temporal changes of pixels in an image sequence and the correlation between adjacent frames, it finds the correspondence between the previous frame and the current frame and thereby computes the object's motion between adjacent frames.
(2) Six degrees of freedom (6DoF): an object in space has six degrees of freedom, namely freedom of translation along the three orthogonal axes X, Y and Z, and freedom of rotation about these three axes. All six must be known to completely determine the object's position and orientation in space.
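As a concrete illustration of how these six quantities can be carried together, here is a minimal NumPy sketch; the function names and the X-Y-Z Euler convention are illustrative choices, not something prescribed by the invention:

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix from X-Y-Z Euler angles in radians (R = Rz @ Ry @ Rx)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def make_pose(x, y, z, rx, ry, rz):
    """Pack all six degrees of freedom into one 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = euler_to_matrix(rx, ry, rz)  # three rotational DoF
    T[:3, 3] = [x, y, z]                     # three translational DoF
    return T
```

The 4x4 homogeneous form is one common packaging; any representation carrying three translations and three rotations would do.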
(3) Depth camera: compared with a traditional camera, a depth camera adds a depth measurement function, so the surrounding environment and its changes can be sensed more conveniently and accurately.
(4) Target tracking: (single-)target tracking means predicting the size and position of a target in subsequent frames of a video sequence, given its size and position in the initial frame.
(5) Corner detection: a method of obtaining image features in a computer vision system, widely applied in motion detection, image matching, video tracking, three-dimensional modeling and target recognition; also called feature point detection.
(6) Sub-pixel detection: feature detection refined to accuracy finer than one pixel.
In one embodiment, as shown in fig. 1, a method for acquiring 6DoF data of a target object is provided, which specifically includes the following steps:
step S100, displaying an initial frame image, where the initial frame image includes a target object in a first pose.
The target object is a three-dimensional object to be detected in the image.
Wherein the first pose is a pose of the target object in the initial frame image.
Specifically, a real-time video is captured and an initial frame image is selected from it; the target object to be detected is displayed in the initial frame image, and the first pose of the target object and its region in the camera view are determined.
It should be noted that, because a captured video image generally contains interference from other objects, the target object must be distinguished from the other objects in the video image: the target object to be detected is determined in the initial frame of the video, and its pose is then determined.
Step S200, displaying the three-dimensional model matched with the target object.
The three-dimensional model is generated in advance, either by scanning the target object as a three-dimensional laser point cloud or by modeling it manually in three-dimensional software, so that it can later be matched successfully with the target object in the initial frame image.
In one embodiment, as shown in fig. 2, the operation interface 201 contains two windows: an initial frame image window 202 and a three-dimensional model window 203. After the target object to be detected is determined in the initial frame image window 202, the three-dimensional model corresponding to it is displayed in the three-dimensional model window 203.
Compared with the prior art, in which a large sample database must be built to recognize the target object, only the target object needs to be modeled; this reduces the preparation and running overhead required to track a given object, and thus the computational cost.
Step S300, acquiring the result of the user's adjustment operation on the three-dimensional model's pose, and determining the three-axis attitude information of the target object according to the adjustment result.
The three-axis attitude information is the object's rotational freedom about the three coordinate axes X, Y and Z.
In one embodiment, a user movement operation instruction is acquired; the movement operation instruction performs an adjustment operation on the three-dimensional model, comprising zooming, rotating or dragging, so that the three-dimensional model matches the target object in the initial frame image. The result of the user's adjustment of the model's pose is acquired, and the three-axis attitude information of the target object is determined from it.
Here "matched" means that the contour of the three-dimensional model in its current pose is the same in the image as the contour of the target object in the initial frame image.
The adjustment operation brings the three-dimensional model from its initially stored pose to the same first pose as the target object in the initial frame image. The data produced during this adjustment is recorded and stored, and processed to generate the three-axis attitude information of the target object. Because the three-axis attitude information is acquired this way, no camera calibration or similar operation is needed, and the method is not limited by the accuracy of the system's intrinsic parameters.
In one embodiment, after determining the three-axis pose information of the target object, the method further comprises: generating a detection frame matched with the outline of the three-dimensional model; acquiring a moving instruction of a user on the detection frame, wherein the moving instruction is used for performing moving operation on the detection frame so as to enable the detection frame to cover the target object on the initial frame image; and after the detection frame covers the target object, determining a corner detection area in the initial frame image according to the outline of the detection frame.
In one embodiment, as shown in fig. 3, if the real-time movement of the target object is too large, the detection frame needs to be regenerated. The detection frame provides the execution range of corner detection. The dotted rectangle in fig. 3 is the detection frame, and the solid rectangle is the target object.
In the first detection frame state 301, the detection frame is too small: high-quality corners of the target object cannot be extracted in subsequent operations, which degrades the tracking result of the corner detection.
In the second detection frame state 302, the detection frame is too large: corners of other, non-target objects are easily introduced as interference, which also degrades the tracking result of the corner detection.
In the third detection frame state 303, the detection frame covers exactly the target object, and the corner detection area in the initial frame image can be determined.
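A minimal sketch of how a detection frame might bound the corner-detection area follows; the rectangle representation (x, y, w, h) and the function name are illustrative assumptions, not the patent's interface:

```python
import numpy as np

def crop_detection_frame(image, frame):
    """Clip a user-placed detection frame (x, y, w, h) to the image bounds
    and return the region of interest plus its top-left origin."""
    x, y, w, h = frame
    rows, cols = image.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(cols, x + w), min(rows, y + h)
    return image[y0:y1, x0:x1], (x0, y0)
```

Corner detection would then run only inside the returned region, and detected coordinates would be shifted back by the origin.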
Compared with the prior art, in which the corner detection area is selected by tracing along the edge of the region to be detected, generating a detection frame from the adjustment result makes determining the corner detection area more convenient.
In step S400, a corner detection area in the initial frame image is determined.
In one embodiment, the execution region for corner detection in the initial frame image is determined from the detection frame and the region covered by the target object. Determining the corner detection region in the initial frame image comprises: moving the detection frame to cover the target object, and determining the execution area for corner detection from the covered area. The execution area for corner detection is thus obtained by matching the moved detection frame to the target object.
Compared with the prior art, in which corner detection performed directly on the whole initial frame image introduces much noise, the present method can specify the area to be detected precisely and thereby reduce noise interference.
Step S500, performing corner detection on the corner detection area to obtain tracking points.
Specifically, the corner detection area is determined from the covered area, a corner detection algorithm is run on it, and the detected tracking points are obtained; sub-pixel detection is then performed to further refine the detected tracking points, and the tracking point marks are updated in real time.
In one embodiment, the Shi-Tomasi corner detection algorithm is used on the corner detection area to obtain the detected tracking points; sub-pixel detection then further refines them, and the tracking point marks are updated in real time.
The Shi-Tomasi algorithm filters the image with horizontal and vertical difference operators to obtain the gradients Ix and Iy at each pixel (x, y), forms the 2x2 structure matrix whose four elements are the windowed sums of Ix^2, IxIy, IyIx and Iy^2, and applies Gaussian smoothing to that matrix. The matrix has two eigenvalues; a pixel is considered a corner only if the smaller of the two eigenvalues is greater than a threshold. Corner features are better features than edges for definition and localization; the process of finding image features is called feature extraction, and it determines the final target recognition result. Among all regions of the image, corners lie where the pixel values change greatly under small shifts in every direction.
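To make the criterion above concrete, the following pure-NumPy sketch computes the smaller eigenvalue of the windowed structure matrix and thresholds it. This is illustrative only: the function names are ours, and a real implementation would precompute the gradients once and use Gaussian rather than box weighting (typically via an optimized library routine).

```python
import numpy as np

def shi_tomasi_min_eig(img, y, x, win=3):
    """Smaller eigenvalue of the structure matrix summed over a win x win
    window centred on (y, x)."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)            # vertical / horizontal differences
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    a = np.sum(Ix[sl] ** 2)              # windowed sums of Ix^2, IxIy, Iy^2
    b = np.sum(Ix[sl] * Iy[sl])
    c = np.sum(Iy[sl] ** 2)
    tr, det = a + c, a * c - b * b       # eigenvalues of [[a, b], [b, c]]
    disc = np.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 - disc               # the smaller eigenvalue

def detect_corners(img, threshold, win=3):
    """Keep every pixel whose smaller eigenvalue exceeds the threshold."""
    h = win // 2
    return [(x, y)
            for y in range(h, img.shape[0] - h)
            for x in range(h, img.shape[1] - h)
            if shi_tomasi_min_eig(img, y, x, win) > threshold]
```

A flat region and a straight edge both leave the smaller eigenvalue near zero; only a genuine corner makes both eigenvalues large, which is exactly the "changes greatly in all directions" property described above.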
Step S600, determining a first group of position information based on the tracking points and an optical flow method.
The first set of position information consists of the X-axis and Y-axis coordinate values of the target object in the image frame.
Specifically, after the tracking points are confirmed, the data of all tracking points are used as initial parameters of an optical flow algorithm; the optical flow is computed, the latest positions of all tracking points are updated in real time, the positions and motion trajectories of all tracking points are marked, and the first set of position information is determined.
The invention implements tracking of a user-defined area with an optical flow method, and can track the target object continuously even under jitter, camera movement and similar conditions in the video picture.
In one embodiment, after the tracking points are confirmed, the data of all tracking points are used as the initial parameters (X, Y) of the optical flow algorithm; the optical flow of the sparse feature set is computed with the iterative pyramidal Lucas-Kanade method, the latest positions of all tracking points are updated in real time, their positions and motion trajectories are marked, and the first set of position information is determined.
The Lucas-Kanade method computes the motion of each pixel position between the times of two frames. Assuming the motion vector is constant within a small spatial neighborhood, it estimates the optical flow by weighted least squares. Because it is convenient to apply to a set of points in the input image, the algorithm is widely used for sparse optical flow. Sparse optical flow does not compute flow for every pixel of the image; instead, a set of points is specified for tracking, preferably points with some distinctive feature, so tracking is comparatively stable and reliable. The computational cost of sparse tracking is much smaller than that of dense tracking, and the sparse representation has good applicability and high robustness.
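A single-window, single-level Lucas-Kanade step can be sketched as below. This is our illustration of the least-squares principle only; the production method described above additionally uses an image pyramid and iterates the estimate.

```python
import numpy as np

def lucas_kanade_step(prev, curr, pt, win=7):
    """One Lucas-Kanade least-squares step for the window around pt = (x, y).
    Solves  [Ix Iy] . [dx dy]^T = -It  over the window, assuming the motion
    vector is constant inside it (single level, single iteration)."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)           # spatial gradients of the template
    It = curr - prev                     # temporal difference
    x, y = pt
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy                        # estimated displacement of the point
```

Applied to each tracking point in turn, the returned displacement updates that point's latest position, which is how the positions and trajectories above are maintained frame by frame.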
Step S700, a second group of position information is obtained, where the second group of position information is depth distance information.
Specifically, while the target object is being tracked, a second set of position information, the depth distance information, is acquired according to the tracking result. The depth distance information is the Z-axis coordinate value of the target object in the image frame.
In one embodiment, the second set of position information, the depth distance information, is acquired by capturing an image with a depth camera. The depth camera is a structured light depth camera; it can measure depth values, which are the perpendicular distances of spatial points to the camera plane.
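One plausible way to combine the tracked pixel position with the measured depth into metric coordinates is pinhole back-projection. The intrinsic parameters fx, fy, cx, cy and all names below are illustrative assumptions; the patent does not specify a camera model.

```python
def pixel_depth_to_xyz(u, v, z, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth z through a pinhole model
    into camera-frame coordinates (X, Y, Z)."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

def assemble_6dof(track_uv, depth_z, attitude, intrinsics):
    """Pack both sets of position information and the three-axis attitude
    into one 6DoF record."""
    x, y, z = pixel_depth_to_xyz(track_uv[0], track_uv[1], depth_z, *intrinsics)
    rx, ry, rz = attitude
    return {"x": x, "y": y, "z": z, "rx": rx, "ry": ry, "rz": rz}
```

Here `track_uv` would come from the optical-flow step (first set), `depth_z` from the depth camera (second set), and `attitude` from the model-adjustment step, completing the six degrees of freedom.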
Since the position information of the target object is obtained without any target recognition, the preparation and running overhead needed to track a given object is reduced.
Compared with the prior art, the technical scheme of the invention has at least the following beneficial effects:
(1) 6DoF data can be acquired from the video picture for an unspecified target object;
(2) the three-axis attitude information of the target object is acquired without camera calibration or similar operations, and without being limited by the accuracy of the system's intrinsic parameters;
(3) the position information of the target object is acquired without a target recognition process;
(4) when the tracked target object is replaced, only the new target needs to be modeled, which reduces the preparation and running overhead required to track a given object as well as the computational cost, so the method has good generality;
(5) the area to be detected can be specified precisely.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps need not be performed strictly in the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 1 may comprise sub-steps or stages that are not necessarily performed at the same moment but at different moments, and not necessarily in sequence; they may be performed in turn, or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, a computer device 800 is provided, comprising a memory 801 and a processor 802, the memory 801 having stored therein a computer program, the processor 802 realizing the following steps when executing the computer program:
displaying an initial frame image, wherein the initial frame image comprises a target object in a first posture;
displaying a three-dimensional model matched with the target object;
acquiring an adjustment operation result of the three-dimensional model posture by a user, and determining three-axis posture information of the target object according to the adjustment operation result;
determining a corner detection area in the initial frame image;
carrying out corner detection on the corner detection area to obtain tracking points;
determining a first set of position information based on the tracking points and an optical flow method;
and acquiring a second group of position information, wherein the second group of position information is depth distance information.
In one embodiment, the processor 802 when executing the computer program further performs the steps of:
acquiring a user movement operation instruction, wherein the movement operation instruction is used for carrying out adjustment operation on the three-dimensional model, and the adjustment operation comprises zooming, rotating or dragging so that the three-dimensional model is matched with a target object in an initial frame image; obtaining an adjustment operation result of the three-dimensional model posture by a user, and determining the three-axis posture information of the target object according to the adjustment operation result;
after determining the three-axis attitude information of the target object, generating a detection frame matched with the outline of the three-dimensional model; acquiring a moving instruction of a user on the detection frame, wherein the moving instruction is used for performing a moving operation on the detection frame so that the detection frame covers the target object on the initial frame image; after the detection frame covers the target object, determining a corner detection area in the initial frame image according to the outline of the detection frame;
performing corner detection on the corner detection area using the Shi-Tomasi corner detection algorithm to obtain the detected tracking points;
determining a first group of position information based on the detected and obtained tracking points, wherein the data of the tracking points are used as initial parameters of a Lucas-Kanade optical flow method;
and shooting an image by using a depth camera to acquire a second group of position information.
In an embodiment, a computer-readable storage medium 900 is provided, on which a computer program 901 is stored, the computer program 901 realizing the following steps when executed by a processor:
displaying an initial frame image, wherein the initial frame image comprises a target object in a first posture;
displaying a three-dimensional model matched with the target object;
obtaining an adjustment operation result of the three-dimensional model posture by a user, and determining the three-axis posture information of the target object according to the adjustment operation result;
determining a corner detection area in an initial frame image;
carrying out corner detection on the corner detection area to obtain tracking points;
determining a first set of position information based on the tracking points and an optical flow method;
and acquiring a second group of position information, wherein the second group of position information is depth distance information.
In one embodiment, the computer program 901 further realizes the following steps when executed by a processor:
acquiring a user movement operation instruction, wherein the movement operation instruction is used for carrying out adjustment operation on the three-dimensional model, and the adjustment operation comprises zooming, rotating or dragging so that the three-dimensional model is matched with a target object in an initial frame image; obtaining an adjustment operation result of the three-dimensional model posture by a user, and determining the three-axis posture information of the target object according to the adjustment operation result;
after determining the three-axis attitude information of the target object, generating a detection frame matched with the outline of the three-dimensional model; acquiring a moving instruction of a user on the detection frame, wherein the moving instruction is used for performing a moving operation on the detection frame so that the detection frame covers the target object on the initial frame image; after the detection frame covers the target object, determining a corner detection area in the initial frame image according to the outline of the detection frame;
performing corner detection on the corner detection area using the Shi-Tomasi corner detection algorithm to obtain the detected tracking points;
determining a first group of position information based on the detected and obtained tracking points, wherein the data of the tracking points are used as initial parameters of a Lucas-Kanade optical flow method;
and shooting an image by using a depth camera to acquire a second group of position information.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the associated hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The invention and its embodiments have been described above without limitation; the embodiment shown in the drawings is only one embodiment of the invention, and actual implementations are not limited to it. In conclusion, embodiments and examples that those skilled in the art, informed by the teachings of the invention, design to resemble this technical solution without creative contribution fall within the protection scope of the present invention.

Claims (9)

1. A method of acquiring 6DoF data of a target object, comprising:
displaying an initial frame image, wherein the initial frame image comprises a target object in a first posture;
displaying a three-dimensional model matched with the target object;
obtaining an adjustment operation result of the three-dimensional model posture by a user, and determining the three-axis posture information of the target object according to the adjustment operation result;
determining a corner detection area in the initial frame image;
carrying out corner detection on the corner detection area to obtain tracking points;
determining a first set of position information based on the tracking points and an optical flow method;
and acquiring a second group of position information, wherein the second group of position information is depth distance information.
2. The method for acquiring 6DoF data of a target object according to claim 1, wherein acquiring the result of the user's adjustment operation on the three-dimensional model's pose and determining the three-axis attitude information of the target object according to the adjustment result comprises:
acquiring a user movement operation instruction, wherein the movement operation instruction is used for adjusting the three-dimensional model so that the three-dimensional model is matched with a target object in the initial frame image; the adjustment operation comprises zooming, rotating or dragging;
and obtaining an adjustment operation result of the three-dimensional model posture by the user, and determining the three-axis posture information of the target object according to the adjustment operation result.
3. The method of claim 1, wherein after determining the three-axis pose information of the target object, the method further comprises:
generating a detection frame matched with the outline of the three-dimensional model;
acquiring a movement instruction from the user for the detection frame, wherein the movement instruction is used to move the detection frame so that the detection frame covers the target object in the initial frame image;
and after the detection frame covers the target object, determining the corner detection area in the initial frame image according to the outline of the detection frame.
4. The method for acquiring 6DoF data of a target object according to claim 1, wherein the performing corner detection on the corner detection area to acquire tracking points comprises:
performing corner detection on the corner detection area using the Shi-Tomasi corner detection algorithm to acquire the tracking points.
5. The method of claim 1, wherein the determining the first set of position information based on the tracking points and the optical flow method comprises:
determining the first set of position information by using the detected tracking points as initial parameters of the Lucas-Kanade optical flow method.
6. The method of claim 1, wherein the acquiring the second set of position information comprises:
capturing an image with a depth camera to acquire the second set of position information.
7. The method of claim 6, wherein the depth camera is a structured light depth camera.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
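Claims 4 to 6 name the Shi-Tomasi corner detector, the Lucas-Kanade optical flow method, and a depth camera as the sources of the tracking points and of the two sets of position information, but the claims themselves do not spell out the computations. The following is a minimal single-scale NumPy sketch of those three building blocks, assuming central-difference gradients and a pinhole camera model; the function names and the windowed formulation are illustrative, not from the patent, and a practical implementation would more likely use OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK with image pyramids.

```python
import numpy as np

def _gradients(img):
    """Central-difference image gradients Ix, Iy."""
    Ix = (np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)) / 2.0
    Iy = (np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)) / 2.0
    return Ix, Iy

def shi_tomasi_response(img, x, y, win=2):
    """Shi-Tomasi corner score at pixel (x, y): the smaller eigenvalue of
    the structure tensor summed over a (2*win+1)^2 window. Points whose
    score exceeds a threshold are kept as tracking points (claim 4)."""
    Ix, Iy = _gradients(img)
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    ix, iy = Ix[sl].ravel(), Iy[sl].ravel()
    M = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    return np.linalg.eigvalsh(M)[0]  # smallest eigenvalue

def lucas_kanade_step(img0, img1, x, y, win=3):
    """One Lucas-Kanade step at tracking point (x, y): solve the
    over-determined brightness-constancy system A d = b in the
    least-squares sense for the inter-frame displacement d = (dx, dy)
    of the point (claim 5)."""
    Ix, Iy = _gradients(img0)
    It = img1 - img0
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d

def backproject(u, v, z, fx, fy, cx, cy):
    """Combine a tracked 2D pixel position (first set of position
    information) with a depth measurement z (second set, claim 6) into a
    3D point in camera coordinates, assuming pinhole intrinsics."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```

On a smooth blob translated by half a pixel, lucas_kanade_step recovers a displacement close to (0.5, 0); in the claimed method, the per-frame displacements of the tracking points yield the first set of position information, while the depth camera supplies the z that backproject combines with it.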
CN202111417225.3A 2021-11-25 2021-11-25 Method, equipment and storage medium for acquiring 6DoF data of target object Pending CN114972414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111417225.3A CN114972414A (en) 2021-11-25 2021-11-25 Method, equipment and storage medium for acquiring 6DoF data of target object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111417225.3A CN114972414A (en) 2021-11-25 2021-11-25 Method, equipment and storage medium for acquiring 6DoF data of target object

Publications (1)

Publication Number Publication Date
CN114972414A true CN114972414A (en) 2022-08-30

Family

ID=82975388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111417225.3A Pending CN114972414A (en) 2021-11-25 2021-11-25 Method, equipment and storage medium for acquiring 6DoF data of target object

Country Status (1)

Country Link
CN (1) CN114972414A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051630A (en) * 2023-04-03 2023-05-02 慧医谷中医药科技(天津)股份有限公司 High-frequency 6DoF attitude estimation method and system

Similar Documents

Publication Publication Date Title
CN111052183B (en) Vision inertial odometer using event camera
US10275649B2 (en) Apparatus of recognizing position of mobile robot using direct tracking and method thereof
CN109872372B (en) Global visual positioning method and system for small quadruped robot
US11830216B2 (en) Information processing apparatus, information processing method, and storage medium
US9990726B2 (en) Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
Williams et al. On combining visual SLAM and visual odometry
Saeedi et al. Vision-based 3-D trajectory tracking for unknown environments
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
EP1796039B1 (en) Device and method for image processing
US11108964B2 (en) Information processing apparatus presenting information, information processing method, and storage medium
Zhang et al. Building a partial 3D line-based map using a monocular SLAM
JP2016517981A (en) Method for estimating the angular deviation of a moving element relative to a reference direction
Mueller et al. Continuous extrinsic online calibration for stereo cameras
Schmidt et al. Automatic work objects calibration via a global–local camera system
KR20190034130A (en) Apparatus and method for creating map
JP6922348B2 (en) Information processing equipment, methods, and programs
CN117218210A (en) Binocular active vision semi-dense depth estimation method based on bionic eyes
CN114972414A (en) Method, equipment and storage medium for acquiring 6DoF data of target object
Gramegna et al. Optimization of the POSIT algorithm for indoor autonomous navigation
JP6603993B2 (en) Image processing apparatus, image processing method, image processing system, and program
Faraji et al. Simplified active calibration
JP2018116147A (en) Map creation device, map creation method and map creation computer program
Pietzsch Planar features for visual slam
CN112683273A (en) Adaptive incremental mapping method, system, computer equipment and storage medium
Comport et al. Efficient model-based tracking for robot vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination