CN114693782A - Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system - Google Patents

Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system

Info

Publication number
CN114693782A
CN114693782A (application number CN202011588281.9A)
Authority
CN
China
Prior art keywords
coordinate system
dimensional
model
dimensional images
position information
Prior art date
Legal status
Pending
Application number
CN202011588281.9A
Other languages
Chinese (zh)
Inventor
方俊
牛旭恒
李江亮
Current Assignee
Beijing Whyhow Information Technology Co Ltd
Original Assignee
Beijing Whyhow Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Whyhow Information Technology Co Ltd filed Critical Beijing Whyhow Information Technology Co Ltd
Priority to CN202011588281.9A priority Critical patent/CN114693782A/en
Publication of CN114693782A publication Critical patent/CN114693782A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present invention provide a method, an apparatus, an electronic device, and a readable medium for determining the transformation relationship between the model coordinate system of a three-dimensional scene and a physical coordinate system. At least three two-dimensional images are selected from the plurality of two-dimensional images used to construct the three-dimensional scene model; for each selected image, the position of its corresponding camera is acquired both in the physical coordinate system and in the model coordinate system; and the transformation relationship between the two coordinate systems is determined from this information, so that the three-dimensional scene model is registered with the physical world automatically, conveniently, and quickly.

Description

Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system
Technical Field
The present invention relates to the field of computer vision, and in particular to a method, an apparatus, an electronic device, and a storage medium for determining the conversion relationship between a three-dimensional scene model coordinate system and a physical coordinate system.
Background
With the development of science and technology, three-dimensional scene models are widely used in the field of computer vision. In applications that use computer vision for positioning or navigation, real-time positioning and navigation of a moving object in a three-dimensional scene is often achieved by matching a two-dimensional image captured by the moving object (e.g., a robot with a camera mounted on it) against a pre-established three-dimensional scene model. The three-dimensional scene model has its own coordinate system (hereinafter referred to as the model coordinate system), and the physical world or real scene corresponding to the model also has a coordinate system (hereinafter referred to as the physical coordinate system); the transformation relationship between the two coordinate systems (displacement, scaling, and rotation) must be obtained before a moving object can be positioned or navigated. At present, this conversion relationship is basically determined by manual labeling and mapping, which is tedious and time-consuming.
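The displacement, scaling, and rotation mentioned above combine into a similarity transform: a point's physical coordinates follow from its model coordinates as p_phys = s * R @ p_model + t. A minimal sketch of applying such a transform; the scale, rotation, and translation values below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Illustrative transform parameters (assumed values, not from the patent):
s = 2.0                                  # uniform scale between the systems
R = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])          # rotation (90 degrees about z)
t = np.array([10.0, 5.0, 0.0])           # displacement

def model_to_physical(p_model):
    """Map a model-coordinate point into the physical coordinate system."""
    return s * R @ p_model + t

print(model_to_physical(np.array([1.0, 0.0, 0.0])))   # -> [10.  7.  0.]
```

Determining the conversion relationship amounts to recovering s, R, and t from known point correspondences, which is what the scheme below automates.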
Disclosure of Invention
The present invention provides a method, an electronic device, and a medium for determining the conversion relationship between a three-dimensional scene model coordinate system and a physical coordinate system, with which the conversion relationship can be calculated automatically; that is, automatic registration between the three-dimensional scene model and the physical world is achieved.
The above object is achieved by the following technical solutions:
according to a first aspect of the embodiments of the present invention, there is provided a method for determining a transformation relationship between a model coordinate system and a physical coordinate system of a three-dimensional scene, including: for a three-dimensional scene model constructed based on a plurality of two-dimensional images, selecting at least three two-dimensional images from the plurality of two-dimensional images and acquiring position information of cameras corresponding to the selected two-dimensional images in a physical coordinate system; for each selected two-dimensional image, determining the position information of the corresponding camera in the model coordinate system; and determining the conversion relation between the model coordinate system and the physical coordinate system based on the position information of the camera corresponding to each determined two-dimensional image in the model coordinate system and the position information of the camera corresponding to each determined two-dimensional image in the physical coordinate system.
In some embodiments of the present invention, the determining, for each selected two-dimensional image, the position information of its corresponding camera in the model coordinate system may include: selecting at least four characteristic points from a two-dimensional image, and determining model coordinates of each characteristic point in the model coordinate system and pixel coordinates of the characteristic points in the two-dimensional image; and calculating the position information of the camera corresponding to the two-dimensional image in a model coordinate system based on the model coordinates and the pixel coordinates of the selected characteristic points.
In some embodiments of the invention, the camera positions corresponding to the selected respective two-dimensional images are not collinear. In some embodiments, the camera positions corresponding to the selected respective two-dimensional images are not coplanar. In some embodiments, the camera positions corresponding to the selected respective two-dimensional images are not collinear and not coplanar.
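The collinearity and coplanarity conditions above can be checked from the rank of the centered matrix of candidate camera positions. A sketch of such a check (NumPy assumed; the tolerance value is an assumption):

```python
import numpy as np

def points_rank(points):
    """Rank of the centered point set: 1 means collinear, 2 means coplanar."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)     # remove the centroid
    return np.linalg.matrix_rank(centered, tol=1e-9)

def are_collinear(points):
    return points_rank(points) <= 1

def are_coplanar(points):
    return points_rank(points) <= 2

# Camera positions on a line cannot determine the transformation.
line = [[0, 0, 0], [1, 1, 1], [2, 2, 2]]
general = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(are_collinear(line))      # True
print(are_coplanar(general))    # False
```

Such a check could be used when selecting the two-dimensional images, rejecting a selection whose camera positions are degenerate.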
In some embodiments of the present invention, determining the transformation relationship between the model coordinate system and the physical coordinate system may include determining rotation, translation, and scaling parameters between the model coordinate system and the physical coordinate system based on the determined position information of the camera corresponding to each two-dimensional image in the model coordinate system and the physical coordinate system.
In some embodiments of the present invention, acquiring the position information of the cameras corresponding to the selected two-dimensional images in the physical coordinate system may include: determining this position information from the camera positions in the physical coordinate system of the three-dimensional scene that were marked and stored, for the moment each two-dimensional image was captured, during the construction of the three-dimensional scene model from the plurality of two-dimensional images.
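A minimal sketch of the lookup described above; the image identifiers and position values are hypothetical, and in practice the pose of each camera in the physical coordinate system would have been recorded while the model was being built:

```python
# Hypothetical pose store: image identifier -> camera position (x, y, z)
# in the physical coordinate system, recorded during model construction.
physical_poses = {
    "img_001": (0.0, 0.0, 1.5),
    "img_057": (3.2, 0.5, 1.5),
    "img_112": (6.1, 4.8, 1.5),
}

def physical_positions_for(selected_image_ids):
    """Return the stored physical-coordinate positions of the selected images."""
    return [physical_poses[i] for i in selected_image_ids]

print(physical_positions_for(["img_001", "img_112"]))
# -> [(0.0, 0.0, 1.5), (6.1, 4.8, 1.5)]
```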
According to a second aspect of the embodiments of the present invention, there is also provided a system for determining a transformation relationship between a model coordinate system and a physical coordinate system of a three-dimensional scene, including: a physical coordinate determination module configured, for a three-dimensional scene model constructed based on a plurality of two-dimensional images, to select at least three two-dimensional images from the plurality of two-dimensional images and to acquire the position information of the cameras corresponding to the selected two-dimensional images in a physical coordinate system; a model coordinate determination module configured to determine, for each selected two-dimensional image, the position information of its corresponding camera in a model coordinate system; and a conversion relation determination module configured to determine the conversion relationship between the model coordinate system and the physical coordinate system based on the position information of the camera corresponding to each two-dimensional image in the model coordinate system and in the physical coordinate system.
According to a third aspect of embodiments of the present invention, there is provided an electronic device, comprising a processor and a memory, in which a computer program is stored, which, when being executed by the processor, is operable to carry out the method according to the first aspect of embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, is operative to carry out the method according to the first aspect of embodiments of the present invention.
Compared with the prior art, the method and apparatus for determining the conversion relationship between the model coordinate system and the physical coordinate system use the correspondence between each camera's position in the physical coordinate system and its position in the model coordinate system, for the two-dimensional images used to construct the three-dimensional scene model, to determine the conversion relationship between the two coordinate systems, conveniently and quickly completing the automatic registration of the three-dimensional scene model with the physical world.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1 shows a flow diagram of a method for determining a transformation relationship of a model coordinate system and a physical coordinate system of a three-dimensional scene according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the pose of a camera corresponding to each two-dimensional image marked in an example three-dimensional scene;
FIG. 3 shows a functional block diagram of an apparatus for determining a transformation relationship of a model coordinate system and a physical coordinate system of a three-dimensional scene according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Three-dimensional scene models in the field of computer vision are typically constructed from a plurality of two-dimensional images of a scene. One way to reconstruct a three-dimensional scene model from two-dimensional images is to capture a large number of two-dimensional images of the scene with a dedicated camera capable of accurately calibrating its own position and posture (hereinafter, pose), record the pose of the camera at the time each image is taken, and then spatially order the images based on that pose information and reconstruct the scene. The inventor of the present application also discloses a scene reconstruction method based on two-dimensional images in another patent application, No. 202010758289.9, in which at least one visual marker (e.g., an optical communication device, a two-dimensional code, or a graphic marker) is arranged in advance in the scene to be reconstructed. The position and orientation of the camera relative to the visual marker are determined by capturing and analyzing a scene image containing the marker, and the actual pose of the camera at the time of capture is then determined from the pose of the visual marker in the physical coordinate system. The scene images are spatially ordered based on the camera pose associated with each image, and the three-dimensional scene model is established from the ordered images.
It can be seen that when a three-dimensional scene model is constructed by using two-dimensional images, the pose information of a camera shooting each two-dimensional image in a physical coordinate system of a scene is often required to be determined, and the inventor finds that such information can be used not only for performing spatial sequencing on each two-dimensional image in the construction process of the three-dimensional scene model, but also for registering the three-dimensional scene model with a physical world after the construction of the three-dimensional scene model is completed.
Fig. 1 shows a flow chart of a method for determining a transformation relationship between a three-dimensional scene model coordinate system and a physical coordinate system according to an embodiment of the invention. As shown in fig. 1, in step 101, for a three-dimensional scene model constructed using a plurality of two-dimensional images, at least three two-dimensional images are selected from the plurality of two-dimensional images and position information of cameras in a physical coordinate system corresponding to each of the selected two-dimensional images is acquired. As described above, the pose information of the camera in the physical coordinate system at the time of taking each two-dimensional image has been marked in the process of constructing the three-dimensional scene model by the plurality of two-dimensional images. In one embodiment, for the selected at least three two-dimensional images, it is at least required that their respective corresponding camera positions are not collinear. In yet another embodiment, the respective camera positions corresponding to the selected respective two-dimensional images are neither collinear nor coplanar.
Next, at step 102, for each selected two-dimensional image, the position of its corresponding camera in the model coordinate system is determined. For example, at least four feature points may be selected from the two-dimensional image, and the model coordinates of these feature points in the model coordinate system of the three-dimensional scene and their imaging positions (i.e., pixel coordinates) in the two-dimensional image may be determined. The model coordinates of the feature points can be obtained from the established three-dimensional scene model; the pixel coordinates are, for example, the positions of the feature points in a two-dimensional coordinate system whose origin is the upper-left corner of the image. The pose of the camera corresponding to the two-dimensional image is then calculated from the model coordinates and pixel coordinates of the selected feature points. For example, a PnP (Perspective-n-Point) algorithm can solve for the pose of the camera that captured an image when the spatial positions of several points and their imaging positions in the image are known. Optionally, a BA (Bundle Adjustment) optimization algorithm can also be used to obtain a more accurate camera pose. In fact, for a three-dimensional scene model created from a plurality of two-dimensional images, the position and posture of the camera corresponding to every two-dimensional image in the model coordinate system can be marked using these algorithms, as in the example three-dimensional scene model shown in FIG. 2, where pyramidal markers represent the position and pose of the camera corresponding to each two-dimensional image in the model coordinate system of the scene.
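The PnP step described above is typically delegated to a library routine (for example, OpenCV's solvePnP, given the camera intrinsics). As a dependency-free illustration of the same idea, the sketch below recovers the camera position in the model coordinate system with the classical DLT, which needs six or more 3D-2D correspondences but no intrinsics; the synthetic camera and feature points are assumptions made for the demonstration, not data from the patent:

```python
import numpy as np

def camera_center_dlt(model_pts, pixel_pts):
    """Estimate the camera position in the model coordinate system from
    3D model coordinates and 2D pixel coordinates of feature points,
    using the classical DLT (needs >= 6 points in general position)."""
    A = []
    for (X, Y, Z), (u, v) in zip(model_pts, pixel_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)          # projection matrix, up to scale
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]                        # camera center spans the null space of P
    return C[:3] / C[3]               # dehomogenize

# Synthetic scene (assumed values, for demonstration only).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])       # hypothetical camera intrinsics
C_true = np.array([1.0, 2.0, 10.0])   # camera position in model coordinates
R = np.eye(3)                         # camera axes aligned with model axes
t = -R @ C_true

model_pts = np.array([[0.0, 0.0, 20.0], [5.0, -3.0, 18.0], [-4.0, 6.0, 25.0],
                      [2.0, 2.0, 15.0], [-3.0, -5.0, 30.0], [7.0, 1.0, 22.0]])
proj = (K @ (R @ model_pts.T + t[:, None])).T
pixel_pts = proj[:, :2] / proj[:, 2:]  # project to pixel coordinates

C_est = camera_center_dlt(model_pts, pixel_pts)
print(C_est)                           # recovers approximately [1, 2, 10]
```

A PnP solver with known intrinsics, as the patent assumes, would need only four points; the DLT is used here simply because it is self-contained.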
With continued reference to fig. 1, in step 103, a transformation relationship between a model coordinate system and a physical coordinate system of the three-dimensional scene is determined according to position information of a camera corresponding to each selected two-dimensional image in the model coordinate system and the physical coordinate system. Namely, the conversion relationship between the two coordinate systems is determined by using the determined corresponding position relationship of the camera corresponding to each two-dimensional image in the two coordinate systems.
In three-dimensional space, the transformation between two coordinate systems comprises displacement (translation), scaling, and rotation, each represented by three parameters, for 9 parameters in total. That is, the transformation between two coordinate systems in three-dimensional space can be represented by three displacement parameters, three scaling parameters, and three rotation parameters. In general, to remain consistent with the real scene, the three-dimensional scene model uses the same scale on all three axes, so only 7 parameters are needed (three displacement parameters, one scaling parameter, and three rotation parameters); solving for them requires the corresponding positions of at least three points in the two coordinate systems, and those points must not be collinear (i.e., at least three two-dimensional images are required, whose corresponding camera positions are not collinear). When the scales on the axes may differ, the corresponding positions of at least four points are required, and those points must not be coplanar (i.e., at least four two-dimensional images are required, whose corresponding camera positions are not coplanar). How to solve for the rotation, translation, and scaling parameters of the affine transformation between two coordinate systems from the known coordinate correspondences of at least three points belongs to the prior art and is not described again here.
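The 7-parameter (uniform-scale) case above is commonly solved in closed form with an SVD, in the style of Umeyama's least-squares similarity estimation. The sketch below is a generic implementation under that assumption, not code from the patent; the camera positions in the usage example are hypothetical:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares s, R, t such that dst_i ~= s * R @ src_i + t.
    src, dst: (n, 3) corresponding points, n >= 3, not collinear."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)   # cross-covariance of the point sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                 # guard against a reflection
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Camera positions in the model coordinate system (hypothetical values) ...
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
# ... and the corresponding positions in the physical coordinate system:
dst = 2.0 * src @ R_true.T + np.array([1.0, 2.0, 3.0])

s, R, t = similarity_transform(src, dst)
print(s)   # approximately 2.0
```

With different scales per axis (the 9-parameter case), a general affine fit over at least four non-coplanar correspondences would be used instead.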
Through the scheme of the embodiment, for the three-dimensional scene model constructed by utilizing a plurality of two-dimensional images, the conversion relation between the model coordinate system and the physical coordinate system of the three-dimensional scene model can be simply, quickly and automatically determined, and the automatic registration between the model coordinate system and the physical coordinate system is realized, so that the popularization of positioning and navigation application in a three-dimensional scene is facilitated.
Fig. 3 is a functional block diagram of an apparatus 300 for determining a transformation relationship between a coordinate system of a three-dimensional scene model and a physical coordinate system according to an embodiment of the present invention. Although the block diagrams depict components in a functionally separate manner, such depiction is for illustrative purposes only. The components shown in the figures may be arbitrarily combined or separated into separate software, firmware, and/or hardware components. Moreover, regardless of how such components are combined or divided, they may execute on the same computing device or distributed across multiple computing devices, which may be connected by one or more networks.
As shown in fig. 3, the apparatus 300 includes a physical coordinate determination module 301, a model coordinate determination module 302, and a conversion relation determination module 303. The physical coordinate determination module 301, for a three-dimensional scene model constructed based on a plurality of two-dimensional images, selects at least three two-dimensional images from the plurality of two-dimensional images and acquires the position information of the cameras corresponding to the selected two-dimensional images in the physical coordinate system, as described above in connection with step 101. The model coordinate determination module 302 determines, for each selected two-dimensional image, the position information of its corresponding camera in the model coordinate system, as described above in connection with step 102. The conversion relation determination module 303 determines the conversion relationship between the model coordinate system and the physical coordinate system based on the position information of the camera corresponding to each two-dimensional image in the model coordinate system and in the physical coordinate system, as described above in connection with step 103.
In one embodiment of the invention, the invention may be implemented in the form of a computer program. The computer program may be stored in various computer-readable storage media (e.g., hard disk, optical disk, flash memory, etc.) and, when executed by a processor, can be used to implement the methods of the present invention.
In another embodiment of the invention, the invention may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory in which a computer program is stored which, when being executed by the processor, can be used for carrying out the method of the invention.
References herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment," or the like, in various places throughout this document are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with a feature, structure, or characteristic of one or more other embodiments without limitation, as long as the combination is not logically or operationally infeasible. Expressions herein similar to "according to A", "based on A", "by A", or "using A" are non-exclusive; that is, "according to A" may encompass "according to A only" as well as "according to A and B", unless it is specifically stated or clear from context that the meaning is "according to A only". In the present application, for clarity of explanation, some illustrative operational steps are described in a certain order, but one skilled in the art will appreciate that not every one of these operational steps is essential, and some of them may be omitted or replaced by others. It is also not necessary that these operational steps be performed sequentially in the manner shown; some of them may be performed in a different order or in parallel, as practical, so long as the new implementation is not logically or operationally infeasible.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. Although the present invention has been described in connection with some embodiments, it is not intended to limit the present invention to the embodiments described herein, and various changes and modifications may be made without departing from the scope of the present invention.

Claims (10)

1. A method for determining a transformation relationship of a model coordinate system and a physical coordinate system of a three-dimensional scene, comprising:
for a three-dimensional scene model constructed based on a plurality of two-dimensional images, selecting at least three two-dimensional images from the plurality of two-dimensional images and acquiring position information of cameras corresponding to the selected two-dimensional images in a physical coordinate system;
for each selected two-dimensional image, determining the position information of the corresponding camera in the model coordinate system;
and determining the conversion relation between the model coordinate system and the physical coordinate system based on the position information of the camera corresponding to each determined two-dimensional image under the model coordinate system and the physical coordinate system.
2. The method of claim 1, wherein said determining, for each selected two-dimensional image, position information of its corresponding camera in a model coordinate system comprises:
selecting at least four characteristic points from a two-dimensional image, and determining model coordinates of each characteristic point in the model coordinate system and pixel coordinates of the characteristic points in the two-dimensional image;
and calculating the position information of the camera corresponding to the two-dimensional image in the model coordinate system based on the model coordinates and the pixel coordinates of each selected feature point.
3. The method of claim 1, wherein the camera positions corresponding to the selected respective two-dimensional images are not collinear.
4. The method of claim 1, wherein the camera positions corresponding to the selected respective two-dimensional images are not coplanar.
5. The method of claim 1, wherein the at least three two-dimensional images comprise at least four two-dimensional images.
6. The method of any of claims 1-5, wherein determining the transformation relationship of the model coordinate system to the physical coordinate system comprises:
and determining rotation, translation and scaling parameters between the model coordinate system and the physical coordinate system based on the determined position information of the camera corresponding to each two-dimensional image in the model coordinate system and the physical coordinate system.
7. The method according to any one of claims 1-5, wherein the acquiring position information of the camera in the physical coordinate system corresponding to each of the selected two-dimensional images comprises:
position information of cameras corresponding to the selected two-dimensional images in a physical coordinate system is determined from position information of the cameras in the physical coordinate system of the three-dimensional scene when the respective two-dimensional images are captured, which is previously marked and stored in a process of constructing a three-dimensional scene model based on the plurality of two-dimensional images.
8. A system for determining a transformation relationship of a model coordinate system and a physical coordinate system of a three-dimensional scene, comprising:
the physical coordinate determination module is used for selecting at least three two-dimensional images from the two-dimensional images and acquiring the position information of cameras corresponding to the selected two-dimensional images in a physical coordinate system for a three-dimensional scene model constructed based on the two-dimensional images;
the model coordinate determination module is used for determining the position information of the corresponding camera in a model coordinate system for each selected two-dimensional image;
and the conversion relation determining module is used for determining the conversion relation between the model coordinate system and the physical coordinate system based on the position information of the camera corresponding to each determined two-dimensional image under the model coordinate system and the physical coordinate system.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, can be used for carrying out the method of any one of claims 1-7.
10. An electronic device comprising a processor and a memory, the memory having stored thereon a computer program which, when executed by the processor, is operable to carry out the method of any one of claims 1-7.
CN202011588281.9A 2020-12-29 2020-12-29 Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system Pending CN114693782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011588281.9A CN114693782A (en) 2020-12-29 2020-12-29 Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011588281.9A CN114693782A (en) 2020-12-29 2020-12-29 Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system

Publications (1)

Publication Number Publication Date
CN114693782A true CN114693782A (en) 2022-07-01

Family

ID=82129214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011588281.9A Pending CN114693782A (en) 2020-12-29 2020-12-29 Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system

Country Status (1)

Country Link
CN (1) CN114693782A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228991A (en) * 2023-05-08 2023-06-06 河北光之翼信息技术股份有限公司 Coordinate conversion method and device, electronic equipment and storage medium
CN116228991B (en) * 2023-05-08 2023-07-14 河北光之翼信息技术股份有限公司 Coordinate conversion method and device, electronic equipment and storage medium
CN116821570A (en) * 2023-05-24 2023-09-29 上海勘测设计研究院有限公司 Coefficient estimation method, device, terminal and medium for open channel image forward correction
CN116821570B (en) * 2023-05-24 2024-05-07 上海勘测设计研究院有限公司 Coefficient estimation method, device, terminal and medium for open channel image forward correction

Similar Documents

Publication Publication Date Title
CN108665536B (en) Three-dimensional and live-action data visualization method and device and computer readable storage medium
Teller et al. Calibrated, registered images of an extended urban area
CN109472828B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
US10726580B2 (en) Method and device for calibration
US20160249041A1 (en) Method for 3d scene structure modeling and camera registration from single image
WO2021136386A1 (en) Data processing method, terminal, and server
JP2022539422A (en) METHOD AND APPARATUS FOR CONSTRUCTING SIGNS MAP BASED ON VISUAL SIGNS
JP2014112055A (en) Estimation method for camera attitude and estimation system for camera attitude
CN114693782A (en) Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system
JP2019032218A (en) Location information recording method and device
CN112243518A (en) Method and device for acquiring depth map and computer storage medium
CN115830135A (en) Image processing method and device and electronic equipment
CN111739103A (en) Multi-camera calibration system based on single-point calibration object
CN111563961A (en) Three-dimensional modeling method and related device for transformer substation
WO2020133080A1 (en) Object positioning method and apparatus, computer device, and storage medium
CN110111364A (en) Method for testing motion, device, electronic equipment and storage medium
CN107507133B (en) Real-time image splicing method based on circular tube working robot
CN117196955A (en) Panoramic image stitching method and terminal
JP5988364B2 (en) Image processing apparatus and method
JP2003078811A (en) Method for associating marker coordinate, method and system for acquiring camera parameter and calibration pattern
CN115423863B (en) Camera pose estimation method and device and computer readable storage medium
CN114693749A (en) Method and system for associating different physical coordinate systems
CN116091595A (en) Labeling method and system for 360 panoramic images
CN114184127B (en) Single-camera target-free building global displacement monitoring method
CN108592789A (en) A kind of steel construction factory pre-assembly method based on BIM and machine vision technique

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination