CN110579169A - Stereoscopic vision high-precision measurement method based on cloud computing and storage medium - Google Patents

Stereoscopic vision high-precision measurement method based on cloud computing and storage medium

Info

Publication number
CN110579169A
CN110579169A
Authority
CN
China
Prior art keywords
digital image
point
image data
data
measured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910694797.2A
Other languages
Chinese (zh)
Inventor
王江林
肖浩威
文述生
李宁
闫少霞
周光海
马然
黄劲风
马原
徐丹龙
杨艺
丁永祥
庄所增
潘伟锋
张珑耀
刘国光
郝志刚
陶超
韦锦超
赵瑞东
闫志愿
陈奕均
黄海锋
方东城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou South Surveying & Mapping Instrument Co ltd
Original Assignee
Guangzhou South Surveying & Mapping Instrument Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou South Surveying & Mapping Instrument Co ltd filed Critical Guangzhou South Surveying & Mapping Instrument Co ltd
Priority to CN201910694797.2A priority Critical patent/CN110579169A/en
Publication of CN110579169A publication Critical patent/CN110579169A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/53Determining attitude

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a stereoscopic vision high-precision measurement method based on cloud computing and a storage medium. The method comprises the following steps: shooting, by a camera, at least three digital images containing a point to be measured and acquiring GNSS data of the camera position when the point to be measured is shot; calculating attitude and position data of the digital images based on the digital images and the GNSS data; and obtaining the three-dimensional coordinates of the point to be measured based on the attitude and position data. By acquiring digital images containing the measuring points together with the GNSS data, the invention obtains the attitude and position data of the digital images and then obtains the true three-dimensional spatial coordinates of the measuring points through bundle adjustment based on that data, thereby enlarging the range of use of the camera, keeping the measuring equipment simple, and allowing measuring points to be measured under occlusion.

Description

Stereoscopic vision high-precision measurement method based on cloud computing and storage medium
Technical Field
The invention relates to the field of position measurement, and in particular to a cloud-computing-based stereoscopic vision high-precision measurement method and a storage medium.
Background
Three-dimensional laser scanning technology, also called real-scene replication technology, has provided a brand-new technical means for acquiring spatial three-dimensional information. It uses a non-contact, high-speed laser measurement mode to obtain geometric and image data of complex objects. Post-processing software then processes the collected point cloud and image data, converts them into spatial position coordinates or models in an absolute coordinate system, and outputs them in various formats, meeting the needs of spatial information databases and different applications.
Existing three-dimensional laser point cloud scanning of spatial three-dimensional points is not see-through: a large number of noise points easily appear under occlusion, and the volume of point cloud data to be processed is huge. On the one hand, a heavy computer must be carried and connected to the very heavy three-dimensional laser scanner; on the other hand, high demands are placed on computer performance.
Disclosure of Invention
In view of the above technical problems, an object of the present invention is to provide a cloud-computing-based stereoscopic vision high-precision measurement method and storage medium that solve the problems in the prior art that the equipment for measuring three-dimensional laser point coordinates is heavy and that a large number of noise points are likely to appear when the measurement point is occluded.
The invention adopts the technical scheme that:
A stereoscopic vision high-precision measurement method based on cloud computing comprises the following steps:
shooting, by a camera, at least three digital images containing a point to be measured, and acquiring GNSS data of the camera position when the point to be measured is shot;
calculating attitude and position data of the digital images based on the digital images and the GNSS data; and
obtaining the three-dimensional coordinates of the point to be measured based on the attitude and position data.
Preferably, the step of obtaining the three-dimensional coordinates of the point to be measured based on the attitude and position data specifically comprises:
calculating the three-dimensional coordinates of the point to be measured by an angular forward intersection method based on the attitude and position data.
Preferably, the step of calculating the attitude and position data of the digital images based on the digital images and the GNSS data comprises:
matching SIFT descriptors between any two digital images in the image group by the minimum-Euclidean-distance principle;
filtering out mismatched point pairs with the AC-RANSAC algorithm to obtain high-quality matching point pairs; and
establishing an error equation for the high-quality matching point pairs by bundle adjustment, and solving the error equation to obtain the poses of any two digital images and the coordinates of the feature points in the camera coordinate system.
Preferably, the method further comprises the step of: selecting the poses of the two digital images with the longest baseline and the coordinates of the feature points in the camera coordinate system.
A stereoscopic vision high-precision measurement method based on cloud computing comprises the following steps:
shooting, by a camera, at least two digital images containing a point to be measured;
calculating attitude and position data of the digital images based on the digital images; and
calculating the relative three-dimensional coordinates of the point to be measured with respect to the camera based on the attitude and position data.
A computer storage medium has a computer program stored thereon which, when executed by a processor, implements the cloud-computing-based stereoscopic vision high-precision measurement method.
Compared with the prior art, the invention has the following beneficial effects:
By shooting digital images containing the measuring points and storing real-time GNSS data when the mobile terminal captures the images, the invention establishes a stereoscopic-vision optical perspective model linking the photogrammetric coordinate system and the object-space coordinate system. That is, the attitude and position data of the digital images are obtained from the images containing the measuring points together with the GNSS data, and the true three-dimensional spatial coordinates of the measuring points are obtained through bundle adjustment based on that data. This enlarges the range of use of the camera, keeps the measuring equipment simple, allows measuring points to be measured under occlusion, and meets the requirement of indoor high-precision measurement with a relative precision of 0.5 mm per meter.
Drawings
FIG. 1 is a schematic flow chart of a stereoscopic vision high-precision measurement method based on cloud computing according to the present invention;
FIG. 2 is a schematic diagram of a method for angular forward intersection in an embodiment of the cloud-computing-based stereoscopic vision high-precision measurement method of the present invention;
FIG. 3 is a flow diagram of the cloud-computing-based stereoscopic vision high-precision measurement method in practical application.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description. It should be noted that, without conflict, the embodiments or technical features described below can be combined to form new embodiments.
Embodiment:
Referring to FIGS. 1-3, and in particular FIG. 1, a cloud-computing-based stereoscopic vision high-precision measurement method includes:
Step S100: shooting, by a camera, at least three digital images containing a point to be measured, and acquiring GNSS data of the camera position when the point is shot. The digital image data are stored in binary form, and the stored digital images may be compressed or uncompressed, using formats such as JPEG, PNG or GIF. In practical application, when a user transmits the digital images and their GNSS measurement positions to the server, the server algorithm computes the attitude and position of each digital image and transmits them back to the mobile terminal, and the mobile terminal can then, using the corresponding attitude files, measure the two-dimensional spatial point projections (i.e., the pixel positions of points in the images) stored in two digital images.
Because the computing capability of current mobile devices is insufficient, this computation is handed over to the server side. As process and chip technology make breakthroughs in the future, when the computing capability is sufficient to support feature point extraction and matching and global bundle adjustment, the computing task can be carried out entirely offline.
Step S200: calculating the attitude and position data of the digital images based on the digital images and the GNSS data. The attitude and position data are the poses of the digital images and the coordinates of the feature points, specifically comprising a rotation matrix and a position vector.
Further, the step of calculating the attitude and position data of the digital images based on the digital images and the GNSS data specifically comprises:
matching SIFT descriptors between any two digital images in the image group by the minimum-Euclidean-distance principle;
filtering out mismatched point pairs with the AC-RANSAC algorithm to obtain high-quality matching point pairs; and
establishing an error equation for the high-quality matching point pairs by bundle adjustment, and solving the error equation to obtain the poses of any two digital images and the coordinates of the feature points in the camera coordinate system.
Specifically, the method matches, by the minimum-Euclidean-distance principle, the SIFT descriptors (for SIFT descriptor generation see David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", January 5, 2004) between any two digital images in the image group, filters out mismatched point pairs with the AC-RANSAC algorithm, and adds the remaining high-quality matching point pairs to the bundle adjustment to establish the error equation, obtaining the poses of the digital images and the feature point coordinates in the photogrammetric coordinate system.
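As an illustration only, the following sketch shows this matching-and-filtering step with OpenCV; it substitutes OpenCV's fundamental-matrix RANSAC for the AC-RANSAC filter named above, and the image file names are hypothetical:

```python
import cv2
import numpy as np

# Load two digital images of the group (hypothetical file names).
img1 = cv2.imread("view_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match SIFT descriptors by minimum Euclidean (L2) distance.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Filter mismatches with a RANSAC-estimated fundamental matrix
# (stand-in for the AC-RANSAC filter described in the text).
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
good1 = pts1[inlier_mask.ravel() == 1]
good2 = pts2[inlier_mask.ravel() == 1]
# good1/good2 are the high-quality matching point pairs fed to bundle adjustment.
```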
Step S300: obtaining the three-dimensional coordinates of the point to be measured based on the attitude and position data.
Further, the step of obtaining the three-dimensional coordinates of the point to be measured based on the attitude and position data specifically comprises: calculating the three-dimensional coordinates of the point to be measured by an angular forward intersection method based on the attitude and position data.
The spatial coordinates of the measuring point are obtained by the angular forward intersection method. In this process, even if the measurement target is occluded in one or two images, the corresponding coordinate result can still be obtained as long as the user can judge the approximate projected position of the measurement point from experience. Forward intersection means that, given the coordinates of two points A and B, the coordinates of an unknown point P are calculated by observing only the angles ∠A and ∠B. Specifically, referring to FIG. 2, a digital image is represented as a line segment in two-dimensional space, and the line segment and the projection center form the model shown as the blue triangle in the figure; the projection of a two-dimensional spatial point is a point on a one-dimensional line, identified by its pixel number. Starting from the left end of the segment, the central projections of the same two-dimensional spatial point (Xt, Yt) on two digital images (assumed to be digital image (X1, Y1) and digital image (X2, Y2)) are selected, giving the intersection of ray b with digital image (X1, Y1) and the intersection of ray c with digital image (X2, Y2). Because the number of digital images used by the client (i.e., the mobile terminal) is uncertain and the positions of the images are distributed differently, in order to obtain the measuring point coordinates with the highest precision, the client algorithm selects, from the returned spatial three-dimensional coordinates of the digital images, the two images with the longest baseline, following the principle that forward intersection is then optimal, so that the user can select and calculate the target point.
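A minimal numerical sketch of the idea behind forward intersection, intersecting two rays from known camera centers in a least-squares sense; the centers and ray directions below are hypothetical stand-ins for the values that would come from the pose file and the selected pixel positions:

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Least-squares intersection of two rays c + t*d: find the point that
    minimizes the summed squared distance to both rays."""
    def projector(d):
        d = d / np.linalg.norm(d)
        return np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
    A = projector(d1) + projector(d2)
    b = projector(d1) @ c1 + projector(d2) @ c2
    return np.linalg.solve(A, b)

# Hypothetical camera centers and ray directions toward the same target point.
c1, d1 = np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.1, 1.0])
c2, d2 = np.array([1.0, 0.0, 0.0]), np.array([-0.4, 0.1, 1.0])
print(intersect_rays(c1, d1, c2, d2))
```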
Preferably, the method further comprises the following step: selecting the poses of the two digital images with the longest baseline and the coordinates of the feature points in the camera coordinate system.
In practical applications, the device that acquires the digital images and the camera GNSS data during shooting may keep the GNSS receiver and the camera as separate devices, or combine them into one device (for example a mobile phone; some professional cameras have built-in GNSS receivers). The GNSS position data must be the position at which a digital image was taken, but not every digital image needs corresponding GNSS data. When actual coordinates are required, at least three sets of GNSS data are needed as a basis for computing the conversion parameters between the relative three-dimensional space and the absolute three-dimensional space. When converting between two different rectangular coordinate systems in three-dimensional space, a seven-parameter model (a system of equations) is usually used, which contains seven unknown parameters, as follows:
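The model itself is not reproduced in this text; a standard Bursa-Wolf form consistent with the parameters named in the next paragraph (rotation matrix R built from the 3 rotation factors, displacements Xn, Yn, Zn, scale factor m) is, as a reconstruction:

$$
\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix}
=
\begin{bmatrix} X_n \\ Y_n \\ Z_n \end{bmatrix}
+ (1+m)\, R
\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}
$$

where (X1, Y1, Z1) and (X2, Y2, Z2) are the coordinates of the same point in the source and target coordinate systems.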
The transformation between two coordinate systems with different references has 7 parameters: 3 rotation factors (forming the matrix R), 3 displacement factors (Xn, Yn, Zn) and 1 scale factor m. If the coordinates of the same points in both coordinate systems are known, the corresponding equations can be listed; three pairs of common points give 9 equations, from which the 7 unknowns are solved. When only a relative position relationship is required, GNSS data is not needed: the true relative position coordinates can be converted with a scale parameter of the camera's projection device, the scale parameter being the ratio of a pixel of the digital image to the actual photosensitive element (negative), i.e., how long a distance in reality the width of one pixel represents.
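As an illustration of how the 7 unknowns can be recovered from three or more common points, the following is a minimal least-squares sketch; the synthetic point coordinates and the Euler-angle parameterization of R are assumptions of the sketch, not taken from the patent:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(p, src, dst):
    # p = (rx, ry, rz, Xn, Yn, Zn, m): 3 rotations, 3 displacements, 1 scale
    R = Rotation.from_euler("xyz", p[:3]).as_matrix()
    pred = p[3:6] + (1.0 + p[6]) * (src @ R.T)
    return (pred - dst).ravel()

# Three synthetic common points known in both coordinate systems.
src = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
true_R = Rotation.from_euler("xyz", [0.001, -0.002, 0.003]).as_matrix()
dst = np.array([100.0, 200.0, 50.0]) + 1.0005 * (src @ true_R.T)

# 3 common points -> 9 equations for the 7 unknowns, solved by least squares.
sol = least_squares(residuals, x0=np.zeros(7), args=(src, dst))
rx, ry, rz, Xn, Yn, Zn, m = sol.x
```

With exactly three common points this gives the 9 equations for 7 unknowns mentioned above; additional points simply add residuals to the same system.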
Referring to FIG. 3, the basic flow of the present application comprises: collecting data, uploading it to the server for processing and solution, downloading the related files, and measuring.
The attitude and position data of the digital images can be computed based on the digital image data and transmitted to the mobile terminal by the server, mainly because modern communication technology has developed to a relatively satisfactory state, whereas the performance of mobile terminal devices is not yet sufficient to support this part of the workflow under large-scale computation; once device performance develops further, the method can also be implemented entirely offline on the mobile terminal. Specifically, the client uploads the collected digital images and GNSS data to the server as a binary stream over the UTP protocol; after the server has parsed all the data, it starts the background program, and after the solution is complete, the position and pose file of the photos is returned to the mobile terminal through UTP. The acquisition process does not depend on particular equipment; even an ordinary smartphone can complete it. In the processing and solution phase on the server, either the cloud computing mode or the offline mode can be adopted. In the measurement phase, the measuring point coordinates are solved using the collinearity equations; the measuring point does not necessarily appear in the captured digital images, it may be an occluded part within the optical perspective model, selected to satisfy the collinearity equations on the basis of the user's visual experience. The specific steps are as follows:
The client selects the same corresponding target point on the group of digital images it has shot, inputs its coordinates (x1, y1) and (x2, y2) on at least two images (taking the upper-left corner as the origin), and establishes the collinearity equation set:
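The equation set itself is not reproduced in this text; the following is a reconstruction in the standard collinearity form, consistent with the symbol explanations in the next paragraph (x0, y0: principal point offset; f: focal length; R: rotation matrix of the i-th image; Xi, Yi, Zi: position of the i-th image; Xt, Yt, Zt: target point; λ: an unobserved scale factor):

$$
\begin{bmatrix} x - x_0 \\ y - y_0 \\ -f \end{bmatrix}
= \lambda \, R
\begin{bmatrix} X_t - X_i \\ Y_t - Y_i \\ Z_t - Z_i \end{bmatrix}
$$

Dividing the first and second rows by the third eliminates λ and yields the two usable equations per image described below.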
In the above formula, since the -f term is not an observed value, the third-row expression serves as a proportional term: the first and second rows are used as numerators and the third row as the denominator. In practice, therefore, only two equations are available when a projection point is selected on one image, so image coordinates on at least two images are required to solve the three unknowns Xt, Yt and Zt of the target point, giving four equations. Here x0 and y0 denote the principal point offset of the digital image (the foot of the perpendicular from the projection center to the photo is not necessarily the exact center of the photo, so the pixel offsets between that point and the exact center are x0 and y0); this is an intrinsic parameter of the camera and can be calculated with Zhang Zhengyou's calibration method. f denotes the focal length at the time the digital image was taken; R denotes the rotation matrix of the digital image, which is also contained in the pose file returned by the server; Xt, Yt and Zt are the unknowns of the target measuring point position to be solved; and Xi, Yi, Zi denote the position of the digital image (here the i-th one) in the relative three-dimensional coordinate system, computed by the server and returned to the client together with R. Therefore, the client only needs to process inputs of at least two (x, y) pairs to solve the three unknowns Xt, Yt and Zt (3 unknowns solved from 4 equations).
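A minimal client-side sketch of this solve, assuming the pose file supplies a rotation matrix R and position (Xi, Yi, Zi) per image, that the principal point offset x0, y0 and focal length f are known from calibration, and that observations arrive as (x, y, R, C) tuples; this is an illustration, not the patent's implementation:

```python
import numpy as np

def solve_target(observations, f, x0=0.0, y0=0.0):
    """Solve Xt, Yt, Zt from >= 2 image observations by linear least squares.

    observations: list of (x, y, R, C), where (x, y) is the selected image
    coordinate of the target, R the 3x3 rotation matrix of that image and
    C = (Xi, Yi, Zi) its position from the pose file.
    """
    A, b = [], []
    for x, y, R, C in observations:
        R = np.asarray(R, dtype=float)
        C = np.asarray(C, dtype=float)
        r1, r2, r3 = R  # rows of the rotation matrix
        # Collinearity rearranged into two equations linear in (Xt, Yt, Zt):
        # (x - x0) * r3.(P - C) + f * r1.(P - C) = 0, and likewise for y.
        for coeff in ((x - x0) * r3 + f * r1, (y - y0) * r3 + f * r2):
            A.append(coeff)
            b.append(coeff @ C)
    P, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return P  # (Xt, Yt, Zt)
```

With two images this yields the 4 equations in the 3 unknowns described above; additional images simply add rows to the same least-squares system.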
For example, suppose a target measuring point (a clock on a wall) lies on the other side of the wall. Several digital images of the wall are shot according to the workflow of this scheme, and after processing, the projected position of the clock behind the wall is selected, so that the coordinates of the clock behind the wall can be obtained. For optimization, the images with the farthest baseline distance are used as input data, which improves the reliability of the measurement result.
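A minimal sketch of that farthest-baseline selection, assuming the returned image positions are available as (image_id, (Xi, Yi, Zi)) pairs (the data layout is an assumption of this sketch):

```python
import numpy as np
from itertools import combinations

def longest_baseline_pair(image_positions):
    """Return the two image ids whose camera positions are farthest apart."""
    best_pair, best_dist = None, -1.0
    for (id_a, pos_a), (id_b, pos_b) in combinations(image_positions, 2):
        d = float(np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b)))
        if d > best_dist:
            best_pair, best_dist = (id_a, id_b), d
    return best_pair, best_dist

# Example with hypothetical positions returned by the server.
pair, baseline = longest_baseline_pair([
    ("IMG_1", (0.0, 0.0, 0.0)),
    ("IMG_2", (0.8, 0.1, 0.0)),
    ("IMG_3", (2.5, 0.3, 0.1)),
])
```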
The invention further discloses a cloud-computing-based stereoscopic vision high-precision measurement method, comprising the following steps:
shooting, by a camera, at least two digital images containing a point to be measured;
calculating attitude and position data of the digital images based on the digital images; and
calculating the relative three-dimensional coordinates of the point to be measured with respect to the camera based on the attitude and position data.
According to the invention, if the actual coordinates of points need to be measured, GNSS data is required and at least three digital images must carry GNSS data; if only the relative coordinates of an area are needed, GNSS data is not required, but at least two digital images are. During measurement, the desired point does not necessarily have to appear in the stereoscopic image pair; perspective positioning can be achieved as long as the position of the desired point within the spatial projection relationship of the corresponding image pair is known. When only a relative position relationship is required, GNSS data is not needed: the true relative position coordinates can be converted using the scale parameter of the camera's projection device, the scale parameter being the ratio of a pixel of the digital image to the actual photosensitive element (negative), i.e., how long a distance in reality the width of one pixel represents. As long as this scale parameter is known, a ray distance expressed in pixels can be converted into an actual distance, and the computed coordinates are converted from pixel coordinates into actual coordinates; these coordinates are not constrained by control points and are therefore relative coordinates. The relative coordinates are sufficient when only a distance needs to be measured.
The invention combines cloud computing with portable mobile equipment, which reduces the application cost of photogrammetry, allows measurement under occlusion, and achieves a relative precision of 0.5 mm per meter, meeting the requirement of indoor high-precision measurement.
The invention also provides a computer storage medium on which a computer program is stored. If the method of the invention is implemented in the form of software functional units and sold or used as a stand-alone product, it can be stored in such a medium. Based on this understanding, all or part of the flow of the method according to the embodiments of the present invention may be implemented by a computer program, which may be stored in a computer storage medium and which, when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content included in the computer storage medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer storage media do not include electrical carrier signals and telecommunications signals.
Various other modifications and changes may be made by those skilled in the art based on the above technical solutions and concepts, and all such modifications and changes should fall within the scope of the claims of the present invention.

Claims (6)

1. A stereoscopic vision high-precision measurement method based on cloud computing, characterized by comprising the following steps:
shooting, by a camera, at least three digital images containing a point to be measured, and acquiring GNSS data of the camera position when the point to be measured is shot;
calculating attitude and position data of the digital images based on the digital images and the GNSS data; and
obtaining the three-dimensional coordinates of the point to be measured based on the attitude and position data.
2. The cloud-computing-based stereoscopic vision high-precision measurement method according to claim 1, wherein the step of obtaining the three-dimensional coordinates of the point to be measured based on the attitude and position data specifically comprises:
calculating the three-dimensional coordinates of the point to be measured by an angular forward intersection method based on the attitude and position data.
3. The cloud-computing-based stereoscopic vision high-precision measurement method according to claim 1, wherein the step of calculating the attitude and position data of the digital images based on the digital images and the GNSS data specifically comprises:
matching SIFT descriptors between any two digital images in the image group by the minimum-Euclidean-distance principle;
filtering out mismatched point pairs with the AC-RANSAC algorithm to obtain high-quality matching point pairs; and
establishing an error equation for the high-quality matching point pairs by bundle adjustment, and solving the error equation to obtain the poses of any two digital images and the coordinates of the feature points in the camera coordinate system.
4. The cloud-computing-based stereoscopic vision high-precision measurement method according to claim 3, further comprising: selecting the poses of the two digital images with the longest baseline and the coordinates of the feature points in the camera coordinate system.
5. A stereoscopic vision high-precision measurement method based on cloud computing, characterized by comprising the following steps:
shooting, by a camera, at least two digital images containing a point to be measured;
calculating attitude and position data of the digital images based on the digital images; and
calculating the relative three-dimensional coordinates of the point to be measured with respect to the camera based on the attitude and position data.
6. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the cloud-computing-based stereoscopic vision high-precision measurement method according to any one of claims 1 to 5.
CN201910694797.2A 2019-07-30 2019-07-30 Stereoscopic vision high-precision measurement method based on cloud computing and storage medium Pending CN110579169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910694797.2A CN110579169A (en) 2019-07-30 2019-07-30 Stereoscopic vision high-precision measurement method based on cloud computing and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910694797.2A CN110579169A (en) 2019-07-30 2019-07-30 Stereoscopic vision high-precision measurement method based on cloud computing and storage medium

Publications (1)

Publication Number Publication Date
CN110579169A true CN110579169A (en) 2019-12-17

Family

ID=68810636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910694797.2A Pending CN110579169A (en) 2019-07-30 2019-07-30 Stereoscopic vision high-precision measurement method based on cloud computing and storage medium

Country Status (1)

Country Link
CN (1) CN110579169A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2322901A2 (en) * 2009-11-16 2011-05-18 Riegl Laser Measurement Systems GmbH Method for improving position and orientation measurement data
CN101750015A (en) * 2009-12-11 2010-06-23 东南大学 Gravel pit earth volume measuring method based on digital image technology
JP2013185851A (en) * 2012-03-06 2013-09-19 Sumitomo Mitsui Construction Co Ltd Positioning apparatus, positioning system including the same, and positioning method
US20140118536A1 (en) * 2012-11-01 2014-05-01 Novatel Inc. Visual positioning system
CN104330022A (en) * 2013-07-22 2015-02-04 赫克斯冈技术中心 Method and system for volume determination using a structure from motion algorithm
CN105300362A (en) * 2015-11-13 2016-02-03 上海华测导航技术股份有限公司 Photogrammetry method used for RTK receivers
CN205383995U (en) * 2015-12-31 2016-07-13 天津市嘉尔屹科技发展有限公司 Cloud technology based measuring system
CN106123798A (en) * 2016-03-31 2016-11-16 北京北科天绘科技有限公司 A kind of digital photography laser scanning device
CN107481279A (en) * 2017-05-18 2017-12-15 华中科技大学 A kind of monocular video depth map computational methods
CN108881667A (en) * 2018-08-01 2018-11-23 广州南方卫星导航仪器有限公司 A kind of multi-angle of view image acquisition device
CN109949232A (en) * 2019-02-12 2019-06-28 广州南方卫星导航仪器有限公司 Measurement method, system, electronic equipment and medium of the image in conjunction with RTK

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
周泩朴; 耿国华; 李康; 王飘: "A Multi-View Geometry 3D Reconstruction Method Based on the AKAZE Algorithm", Computer Science *
唐秋虎: "Research on 3D Point Cloud Data Acquisition Technology Based on Image Sequences", China Master's Theses Full-text Database, Information Science and Technology *
朱凌: "Fundamentals of Photogrammetry", 30 June 2018 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112229323A (en) * 2020-09-29 2021-01-15 华南农业大学 Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method

Similar Documents

Publication Publication Date Title
CN112102458B (en) Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance
CN108665536B (en) Three-dimensional and live-action data visualization method and device and computer readable storage medium
CN109816703B (en) Point cloud registration method based on camera calibration and ICP algorithm
CN107564069B (en) Method and device for determining calibration parameters and computer readable storage medium
AU2011312140B2 (en) Rapid 3D modeling
CN112949478B (en) Target detection method based on tripod head camera
CN105654547B (en) Three-dimensional rebuilding method
CN110517209B (en) Data processing method, device, system and computer readable storage medium
CN106548489A (en) The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus
CN104677277B (en) A kind of method and system for measuring object geometric attribute or distance
JP2019190974A (en) Calibration device, calibration method and program
CN111862180A (en) Camera group pose acquisition method and device, storage medium and electronic equipment
CN110009687A (en) Color three dimension imaging system and its scaling method based on three cameras
EP4411627A1 (en) Photogrammetry method, apparatus and device, and storage medium
CN113160328A (en) External reference calibration method, system, robot and storage medium
CN110738703A (en) Positioning method and device, terminal and storage medium
CN117456114B (en) Multi-view-based three-dimensional image reconstruction method and system
CN115345942A (en) Space calibration method and device, computer equipment and storage medium
CN116091724A (en) Building digital twin modeling method
CN110579169A (en) Stereoscopic vision high-precision measurement method based on cloud computing and storage medium
GB2569609A (en) Method and device for digital 3D reconstruction
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method
CN113034615B (en) Equipment calibration method and related device for multi-source data fusion
CN112819900B (en) Method for calibrating internal azimuth, relative orientation and distortion coefficient of intelligent stereography
CN115018922A (en) Distortion parameter calibration method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191217