CN113884080A - Method and equipment for determining three-dimensional coordinates of positioning point and photoelectric measuring instrument - Google Patents


Info

Publication number
CN113884080A
CN113884080A (application number CN202110914073.1A)
Authority
CN
China
Prior art keywords: measuring instrument, positioning, dimensional, point, points
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202110914073.1A
Other languages
Chinese (zh)
Inventor
周恺弟
王学运
潘成伟
Current Assignee (the listed assignees may be inaccurate)
Beijing Motu Technology Co ltd
Original Assignee
Beijing Motu Technology Co ltd
Priority date (the priority date is an assumption and is not a legal conclusion)
Filing date
Publication date
Application filed by Beijing Motu Technology Co ltd filed Critical Beijing Motu Technology Co ltd
Priority claimed from application CN202110914073.1A
Publication of CN113884080A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00: Surveying instruments or accessories not provided for in groups G01C1/00-G01C13/00
    • G01C15/02: Means for marking measuring points
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01B11/005: Coordinate measuring machines characterised by the use of optical techniques
    • G01B11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/026: Measuring length, width or thickness by measuring the distance between sensor and object
    • G01B11/26: Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes

Abstract

The invention relates to a method and apparatus for determining the three-dimensional coordinates of a positioning point, and to a photoelectric measuring instrument. The method comprises the following steps: providing a camera, a photoelectric measuring instrument, and a driving device for controlling the movement of the photoelectric measuring instrument; acquiring, with the camera, a two-dimensional image of the positioning point at a first position, and determining the centroid position corresponding to the positioning point in the two-dimensional image; projecting a laser point at the first position with the photoelectric measuring instrument; driving the photoelectric measuring instrument with the driving device until the laser point coincides with the centroid position corresponding to the positioning point in the two-dimensional image acquired at the first position; and recording the three-dimensional coordinates measured by the photoelectric measuring instrument for the laser point at the first position as the three-dimensional coordinates of the positioning point.

Description

Method and equipment for determining three-dimensional coordinates of positioning point and photoelectric measuring instrument
Technical Field
The invention relates to the field of positioning and tracking, and in particular to a method and device for arranging positioning points in a spatial environment, and to a method and apparatus for determining the three-dimensional coordinates of positioning points in a spatial environment.
Background
In a visual positioning system, especially an inside-out positioning system, a camera is typically mounted on the object to be positioned and markers are fixed in the environment. Once the camera observes the markers, its own coordinates can be computed from the known position coordinates of the markers. Therefore, in any marker-based inside-out positioning system, the first problem to be solved is calibrating the three-dimensional coordinate position of each marker; only when the marker positions are known can the camera be localized. This process may be referred to as calibration, three-dimensional reconstruction, or modeling.
Conventional modeling uses SfM (Structure from Motion): roughly, the markers are first deployed, and then pictures or video are captured from which the position of each marker is computed. Specifically, the method comprises the following steps:
(1) first, deploy the markers in the environment;
(2) take pictures or videos containing the markers from different positions and angles with a camera whose intrinsic parameters are known (calibrated);
(3) extract the image coordinates of the marker points in each picture;
(4) find the same marker points across different pictures with an algorithm, i.e., associate the marker points between pictures;
(5) solve for marker coordinates that satisfy the multi-view constraint equations as closely as possible, so that the reprojection error is minimized when the final marker points are reprojected onto the pictures taken from different viewing angles.
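The quantity minimized in step (5) can be made concrete with a short sketch: given a calibrated pinhole camera with intrinsics K and known poses (R, t), SfM seeks marker coordinates that minimize the summed squared pixel residuals over all views. The helper names below are illustrative, not from the patent.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X into an image using intrinsics K and pose (R, t)."""
    x_cam = R @ X + t            # world -> camera coordinates
    x_img = K @ x_cam            # camera -> homogeneous pixel coordinates
    return x_img[:2] / x_img[2]  # perspective division

def reprojection_error(K, poses, X, observations):
    """Sum of squared pixel residuals of X over all views that observed it."""
    return sum(np.sum((project(K, R, t, X) - uv) ** 2)
               for (R, t), uv in zip(poses, observations))

# Toy check: a point reprojects with zero error when the observation is consistent.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
X = np.array([0.1, -0.2, 2.0])
uv = project(K, R, t, X)
assert reprojection_error(K, [(R, t)], X, [uv]) < 1e-12
```

Bundle adjustment extends this by optimizing all marker coordinates and camera poses jointly over this error, which is why errors accumulate as the modeling range grows.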
However, this method has poor modeling accuracy, and the larger the modeling range, the larger the error.
Disclosure of Invention
The present invention has been made to solve or alleviate at least one aspect of the above technical problems.
According to one aspect of the embodiments of the present invention, there is provided an anchor point arrangement method, comprising:
Step 1: calibrating at least three first area points in a first plane using a photoelectric measuring instrument at a first position, and recording the three-dimensional coordinates of each first area point, wherein the at least three first area points define a first dotting area;
Step 2: virtually generating (simulating) a plurality of positioning points within the first dotting area in a control module of the photoelectric measuring instrument;
Step 3: controlling, by the control module, the photoelectric measuring instrument to project a laser point onto each virtually generated positioning point in the first dotting area one by one, and acquiring the three-dimensional coordinates of each laser point.
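The virtual generation in step 2 can be pictured as laying a regular grid over the plane patch spanned by the calibrated area points. A minimal sketch follows; the grid layout and the function names are assumptions for illustration, since the patent does not prescribe a particular pattern:

```python
import numpy as np

def grid_points(p0, p1, p2, rows=4, cols=5):
    """Generate anchor points on a regular grid inside the parallelogram
    spanned by three calibrated 3D area points p0, p1, p2."""
    u, v = p1 - p0, p2 - p0  # in-plane basis vectors
    return [p0 + (i / (rows - 1)) * u + (j / (cols - 1)) * v
            for i in range(rows) for j in range(cols)]

pts = grid_points(np.array([0., 0., 0.]),
                  np.array([2., 0., 0.]),
                  np.array([0., 1., 0.]))
assert len(pts) == 20                        # 4 x 5 grid
assert np.allclose(pts[-1], [2., 1., 0.])    # far corner of the patch
```

The control module would then steer the instrument to each generated point in turn and record the measured coordinates of the projected laser spot.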
Optionally, the method further comprises:
Step 4: calibrating at least three second area points in a second plane using the photoelectric measuring instrument at the first position, and recording the three-dimensional coordinates of each second area point, wherein the at least three second area points define a second dotting area;
Step 5: virtually generating a plurality of positioning points within the second dotting area in the control module of the photoelectric measuring instrument; and
Step 6: controlling, by the control module, the photoelectric measuring instrument to project the laser onto each virtually generated positioning point in the second dotting area one by one, and acquiring the three-dimensional coordinates of each laser point.
Alternatively, the method further comprises:
Step 4: calibrating at least three second area points in a second plane using the photoelectric measuring instrument after moving it from the first position to a second position, and recording the three-dimensional coordinates of each second area point, wherein the at least three second area points define a second dotting area;
Step 5: virtually generating a plurality of positioning points within the second dotting area in the control module of the photoelectric measuring instrument;
Step 6: controlling, by the control module, the photoelectric measuring instrument to project the laser onto each virtually generated positioning point in the second dotting area one by one, and acquiring the three-dimensional coordinates of each laser point; and
Step 7: converting the coordinates of the positioning points from the coordinate system of the second position into the coordinate system of the first position, or vice versa.
Further, step 7 comprises:
projecting the laser of the photoelectric measuring instrument at the second position onto at least three positioning points already calibrated at the first position, and recording their three-dimensional coordinates in the coordinate system of the second position; and
solving the conversion relation between the two coordinate systems based on the three-dimensional coordinates of the at least three positioning points in the coordinate system of the first position and their three-dimensional coordinates in the coordinate system of the second position.
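With at least three non-collinear points known in both coordinate systems, the conversion relation is a rigid transform (rotation R plus translation t) and can be solved in closed form. The sketch below uses the Kabsch SVD method as one possible solver; the patent does not prescribe a specific algorithm:

```python
import numpy as np

def rigid_transform(A, B):
    """Find R, t with B ~= A @ R.T + t from corresponding 3D point sets
    A, B of shape (n, 3), n >= 3 non-collinear (Kabsch algorithm)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cB - R @ cA
    return R, t

# Toy check: recover a known rotation about z plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
B = A @ R_true.T + t_true
R, t = rigid_transform(A, B)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

With real (noisy) measurements this same construction gives the least-squares best-fit transform, which is why more than the minimum three points improves the result.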
According to another aspect of the embodiments of the present invention, there is provided a method of arranging optical positioning marks, each comprising a positioning portion, the method comprising:
arranging positioning points using the method described above; and
attaching the optical positioning marks one by one to the positioning points projected by the laser of the photoelectric measuring instrument, with the centroid of each positioning portion aligned with the corresponding positioning point.
Optionally, each optical positioning mark further comprises an ID portion, each optical positioning mark corresponding to one ID, and the method further comprises: establishing, while the optical positioning marks are attached one by one, a correspondence between the ID of each optical positioning mark and the three-dimensional coordinates of the corresponding positioning point. Further, the ID portion includes a color-coded region composed of a plurality of color blocks; the color blocks are visible-light color blocks arranged in sequence to form a color code.
According to still another aspect of the embodiments of the present invention, there is provided an anchor point arrangement apparatus, comprising:
an area delineation device for calibrating at least three first area points in a first plane using the photoelectric measuring instrument at a first position and recording the three-dimensional coordinates of each first area point, the at least three first area points defining a first dotting area; and
a control module for controlling the photoelectric measuring instrument to virtually generate a plurality of positioning points within the first dotting area,
the control module further controlling the photoelectric measuring instrument to project a laser point onto each virtually generated positioning point in the first dotting area one by one and acquiring the three-dimensional coordinates of each laser point.
According to still another aspect of the embodiments of the present invention, there is provided a method of determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, each positioning point having a different ID corresponding to its three-dimensional coordinates, the plurality of positioning points including at least four positioning points whose three-dimensional coordinates are known in advance, the method comprising:
Step 1: forming, with a calibrated camera, a plurality of two-dimensional images of the positioning points at different positions, wherein the positioning points form code points in the two-dimensional images; the plurality of two-dimensional images include at least two images whose camera poses are determinable, and each such image contains code points corresponding to at least four positioning points with known three-dimensional coordinates;
Step 2: extracting, for each two-dimensional image, the ID of the positioning point corresponding to each code point and the image coordinates of each code point;
Step 3: finding a two-dimensional image whose camera pose is determinable, and obtaining the corresponding camera pose from the two-dimensional image coordinates of the code points corresponding to the at least four positioning points with known three-dimensional coordinates and from those known three-dimensional coordinates;
Step 4: for code points in two pose-determined two-dimensional images that correspond to the same positioning point with unknown three-dimensional coordinates, taking the intersection of the two lines connecting each camera's three-dimensional position with its corresponding code point as the three-dimensional coordinates of that positioning point, and establishing a correspondence between these three-dimensional coordinates and the corresponding ID.
Optionally, the method further comprises:
Step 5: repeating steps 3 and 4 to obtain the three-dimensional coordinates of the remaining positioning points whose three-dimensional coordinates are unknown.
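The intersection construction in step 4 is a two-view triangulation: each pose-determined image defines a ray from the camera center through the code point, and the positioning point lies where the rays meet. In practice the rays are skew, so the midpoint of their closest segment is commonly taken. A minimal sketch with illustrative names, not the patent's implementation:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two rays c + s*d
    (camera centers c1, c2 and unit viewing directions d1, d2)."""
    # Solve for s, u minimizing |c1 + s*d1 - (c2 + u*d2)|^2.
    b = c2 - c1
    A = np.column_stack([d1, -d2])
    s, u = np.linalg.lstsq(A, b, rcond=None)[0]
    return 0.5 * ((c1 + s * d1) + (c2 + u * d2))

# Two rays that exactly intersect at (0, 0, 5).
n = np.linalg.norm([1., 0., 5.])
X = triangulate(np.array([-1., 0., 0.]), np.array([1., 0., 5.]) / n,
                np.array([ 1., 0., 0.]), np.array([-1., 0., 5.]) / n)
assert np.allclose(X, [0., 0., 5.])
```

The viewing direction of each ray would be obtained by back-projecting the code point's image coordinates through the calibrated intrinsics and the recovered camera pose.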
According to still another aspect of the embodiments of the present invention, there is provided a method of determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, each positioning point having a different ID corresponding to its three-dimensional coordinates, the plurality of positioning points including at least four positioning points whose three-dimensional coordinates are known in advance, the method comprising:
Step 1: forming, with a calibrated camera, at least two two-dimensional images of at least four positioning points with known three-dimensional positions, or selecting at least two such images, wherein the positioning points form code points in the two-dimensional images, and each of the at least two images also contains a code point corresponding to the same positioning point with unknown three-dimensional coordinates;
Step 2: extracting the ID of the positioning point corresponding to each code point in the two-dimensional images and the image coordinates of each code point;
Step 3: acquiring the camera pose corresponding to each two-dimensional image based on the three-dimensional coordinates of the at least four positioning points with known three-dimensional positions and the image coordinates of their corresponding code points;
Step 4: for the code points in the pose-determined two-dimensional images that correspond to the same positioning point with unknown three-dimensional coordinates, taking the intersection of the lines connecting each camera's three-dimensional position with its corresponding code point as the three-dimensional coordinates of that positioning point, and establishing a correspondence between these coordinates and the corresponding ID.
Optionally, the method further comprises:
Step 5: repeating steps 1-4 to obtain the three-dimensional coordinates of the remaining positioning points whose three-dimensional coordinates are unknown.
An embodiment of the invention also relates to an apparatus for determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, each positioning point having a different ID corresponding to its three-dimensional coordinates, the plurality of positioning points including at least four positioning points whose three-dimensional coordinates are known in advance, the apparatus comprising:
a calibrated camera that forms a plurality of two-dimensional images of the positioning points at different positions, the positioning points forming code points in the two-dimensional images; the plurality of two-dimensional images include at least two images whose camera poses are determinable, and each such image contains code points corresponding to at least four positioning points with known three-dimensional coordinates;
a device for extracting, in each two-dimensional image, the ID of the positioning point corresponding to each code point and the image coordinates of each code point;
a camera pose determining device for finding a two-dimensional image whose camera pose is determinable, and obtaining the corresponding camera pose from the two-dimensional image coordinates of the code points corresponding to the at least four positioning points with known three-dimensional coordinates and from those known three-dimensional coordinates; and
a positioning point three-dimensional coordinate acquisition device for: for code points in two pose-determined two-dimensional images that correspond to the same positioning point with unknown three-dimensional coordinates, taking the intersection of the two lines connecting each camera's three-dimensional position with its corresponding code point as the three-dimensional coordinates of that positioning point, and establishing a correspondence between these coordinates and the corresponding ID.
An embodiment of the present invention also relates to an apparatus for determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, each positioning point having a different ID corresponding to its three-dimensional coordinates, the plurality of positioning points including at least four positioning points whose three-dimensional coordinates are known in advance, the apparatus comprising:
a calibrated camera that forms two-dimensional images of at least four positioning points with known three-dimensional positions, the positioning points forming code points in the two-dimensional images, and each of the two-dimensional images also containing a code point corresponding to the same positioning point with unknown three-dimensional coordinates;
a device for extracting the ID of the positioning point corresponding to each code point in the two-dimensional images and the image coordinates of each code point;
a camera pose acquisition device for acquiring the camera pose corresponding to each two-dimensional image based on the three-dimensional coordinates of the at least four positioning points with known three-dimensional positions and the image coordinates of their corresponding code points; and
a positioning point three-dimensional coordinate acquisition device for: for code points in the pose-determined two-dimensional images that correspond to the same positioning point with unknown three-dimensional coordinates, taking the intersection of the lines connecting each camera's three-dimensional position with its corresponding code point as the three-dimensional coordinates of that positioning point, and establishing a correspondence between these coordinates and the corresponding ID.
According to an embodiment of the present invention, there is also provided a method of determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, the method comprising:
Step 1: providing a camera, a photoelectric measuring instrument, and a driving device for controlling the movement of the photoelectric measuring instrument;
Step 2: acquiring, with the camera, a two-dimensional image of a positioning point at a first position, and determining the centroid position corresponding to the positioning point in the two-dimensional image;
Step 3: projecting a laser point at the first position with the photoelectric measuring instrument;
Step 4: driving the photoelectric measuring instrument with the driving device until the laser point coincides with the centroid position corresponding to the positioning point in the two-dimensional image acquired at the first position;
Step 5: recording the three-dimensional coordinates measured by the photoelectric measuring instrument for the laser point at the first position as the three-dimensional coordinates of the positioning point.
Optionally, in step 5, a correspondence is established between the three-dimensional coordinates of the positioning point and its ID.
Optionally, the method further comprises:
Step 6: repeating steps 2-5 to obtain the three-dimensional coordinates of the remaining positioning points whose three-dimensional coordinates are unknown.
Optionally, a calibration step is further included between step 1 and step 2: calibrating the camera and the photoelectric measuring instrument so that the coordinate system of the camera is aligned with the coordinate system of the photoelectric measuring instrument.
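The driving in step 4 amounts to a closed feedback loop: observe the laser spot in the image, compare it with the target centroid, and nudge the instrument until they coincide. The sketch below is a simple proportional visual-servoing loop; the gain and the pixel-to-angle mapping are assumptions, not specified by the patent:

```python
def align_laser(observe_px, move_instrument, target_px,
                gain=0.5, tol=0.5, max_iter=100):
    """Iteratively steer the instrument until the observed laser spot
    coincides with the target centroid (pixel coordinates).
    observe_px() -> (u, v) of the laser spot in the image;
    move_instrument(du, dv) nudges yaw/pitch in proportion to a pixel
    offset (an assumed, roughly linear mapping)."""
    for _ in range(max_iter):
        u, v = observe_px()
        eu, ev = target_px[0] - u, target_px[1] - v
        if eu * eu + ev * ev < tol * tol:
            return True  # coincident within tolerance
        move_instrument(gain * eu, gain * ev)
    return False

# Toy simulation: the spot moves exactly as commanded.
spot = [100.0, 50.0]
ok = align_laser(lambda: tuple(spot),
                 lambda du, dv: (spot.__setitem__(0, spot[0] + du),
                                 spot.__setitem__(1, spot[1] + dv)),
                 target_px=(320.0, 240.0))
assert ok and abs(spot[0] - 320.0) < 1.0 and abs(spot[1] - 240.0) < 1.0
```

Once the loop converges, the instrument's own range and angle readings for the laser spot are recorded as the positioning point's three-dimensional coordinates, as in step 5.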
The method may further comprise:
Step 7: acquiring, with the camera, a two-dimensional image of a positioning point at a second position, and determining the centroid position corresponding to the positioning point in the two-dimensional image;
Step 8: projecting a laser point at the second position with the photoelectric measuring instrument;
Step 9: driving the photoelectric measuring instrument with the driving device until the laser point coincides with the centroid position corresponding to the positioning point in the two-dimensional image acquired at the second position;
Step 10: recording the three-dimensional coordinates measured by the photoelectric measuring instrument for the laser point at the second position as the three-dimensional coordinates of the positioning point;
Step 11: converting the coordinates of the positioning points from the coordinate system of the second position into the coordinate system of the first position, or vice versa.
Optionally, step 11 comprises:
projecting the laser of the photoelectric measuring instrument at the second position onto at least three positioning points already calibrated at the first position, and recording their three-dimensional coordinates in the coordinate system of the second position; and
solving the conversion relation between the two coordinate systems based on the three-dimensional coordinates of the at least three positioning points in the coordinate system of the first position and their three-dimensional coordinates in the coordinate system of the second position.
An embodiment of the present invention further provides an apparatus for determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, the apparatus comprising:
a camera and a photoelectric measuring instrument, the coordinate system of the camera being aligned with the coordinate system of the photoelectric measuring instrument;
a device for determining the centroid position corresponding to a positioning point in the two-dimensional image acquired by the camera;
a driving device that drives the photoelectric measuring instrument so that the laser point projected by the photoelectric measuring instrument coincides with the centroid position corresponding to the positioning point in the two-dimensional image; and
a positioning point three-dimensional coordinate acquisition device for taking the three-dimensional coordinates of the laser point projected by the photoelectric measuring instrument as the three-dimensional coordinates of the positioning point.
An embodiment of the present invention further provides an apparatus for determining the three-dimensional coordinates of a positioning point in a spatial environment, comprising:
a photoelectric measuring instrument adapted to project a laser point, the photoelectric measuring instrument having a driving device for driving it to adjust the position of the projected laser point;
a camera adapted to capture a two-dimensional image of a positioning point in the spatial environment and of the laser point projected by the photoelectric measuring instrument, the coordinate system of the camera being aligned with the coordinate system of the photoelectric measuring instrument; and
a control device for controlling the driving device so that the laser point projected by the photoelectric measuring instrument coincides with the centroid position corresponding to the positioning point in the two-dimensional image.
Optionally, the photoelectric measuring instrument comprises a distance measuring sensor and an angle measuring sensor; the distance measuring sensor is adapted to project a laser point and measure the distance between the photoelectric measuring instrument and the laser point, and the angle measuring sensor measures the yaw angle and the pitch angle of the distance measuring sensor, wherein the photoelectric measuring instrument obtains the three-dimensional coordinates of the laser point relative to itself based on the measured distance, yaw angle, and pitch angle.
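The computation of three-dimensional coordinates from distance, yaw angle, and pitch angle is a spherical-to-Cartesian conversion. A minimal sketch under an assumed axis convention (z up, yaw about the vertical axis, pitch above the horizontal plane); the patent does not fix the convention:

```python
import math

def polar_to_cartesian(distance, yaw, pitch):
    """Convert a range reading plus yaw/pitch angles (radians) into
    x, y, z relative to the instrument, under the assumed convention
    of z up, yaw about z, and pitch above the horizontal plane."""
    horizontal = distance * math.cos(pitch)  # projection onto the ground plane
    x = horizontal * math.cos(yaw)
    y = horizontal * math.sin(yaw)
    z = distance * math.sin(pitch)
    return x, y, z

# A point 10 m away, straight ahead, 30 degrees above the horizon.
x, y, z = polar_to_cartesian(10.0, 0.0, math.radians(30))
assert abs(z - 5.0) < 1e-9
assert abs(x - 10.0 * math.cos(math.radians(30))) < 1e-9
```

A two-axis angle measuring sensor, as in the optional embodiment below, supplies exactly the yaw and pitch inputs this conversion needs.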
An embodiment of the present invention also provides a photoelectric measuring instrument, comprising:
a distance measuring sensor adapted to project a laser point and measure the distance between the photoelectric measuring instrument and the laser point;
an angle measuring sensor for measuring the yaw angle and the pitch angle of the distance measuring sensor; and
a driving device for driving the distance measuring sensor to adjust the position of the projected laser point.
Optionally, the angle measuring sensor is a two-axis angle measuring sensor.
Drawings
FIG. 1 is a flow chart of a positioning point arrangement method according to an exemplary embodiment of the present invention;
FIG. 2 is a flow chart of a method of arranging optical positioning marks according to an exemplary embodiment of the present invention;
FIG. 3 is a schematic illustration of an optical positioning mark according to an exemplary embodiment of the present invention;
FIG. 4 is a schematic illustration of an optical positioning mark according to an exemplary embodiment of the present invention;
FIG. 5 is a flow chart of a method of determining three-dimensional coordinates of a plurality of positioning points in a spatial environment according to an exemplary embodiment of the present invention;
FIG. 6 is a flow chart of a method of determining three-dimensional coordinates of a plurality of positioning points in a spatial environment according to an exemplary embodiment of the present invention;
FIG. 7 is a flow chart of a method of determining three-dimensional coordinates of a plurality of positioning points in a spatial environment according to an exemplary embodiment of the present invention;
FIG. 8 is a schematic diagram of an apparatus for determining three-dimensional coordinates of a positioning point in a spatial environment according to an exemplary embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a photoelectric measuring instrument according to an exemplary embodiment of the present invention.
Detailed Description
The technical solution of the invention is further described below through embodiments and the accompanying drawings. In the specification, the same or similar reference numerals denote the same or similar components. The following description of the embodiments of the present invention with reference to the accompanying drawings is intended to explain the general inventive concept and should not be construed as limiting the invention.
Example 1
As shown in FIG. 1, the present invention provides a positioning point arrangement method, comprising:
Step 1: calibrating at least three first area points in a first plane using a photoelectric measuring instrument at a first position, and recording the three-dimensional coordinates of each first area point, wherein the at least three first area points define a first dotting area;
Step 2: virtually generating a plurality of positioning points within the first dotting area in a control module of the photoelectric measuring instrument;
Step 3: controlling, by the control module, the photoelectric measuring instrument to project a laser point onto each virtually generated positioning point in the first dotting area one by one, and acquiring the three-dimensional coordinates of each laser point.
In the present invention, the photoelectric measuring instrument may be a total station, as long as it can perform laser ranging and angle measurement; it mainly consists of a laser ranging device and a theodolite (for angle measurement), so that three-dimensional coordinates within a certain range can be measured accurately. The photoelectric measuring instrument can project laser points into the environment, and the three-dimensional coordinates of any point it illuminates can be obtained. The instrument can be controlled by a servo motor and a controller. As will be appreciated by those skilled in the art, in the present invention the three-dimensional coordinates of the anchor points are measured and then stored or transmitted.
The position of the photoelectric measuring instrument may remain unchanged; however, in order to arrange anchor points in another plane, the method may further comprise:
step 4: calibrating at least three second area points in a second plane by using the photoelectric measuring instrument at the first position, and recording the three-dimensional coordinates of each second area point, wherein the at least three second area points define a second dotting area;
step 5: generating, by simulation, a plurality of positioning points in the second dotting area in the control module of the photoelectric measuring instrument; and
step 6: the control module controlling the photoelectric measuring instrument to perform laser dotting one by one on the positioning points generated by simulation in the second dotting area, and to acquire the three-dimensional coordinates of each laser point.
In the case where it is necessary to move the photoelectric measuring instrument to another position to arrange the anchor points in another plane, the method may further include:
step 4: calibrating at least three second area points in a second plane by using the photoelectric measuring instrument after it has moved from the first position to a second position, and recording the three-dimensional coordinates of each second area point, wherein the at least three second area points define a second dotting area;
step 5: generating, by simulation, a plurality of positioning points in the second dotting area in the control module of the photoelectric measuring instrument;
step 6: the control module controlling the photoelectric measuring instrument to perform laser dotting one by one on the positioning points generated by simulation in the second dotting area, and to acquire the three-dimensional coordinates of each laser point; and
step 7: converting the coordinates of the positioning points in the coordinate system of the second position into the coordinate system of the first position, or converting the coordinates of the positioning points in the coordinate system of the first position into the coordinate system of the second position.
Specifically, the coordinate transformation may include the following steps: the laser of the photoelectric measuring instrument at the second position is aimed at at least three positioning points that were calibrated at the first position, and their three-dimensional coordinates in the coordinate system of the second position are recorded; then, based on the three-dimensional coordinates of the at least three positioning points in the coordinate system of the first position and their three-dimensional coordinates in the coordinate system of the second position, the transformation relation between the two coordinate systems is solved, for example by an SVD (singular value decomposition) or ICP (iterative closest point) algorithm.
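As an illustrative sketch (not part of the patent text), the SVD-based solution of the transformation between the two instrument coordinate systems can be implemented as the classical Kabsch procedure on three or more common positioning points; the function name and array shapes below are assumptions:

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Solve R, t such that points_b ~= R @ points_a + t from N >= 3 paired points.

    points_a: (N, 3) coordinates of common positioning points in the first
    coordinate system; points_b: the same points in the second system.
    """
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)    # centroids of each point set
    H = (a - ca).T @ (b - cb)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

With the transform in hand, a point p measured in the first system maps into the second as `R @ p + t`, and the inverse transform is `R.T @ (p - t)`.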
Correspondingly, the invention relates to a positioning point arrangement apparatus, comprising:
an area delineation device, for calibrating at least three first area points in a first plane by using the photoelectric measuring instrument at a first position and recording the three-dimensional coordinates of each first area point, the at least three first area points defining a first dotting area; and
a control module, for controlling the photoelectric measuring instrument to generate by simulation a plurality of positioning points in the first dotting area; and
the control module further controlling the photoelectric measuring instrument to perform laser dotting one by one on the positioning points generated by simulation in the first dotting area and to acquire the three-dimensional coordinates of each laser point.
Accordingly, as shown in fig. 2, the present invention further provides an arrangement method of an optical positioning mark, where the optical positioning mark includes a positioning portion, the method includes:
arranging positioning points by using the method; and
attaching the optical positioning marks one by one to the positioning points produced by the laser emitted from the photoelectric measuring instrument, with the centroid of each positioning portion aligned with the corresponding positioning point.
In this way, the photoelectric measuring instrument can automatically perform dotting in the environment, and deployment personnel can attach a mark at each laser spot.
Optionally, each optical positioning mark further includes an ID portion, each optical positioning mark corresponding to one ID, and the method further includes the step of:
forming a correspondence between the ID of each optical positioning mark and the three-dimensional coordinates of the corresponding positioning point as the optical positioning marks are attached one by one.
In this way, positioning points with known three-dimensional coordinates are arranged, finally forming a point cloud.
An exemplary embodiment of a method of arranging anchor points is described in detail below. The method comprises the following steps:
(1) The photoelectric measuring instrument is placed in the environment; 4 points are manually designated in a plane with the laser spot, their three-dimensional coordinates are recorded, and an area is thereby defined. As known to those skilled in the art, three points can also define an area.
(2) Using a sampling algorithm, the photoelectric measuring instrument randomly generates a plurality of sampling points within the range delimited by the 4 points, then shoots the laser spot onto the sampling points one by one and obtains their three-dimensional coordinates. If desired, the randomness may be constrained so that the distance between sampling points is greater than a predetermined threshold.
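For a rectangular region, the minimum-distance random sampling mentioned in step (2) can be sketched as plain rejection sampling; the function and parameter names are illustrative assumptions, and the patent's actual region is the quadrilateral delimited by the 4 calibrated points:

```python
import random

def sample_points(region_min, region_max, count, min_dist,
                  max_tries=10000, seed=None):
    """Generate `count` random 2D sampling points inside an axis-aligned
    rectangle, rejecting any candidate closer than `min_dist` to an
    already-accepted point (a sketch of the constrained sampling)."""
    rng = random.Random(seed)
    points = []
    tries = 0
    while len(points) < count and tries < max_tries:
        tries += 1
        p = (rng.uniform(region_min[0], region_max[0]),
             rng.uniform(region_min[1], region_max[1]))
        # accept only if the squared distance to every accepted point is large enough
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= min_dist ** 2
               for q in points):
            points.append(p)
    return points
```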
(3) The user attaches a mark at the position of the laser spot shot by the photoelectric measuring instrument. If the mark has an ID, the ID is recorded together with the three-dimensional coordinates; otherwise, only the three-dimensional coordinates are recorded.
(4) If marks need to be deployed on several planes, steps (1) to (3) are repeated.
(5) If the placement of the photoelectric measuring instrument has to be changed partway through, for example because of occlusion, steps (1) to (4) are still repeated after the position change. However, the newly measured coordinate points belong to a new coordinate system, and the transformation between the new and the original coordinate system must be obtained. The specific method is as follows: shoot the laser spot of the photoelectric measuring instrument onto at least three marks that were measured before the position change, and record the mark IDs and three-dimensional coordinates. Taking the coordinates of the at least three common marks in the two coordinate systems as input, the transformation between the new and old coordinate systems can be solved by an SVD (singular value decomposition) or ICP (iterative closest point) algorithm, so that the mark points measured in the new coordinate system can be converted into the original coordinate system.
In this embodiment of the invention, the modeling precision is high and there is no global accumulated error. However, the method requires a photoelectric measuring instrument and a specialist to perform the modeling.
In the invention, the ID part comprises a color coding region formed by a plurality of color blocks, and the color blocks are visible light color blocks and are sequentially arranged to form a color code. The description about the ID section is equally applicable to other embodiments of the present disclosure.
The optical locating mark 100 is described below with reference to fig. 3 and 4.
As shown in fig. 3, the small circle containing a cross in the middle is the position of the reflective point, i.e. the reflective portion or positioning portion 10; the small circle is the inner boundary of the color coding region, the outermost black ring is its outer boundary, and the ring of differently colored fan rings in between is the color coding region 20. Different colors represent different codes. It should be noted that the cross may also be absent; furthermore, the positioning portion may be arranged to cover the center of the small circle; and the outermost ring may be black or another color, although preferably a color different from those in the color coding region, and advantageously different from the background environment.
The color information of the outermost ring makes it possible to confirm the appearance and outline of the optical positioning mark quickly and accurately, which in turn makes it convenient to confirm the position of its center quickly and accurately.
In decoding, a reference color block, such as the white block in fig. 3, is found first, and then the colors of the other color blocks are read sequentially in the clockwise direction (counterclockwise is of course also possible). Taking fig. 3 as an example, the middle ring is divided into 6 parts, and each code occupies 1/6 of the ring (a 60-degree fan ring). In the figure there are 4 colors, red (R), blue (B), green (G) and white (W), representing the four codes 0, 1, 2 and 3 respectively. Starting from the white block and encoding clockwise, the color code in fig. 3 is 320100. Regarding the design of the color coding: if the number of color blocks is n and the number of available colors is c, a total of c^(n-1) color codes can be generated. In the embodiment of fig. 4, the color of the reference block cannot be the same as that of the other color blocks, since otherwise one color code would admit several decodings, destroying the uniqueness of the decoding. It is necessary to specify that the color of the reference block differs from that of the other blocks because, when the blocks form a ring, the reference block cannot be identified by position; in other words, when the color blocks form a ring-shaped color coding region, the reference block is selected by color.
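The clockwise decoding starting from the reference block, and the c^(n-1) code count, can be illustrated as follows; the color letters and the digit assignment follow the fig. 3 example, while the function names are assumptions of this sketch:

```python
# Color-to-digit table from the fig. 3 example: R=0, B=1, G=2, W=3,
# with white (W) reserved as the single reference block.
COLOR_TO_DIGIT = {"R": 0, "B": 1, "G": 2, "W": 3}

def decode_ring(blocks):
    """Decode a ring of fan-shaped color blocks.

    `blocks` lists the block colors in clockwise order starting anywhere;
    since the blocks form a ring, the reference block is found by its
    color ('W'), and reading starts there."""
    ref = blocks.index("W")                 # locate the unique reference block
    ordered = blocks[ref:] + blocks[:ref]   # rotate so 'W' comes first
    return "".join(str(COLOR_TO_DIGIT[c]) for c in ordered)

def code_count(n_blocks, n_colors):
    """Number of distinct codes with one reference block: c**(n-1)."""
    return n_colors ** (n_blocks - 1)
```

Reading fig. 3's six blocks clockwise from an arbitrary start, e.g. `["R", "B", "R", "R", "W", "G"]`, rotates to begin at white and yields the code 320100 from the text.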
In the embodiment of fig. 3, the design of the middle color block is extensible and may be embodied in the following aspects:
1. The colors can be chosen as any combination of easily distinguished colors. The selection can be made in various color spaces, such as the RGB or HSV color space. For example, pure red (R=255, G=0, B=0), pure blue (R=0, G=0, B=255) and pure green (R=0, G=255, B=0) may be selected, with the reference color being, for example, pure white (R=255, G=255, B=255).
2. The greater the number of partitions of the middle color blocks, the greater the number of codes that can be generated; the choice mainly depends on how many code points need to be deployed in the actual three-dimensional scene. For example, with 4 color blocks and 3 colors, 27 codes are generated; increasing the number of color blocks to 6 with the same 3 colors yields 243 codes.
It should be noted that the fan-shaped color blocks in fig. 3 may also be divided unequally; in that case each color block, i.e. each encoding point, is determined not by its angle but directly by its color. For example, the two red color blocks in the upper right corner of fig. 3 would then be regarded as a single color block.
How the reflective spots of the optical positioning marks are obtained is briefly described by way of example with reference to fig. 3. First, the infrared camera acquires a grayscale image; the reflective points appear brighter, i.e. with high gray values, so pixels that do not belong to a reflective point can be filtered out by thresholding. Next, the reflective-point pixels are clustered to obtain the connected region of each reflective point in the image. Finally, a circle or ellipse is fitted to the connected region, and its center is taken as an approximation of the centroid of the reflective point.
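A simplified stand-in for this pipeline, replacing the final circle/ellipse fit with the pixel mean of each connected region (an assumption made to keep the sketch short), might look like:

```python
import numpy as np

def spot_centroids(gray, threshold=200):
    """Find approximate centroids of bright reflective spots: threshold the
    grayscale image, group bright pixels into 4-connected regions, and take
    each region's pixel centroid as (x, y)."""
    mask = gray >= threshold          # keep only bright (reflective) pixels
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, pixels = [(y, x)], []
                seen[y, x] = True
                while stack:          # flood fill one connected region
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```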
FIG. 4 is a schematic illustration of an optical positioning mark according to an exemplary embodiment of the present invention. In fig. 4 the whole optical positioning mark is generally elongated, and each color block in the color coding region 20 is a straight color stripe. Adjacent color stripes may abut each other and have different colors, and the width of a stripe can be used to distinguish different stripes. The positioning portion 10 may be arranged at any position of the optical positioning mark, optionally at one end of the elongated mark, as shown in fig. 4. The color coding region includes a single reference block at one end in the lengthwise direction of the mark, and the color code is read from the single reference block in a predetermined order. In fig. 4 the color code may be 0212.
Example 2
The point cloud of positioning points can also be modeled, or obtained, starting from initialization values.
As shown in fig. 5, the present invention relates to a method for determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, each positioning point having a different ID, the ID corresponding to the three-dimensional coordinates of the positioning point, and the plurality of positioning points including at least four positioning points whose three-dimensional coordinates are known in advance, the method comprising:
step 1: forming a plurality of two-dimensional images of the positioning points at different positions by using a calibrated camera, wherein the positioning points form code points in the two-dimensional images, the plurality of two-dimensional images comprise at least two two-dimensional images with determinable camera poses, and each two-dimensional image with a determinable camera pose comprises at least four code points corresponding to positioning points with known three-dimensional coordinates;
step 2: extracting ID of a positioning point corresponding to each code point in each two-dimensional image and image coordinates of each code point;
step 3: finding a two-dimensional image whose camera pose is determinable, and obtaining the camera pose corresponding to that image based on the two-dimensional coordinates, in that image, of the code points corresponding to at least four positioning points with known three-dimensional coordinates, and on those positioning points' three-dimensional coordinates;
step 4: for the code points corresponding to a same positioning point with unknown three-dimensional coordinates in two two-dimensional images with known poses, taking as the three-dimensional coordinates of that positioning point the intersection of the two connecting lines, each running from a camera's three-dimensional position to the corresponding code point in its image, and forming a correspondence between these three-dimensional coordinates and the corresponding ID.
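Because two back-projected rays rarely meet exactly, a common approximation of their "intersection" is the midpoint of the shortest segment between the two lines; the sketch below is illustrative, not the patent's prescribed formula:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Approximate intersection of two rays (camera centre c, direction d
    toward the code point): the midpoint of the shortest segment between
    the two lines."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    # Solve for parameters s, t minimising |(c1 + s*d1) - (c2 + t*d2)|
    b = c2 - c1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    p1, p2 = c1 + s * d1, c2 + t * d2   # closest points on each line
    return (p1 + p2) / 2
```

When the rays happen to intersect exactly, the midpoint coincides with the true intersection.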
It should be noted that the correspondence between IDs and three-dimensional coordinates of positioning points covers not only the case in which the three-dimensional coordinates of each positioning point correspond to one unique ID, but also the case in which one ID may correspond to several three-dimensional positions (because coding resources are insufficient: for example, there may be only 200 distinct coding IDs but 3000 points to deploy, so repetition is inevitable). In the latter case, a unique three-dimensional position is still obtained from an unambiguous ID within a specific image, for example because the three-dimensional positions corresponding to the code points in the same image are close to each other.
The method may further include step 5: repeating steps 3 and 4 to obtain the three-dimensional coordinates of the remaining positioning points whose three-dimensional coordinates are unknown.
It should be noted that, it is within the scope of the present invention that the two-dimensional image may be obtained in the form of a picture or a video.
It should be noted that, ideally, the two connecting lines intersect at a single point, but in practice, because of errors, they do not necessarily intersect. In that case more than two two-dimensional images with known poses are required, giving more than two connecting lines, from which an approximately intersecting "intersection point" can be obtained. In the present invention, the case in which more than two connecting lines yield such an approximate intersection is also covered by the feature "the intersection of the connecting lines from the cameras' three-dimensional positions to the code points corresponding to the same positioning point in the respective two-dimensional images".
It should be noted that, in the present invention, the calibrated camera is in effect treated as a pinhole imaging model. Knowing the camera pose in fact means knowing the three-dimensional position of the lens's optical center and the attitude of the imaging plane, and the connecting line is the line between the optical center of the lens and the imaged center of the positioning point (i.e. the code point).
The following describes in detail a method for determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, the method comprising the steps of:
(1) Mark points are attached in the environment, each with a different ID. The three-dimensional coordinates of some of the mark points are known; let the set of these points be S1 and the set of the remaining mark points be X.
(2) Pictures or videos containing the markers are taken from different positions and angles with a camera with known internal parameters (calibrated).
(3) The image coordinates of the mark points in each picture are extracted, together with the mark IDs.
(4) From these IDs, the marks belonging to S1 are identified; if 4 or more marks in a picture belong to S1, the picture is said to have a determinable (known) pose. Since those marks belong to S1, their three-dimensional coordinates are known, and combining these with the marks' corresponding two-dimensional coordinates in the image, the camera pose can be found, for example by a PNP algorithm.
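As an illustrative alternative to the PNP solvers mentioned in step (4) (which work from 4 points), the camera pose can be recovered from six or more known markers by the classical direct linear transform; the function, the intrinsics matrix, and the requirement of non-coplanar points are all assumptions of this sketch, not part of the patent:

```python
import numpy as np

def dlt_pose(K, pts3d, pts2d):
    """Recover rotation R and translation t from >= 6 non-coplanar 3D-2D
    correspondences by the direct linear transform.
    K: 3x3 intrinsics; pts3d: (N, 3) world points; pts2d: (N, 2) pixels."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    P = Vt[-1].reshape(3, 4)               # projection matrix up to scale
    M = np.linalg.inv(K) @ P               # equals s * [R | t]
    s = np.cbrt(np.linalg.det(M[:, :3]))   # det(s*R) = s**3, fixes sign too
    Rt = M / s
    U, _, Vt2 = np.linalg.svd(Rt[:, :3])   # re-orthonormalise the rotation
    return U @ Vt2, Rt[:, 3]
```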
(5) In all pictures with known poses, each mark not belonging to S1 is projected from the image into the real world, yielding one ray per picture. The same mark corresponds to different rays in different known-pose pictures, and the intersection of those rays gives the mark's three-dimensional coordinates. Let S2 be the marks with newly calculated three-dimensional coordinates together with all, or preferably part, of the marks in S1.
(6) Through steps (3) to (5), S2 is obtained from S1; iterating steps (3) to (5), Sn+1 is obtained from Sn. If the number of marks in Sn equals the number in Sn+1, or their difference is below a predetermined threshold, the iteration stops. At that point the three-dimensional coordinates of all marks in Sn are known, and the modeling ends.
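The growth of S1 into S2, ..., Sn can be sketched abstractly, ignoring the geometry and keeping only the bookkeeping; in this simplification (an assumption of the sketch, not the patent's criterion) an image is usable once it shows at least 4 known markers, and a marker becomes known once it appears in at least two usable images:

```python
def propagate(images, known):
    """Fixed-point iteration over marker sets.

    images: list of sets of marker IDs visible in each picture;
    known: the initial set S1 of markers with known coordinates.
    Returns the final set Sn when it stops growing."""
    known = set(known)
    while True:
        # pictures whose pose is solvable: >= 4 known markers visible
        posed = [img for img in images if len(img & known) >= 4]
        seen = {}
        for img in posed:
            for m in img:
                seen[m] = seen.get(m, 0) + 1
        # a marker seen in >= 2 posed pictures can be triangulated
        new = known | {m for m, n in seen.items() if n >= 2}
        if new == known:          # |S_{n+1}| == |S_n|: stop iterating
            return known
        known = new
```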
In the above method, a plurality of two-dimensional images are first acquired, and then, on the basis of these images, the three-dimensional coordinates of all the positioning points are obtained by working from the known points to the unknown ones. Alternatively, two-dimensional images may be acquired as needed each time, used to obtain the three-dimensional coordinates of some positioning points, after which further two-dimensional images yield the three-dimensional coordinates of other positioning points. Correspondingly, the present invention further provides a method for determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, each positioning point having a different ID, the ID corresponding to the three-dimensional coordinates of the positioning point, and the plurality of positioning points including at least four positioning points whose three-dimensional coordinates are known in advance; as shown in fig. 6, the method comprises:
step 1: forming, by using a calibrated camera, two-dimensional images of at least four positioning points with known three-dimensional positions, or selecting two-dimensional images formed of such positioning points, wherein the positioning points form code points in the two-dimensional images, and each two-dimensional image also comprises a code point corresponding to the same positioning point with unknown three-dimensional coordinates;
step 2: extracting ID of a positioning point corresponding to each code point in the two-dimensional image and image coordinates of each code point;
step 3: acquiring the camera pose corresponding to each two-dimensional image, based on the three-dimensional coordinates of the at least four positioning points with known three-dimensional positions and the image coordinates of the code points corresponding to those positioning points;
step 4: for the code points corresponding to a same positioning point with unknown three-dimensional coordinates in two two-dimensional images with known poses, taking as the three-dimensional coordinates of that positioning point the intersection of the two connecting lines, each running from a camera's three-dimensional position to the corresponding code point in its image, and forming a correspondence between these three-dimensional coordinates and the corresponding ID.
It should be noted that, in step 1, "selecting two-dimensional images formed of positioning points whose three-dimensional positions are known" covers both taking a new two-dimensional image and selecting an existing one. Steps 1-4 may be repeated to obtain the three-dimensional coordinates of the remaining positioning points whose three-dimensional coordinates are unknown.
Accordingly, the present invention provides an apparatus for determining three-dimensional coordinates of a plurality of positioning points in a spatial environment, each positioning point having a different ID corresponding to the three-dimensional coordinates of the positioning point, the plurality of positioning points including at least four positioning points whose three-dimensional coordinates are known in advance, the apparatus comprising:
a calibrated camera, which forms a plurality of two-dimensional images of the positioning points at different positions, the positioning points forming code points in the two-dimensional images, the plurality of two-dimensional images comprising at least two two-dimensional images with determinable camera poses, and each two-dimensional image with a determinable camera pose comprising code points corresponding to at least four positioning points with known three-dimensional coordinates;
a device for extracting the ID of the positioning point corresponding to each code point in each two-dimensional image and the image coordinate of each code point;
a camera pose determination device, for finding a two-dimensional image whose camera pose is determinable, and obtaining the corresponding camera pose based on the two-dimensional coordinates, in that image, of the code points corresponding to at least four positioning points with known three-dimensional coordinates, and on those positioning points' three-dimensional coordinates; and
a positioning point three-dimensional coordinate acquisition device, configured to: for the code points corresponding to a same positioning point with unknown three-dimensional coordinates in two two-dimensional images with known poses, take as the three-dimensional coordinates of that positioning point the intersection of the two connecting lines, each running from a camera's three-dimensional position to the corresponding code point in its image, and form a correspondence between these three-dimensional coordinates and the corresponding ID.
Accordingly, the present invention provides an apparatus for determining three-dimensional coordinates of a plurality of positioning points in a spatial environment, each positioning point having a different ID corresponding to the three-dimensional coordinates of the positioning point, the plurality of positioning points including at least four positioning points whose three-dimensional coordinates are known in advance, the apparatus comprising:
a camera, which forms at least two two-dimensional images of at least four positioning points with known three-dimensional positions, the positioning points forming code points in the two-dimensional images, and each of the at least two two-dimensional images also comprising a code point corresponding to the same positioning point with unknown three-dimensional coordinates;
a device for extracting ID of a positioning point corresponding to each code point in the two-dimensional image and image coordinates of each code point;
the device for acquiring the camera poses is used for acquiring the camera poses corresponding to the two-dimensional images based on the three-dimensional coordinates of the positioning points with known at least four three-dimensional positions and the image coordinates of the code points corresponding to the positioning points with known at least four three-dimensional positions;
a positioning point three-dimensional coordinate acquisition device, configured to: for the code points corresponding to a same positioning point with unknown three-dimensional coordinates in two two-dimensional images with known poses, take as the three-dimensional coordinates of that positioning point the intersection of the two connecting lines, each running from a camera's three-dimensional position to the corresponding code point in its image, and form a correspondence between these three-dimensional coordinates and the corresponding ID.
In embodiment 2, modeling can be performed with an ordinary camera; no special equipment or specialist is needed, and the modeling accuracy is higher than that of SFM (structure from motion). However, the accuracy is lower than that of a photoelectric measuring instrument, and a global accumulated error exists.
It should be noted that, in the present invention, "obtaining the pose of the camera" may use any method known in the art, for example the solvePnP algorithm or the POSIT algorithm; any method that can obtain the camera pose at the current position from a two-dimensional-to-three-dimensional mapping relationship falls within the scope of the present invention.
Example 3
The photoelectric measuring instrument and visual measurement can also be combined.
As shown in fig. 7, the present invention provides a method for determining three-dimensional coordinates of a plurality of positioning points in a spatial environment, the method comprising:
step 1: providing a camera, a photoelectric measuring instrument and a driving device for controlling the movement of the photoelectric measuring instrument;
step 2: acquiring a two-dimensional image of a positioning point at a first position by using a camera, and determining a centroid position corresponding to the positioning point in the two-dimensional image;
step 3: shooting a laser spot with the photoelectric measuring instrument at the first position;
step 4: driving the photoelectric measuring instrument with the driving device so that the laser spot coincides with the centroid position corresponding to the positioning point in the two-dimensional image acquired at the first position; and
step 5: recording the three-dimensional coordinates of the laser spot measured by the photoelectric measuring instrument at the first position as the three-dimensional coordinates of the positioning point.
Optionally, in step 5, the three-dimensional coordinates of the positioning point and the ID form a corresponding relationship.
Optionally, the method further includes:
step 6: repeating steps 2 to 5 to obtain the three-dimensional coordinates of the remaining positioning points whose three-dimensional coordinates are unknown.
Optionally, a calibration step is further included between step 1 and step 2: calibrating the camera and the photoelectric measuring instrument so as to align the coordinate system of the camera with that of the photoelectric measuring instrument. In that case, the coordinates at which the laser measuring point maps into the two-dimensional image can be obtained by triangulation from the relative position and attitude of the camera and the laser range finder. The calibration step may, however, be omitted; the image coordinates of the laser spot can then be obtained directly from the two-dimensional image by image processing, after which the laser spot's coordinates in the two-dimensional image are adjusted toward the positioning point's coordinates in the two-dimensional image.
The following describes in detail a method for determining the three-dimensional coordinates of a plurality of positioning points in a spatial environment, the method comprising the steps of:
(1) The camera captures an image, and the center A of the mark is found in the image.
(2) The coordinate B at which the laser measuring point maps into the image is obtained by triangulation from the relative position and distance of the camera and the laser range finder. Of course, the image coordinate B of the laser spot may also be obtained directly from the image by image processing.
(3) If B is not aligned with A, the heading and pitch motors of the laser range finder are adjusted to bring B closer to A.
(4) Steps (2) and (3) are repeated until A and B coincide or their distance is below a threshold.
(5) Return to step (1) and measure the next mark; optionally, where an ID portion is present, the ID of the mark is recorded.
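The closed-loop alignment of steps (2)-(4) can be caricatured with an idealised motor response that removes a fixed fraction of the image-space error per step; the gain, threshold, and function name are invented for illustration and do not model real servo dynamics:

```python
def align(a, b, gain=0.5, threshold=0.5, max_steps=100):
    """Toy feedback loop: move the laser spot's image coordinate B toward
    the mark centre A until their distance is below `threshold`.

    The idealised 'motors' shift B by `gain` times the remaining error
    each step. Returns the final B and the number of steps taken."""
    ax, ay = a
    bx, by = b
    for step in range(max_steps):
        ex, ey = ax - bx, ay - by
        if (ex * ex + ey * ey) ** 0.5 < threshold:
            return (bx, by), step          # aligned within threshold
        bx += gain * ex                     # idealised heading correction
        by += gain * ey                     # idealised pitch correction
    return (bx, by), max_steps
```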
By repeating steps (1) to (5), a point cloud can be generated. If, however, the deployment range of the mark points is large, the measuring equipment cannot measure all marks from a single fixed position; the equipment must then be moved to the next place for measurement. Several point clouds are finally obtained, which are then stitched together algorithmically to form a model in a single coordinate system.
Correspondingly, the method may further include:
step 7: acquiring a two-dimensional image of a positioning point at a second position with the camera, and determining the centroid position corresponding to the positioning point in the two-dimensional image;
step 8: projecting a laser spot at the second position with the photoelectric measuring instrument;
step 9: driving the photoelectric measuring instrument with the driving device so that the laser spot coincides with the centroid position corresponding to the positioning point in the two-dimensional image acquired at the second position;
step 10: recording the three-dimensional coordinates of the laser spot measured by the photoelectric measuring instrument at the second position as the three-dimensional coordinates of the positioning point;
step 11: converting the coordinates of the positioning point from the coordinate system of the second position into the coordinate system of the first position, or from the coordinate system of the first position into the coordinate system of the second position.
Optionally, step 11 includes:
projecting the laser of the photoelectric measuring instrument at the second position onto at least three positioning points already calibrated at the first position, and recording their three-dimensional coordinates in the coordinate system of the second position;
and solving the conversion relation between the two coordinate systems based on the three-dimensional coordinates of the at least three positioning points in the coordinate system of the first position and their three-dimensional coordinates in the coordinate system of the second position.
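One standard way to solve this conversion relation from three or more paired anchor points is a least-squares rigid fit (the Kabsch/Procrustes method). The patent does not prescribe a particular solver, so the sketch below is only one possible implementation:

```python
import numpy as np

def solve_rigid_transform(p_first, p_second):
    """Solve R, t such that p_first ≈ R @ p_second + t, from >=3
    points measured in both station coordinate systems (Kabsch fit)."""
    P = np.asarray(p_second, dtype=float)  # coordinates in the second system
    Q = np.asarray(p_first, dtype=float)   # same points in the first system
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Any point measured at the second station can then be brought into the first station's coordinate system as `R @ p + t`.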
Accordingly, the present invention proposes an apparatus for determining three-dimensional coordinates of a plurality of positioning points in a spatial environment, said apparatus comprising:
a camera and a photoelectric measuring instrument, wherein the coordinate system of the camera is aligned with the coordinate system of the photoelectric measuring instrument;
a device for determining the centroid position corresponding to the positioning point in the two-dimensional image acquired by the camera;
a driving device that drives the photoelectric measuring instrument so that the laser spot projected by the photoelectric measuring instrument coincides with the centroid position corresponding to the positioning point in the two-dimensional image;
and a positioning-point three-dimensional coordinate acquisition device for taking the three-dimensional coordinates of the laser spot measured by the photoelectric measuring instrument as the three-dimensional coordinates of the positioning point.
Correspondingly, as shown in fig. 8 and 9, the present invention further provides an apparatus 1000 for determining three-dimensional coordinates of a positioning point in a space environment, comprising:
an optoelectronic measuring instrument 200 adapted to project a laser spot, the optoelectronic measuring instrument having a driving device 210 for driving the instrument so as to adjust the position of the projected laser spot;
a camera 300 adapted to capture a two-dimensional image of a positioning point in the spatial environment and of the laser spot projected by the optoelectronic measuring instrument, the coordinate system of the camera being aligned with the coordinate system of the optoelectronic measuring instrument; and
a control device 400 for controlling the driving device so that the laser spot projected by the optoelectronic measuring instrument coincides with the centroid position corresponding to the positioning point in the two-dimensional image.
Optionally, the optoelectronic measuring instrument includes a distance measuring sensor 220 and an angle measuring sensor 230. The distance measuring sensor 220 is adapted to project a laser spot and measure the distance between the optoelectronic measuring instrument and the laser spot, and the angle measuring sensor 230 is configured to measure the yaw angle and the pitch angle of the distance measuring sensor. The optoelectronic measuring instrument obtains the three-dimensional coordinates of the laser spot relative to the instrument from the measured distance, yaw angle and pitch angle.
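Assuming one common axis convention (x forward, z up, angles in radians; the patent does not fix a convention), the range, yaw and pitch readings convert to Cartesian coordinates by the usual spherical-to-Cartesian formulas:

```python
import numpy as np

def laser_point_xyz(distance, yaw, pitch):
    """Convert (range, yaw, pitch) from the distance and angle sensors
    into Cartesian coordinates relative to the instrument.
    Axis convention (x forward, z up) is an illustrative assumption."""
    x = distance * np.cos(pitch) * np.cos(yaw)
    y = distance * np.cos(pitch) * np.sin(yaw)
    z = distance * np.sin(pitch)
    return np.array([x, y, z])
```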
As shown in fig. 9, the present invention provides an optoelectronic measuring instrument 200, comprising:
a distance measuring sensor 220 adapted to project a laser spot and measure the distance between the photoelectric measuring instrument and the laser spot;
an angle measurement sensor 230 for measuring a yaw angle and a pitch angle of the distance measurement sensor;
and a driving device 210 for driving the distance measuring sensor to adjust the position of the laser spot.
Optionally, the angle measurement sensor is a two-axis angle measurement sensor.
Although embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims (11)

1. A method of determining three-dimensional coordinates of a plurality of location points in a spatial environment, the method comprising:
step 1: providing a camera, a photoelectric measuring instrument and a driving device for controlling the movement of the photoelectric measuring instrument;
step 2: acquiring a two-dimensional image of a positioning point at a first position by using a camera, and determining a centroid position corresponding to the positioning point in the two-dimensional image;
step 3: projecting a laser spot at the first position using the photoelectric measuring instrument;
step 4: driving the photoelectric measuring instrument by the driving device so that the laser spot coincides with the centroid position corresponding to the positioning point in the two-dimensional image obtained at the first position;
step 5: recording the three-dimensional coordinates of the laser spot measured by the photoelectric measuring instrument at the first position as the three-dimensional coordinates of the positioning point.
2. The method of claim 1, wherein:
in step 5, a correspondence is formed between the three-dimensional coordinates of the positioning point and its ID.
3. The method of claim 1 or 2, further comprising:
step 6: repeating steps 2 to 5 to obtain the three-dimensional coordinates of other positioning points whose three-dimensional coordinates are still unknown.
4. The method of claim 3, wherein:
the method further comprises a calibration step between step 1 and step 2: calibrating the camera and the photoelectric measuring instrument so that the coordinate system of the camera is aligned with the coordinate system of the photoelectric measuring instrument.
5. The method of any of claims 1-4, further comprising:
step 7: acquiring a two-dimensional image of a positioning point at a second position with the camera, and determining the centroid position corresponding to the positioning point in the two-dimensional image;
step 8: projecting a laser spot at the second position with the photoelectric measuring instrument;
step 9: driving the photoelectric measuring instrument with the driving device so that the laser spot coincides with the centroid position corresponding to the positioning point in the two-dimensional image acquired at the second position;
step 10: recording the three-dimensional coordinates of the laser spot measured by the photoelectric measuring instrument at the second position as the three-dimensional coordinates of the positioning point;
step 11: converting the coordinates of the positioning point from the coordinate system of the second position into the coordinate system of the first position, or from the coordinate system of the first position into the coordinate system of the second position.
6. The method of claim 5, wherein:
step 11 comprises:
projecting the laser of the photoelectric measuring instrument at the second position onto at least three positioning points already calibrated at the first position, and recording their three-dimensional coordinates in the coordinate system of the second position;
and solving the conversion relation between the two coordinate systems based on the three-dimensional coordinates of the at least three positioning points in the coordinate system of the first position and their three-dimensional coordinates in the coordinate system of the second position.
7. An apparatus for determining three-dimensional coordinates of a plurality of location points in a spatial environment, the apparatus comprising:
a camera and a photoelectric measuring instrument, wherein the coordinate system of the camera is aligned with the coordinate system of the photoelectric measuring instrument;
a device for determining the centroid position corresponding to the positioning point in the two-dimensional image acquired by the camera;
a driving device that drives the photoelectric measuring instrument so that the laser spot projected by the photoelectric measuring instrument coincides with the centroid position corresponding to the positioning point in the two-dimensional image;
and a positioning-point three-dimensional coordinate acquisition device for taking the three-dimensional coordinates of the laser spot measured by the photoelectric measuring instrument as the three-dimensional coordinates of the positioning point.
8. An apparatus for determining three-dimensional coordinates of a location point in a spatial environment, comprising:
a photoelectric measuring instrument adapted to project a laser spot, the photoelectric measuring instrument having a driving device for driving the photoelectric measuring instrument to adjust the position of the projected laser spot;
a camera adapted to capture a two-dimensional image of a positioning point in the spatial environment and of the laser spot projected by the photoelectric measuring instrument, the coordinate system of the camera being aligned with the coordinate system of the photoelectric measuring instrument; and
a control device for controlling the driving device so that the laser spot projected by the photoelectric measuring instrument coincides with the centroid position corresponding to the positioning point in the two-dimensional image.
9. The apparatus of claim 8, wherein:
the photoelectric measuring instrument comprises a distance measuring sensor and an angle measuring sensor, the distance measuring sensor being adapted to project a laser spot and measure the distance between the photoelectric measuring instrument and the laser spot, and the angle measuring sensor being configured to measure the yaw angle and the pitch angle of the distance measuring sensor, wherein the photoelectric measuring instrument obtains the three-dimensional coordinates of the laser spot relative to the photoelectric measuring instrument based on the measured distance, yaw angle and pitch angle.
10. An optoelectronic measurement instrument, comprising:
a distance measuring sensor adapted to project a laser spot and measure the distance between the photoelectric measuring instrument and the laser spot;
an angle measuring sensor for measuring the yaw angle and the pitch angle of the distance measuring sensor;
and a driving device for driving the distance measuring sensor to adjust the position of the projected laser spot.
11. The instrument of claim 10, wherein:
the angle measuring sensor is a two-axis angle measuring sensor.
CN202110914073.1A 2016-11-01 2016-11-01 Method and equipment for determining three-dimensional coordinates of positioning point and photoelectric measuring instrument Pending CN113884080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110914073.1A CN113884080A (en) 2016-11-01 2016-11-01 Method and equipment for determining three-dimensional coordinates of positioning point and photoelectric measuring instrument

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110914073.1A CN113884080A (en) 2016-11-01 2016-11-01 Method and equipment for determining three-dimensional coordinates of positioning point and photoelectric measuring instrument
CN201610935791.6A CN106546230B (en) 2016-11-01 2016-11-01 Positioning point arrangement method and device, and method and equipment for measuring three-dimensional coordinates of positioning points

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610935791.6A Division CN106546230B (en) 2016-11-01 2016-11-01 Positioning point arrangement method and device, and method and equipment for measuring three-dimensional coordinates of positioning points

Publications (1)

Publication Number Publication Date
CN113884080A true CN113884080A (en) 2022-01-04

Family

ID=58392274

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110914073.1A Pending CN113884080A (en) 2016-11-01 2016-11-01 Method and equipment for determining three-dimensional coordinates of positioning point and photoelectric measuring instrument
CN202110914085.4A Active CN113884081B (en) 2016-11-01 2016-11-01 Method and equipment for measuring three-dimensional coordinates of positioning point
CN201610935791.6A Active CN106546230B (en) 2016-11-01 2016-11-01 Positioning point arrangement method and device, and method and equipment for measuring three-dimensional coordinates of positioning points

Country Status (1)

Country Link
CN (3) CN113884080A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7022624B2 (en) * 2018-03-13 2022-02-18 株式会社ディスコ Positioning method
CN109269444A (en) * 2018-09-19 2019-01-25 贵州航天电子科技有限公司 A kind of servo mechanism angle calibration measurement method
CN110375721B (en) * 2019-06-13 2021-07-06 中交二航局第四工程有限公司 Method for precise three-dimensional positioning of high tower top structure
CN113240744A (en) * 2020-01-23 2021-08-10 华为技术有限公司 Image processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009053002A (en) * 2007-08-27 2009-03-12 Akinobu Morita Measuring system
JP2010032282A (en) * 2008-07-28 2010-02-12 Japan Atomic Energy Agency Method and system for measuring three-dimensional position of marker
CN102175211A (en) * 2010-12-24 2011-09-07 北京控制工程研究所 Barrier position determining method based on lattice structured light
CN103557796A (en) * 2013-11-19 2014-02-05 天津工业大学 Three-dimensional locating system and locating method based on laser ranging and computer vision

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0921640A (en) * 1995-07-07 1997-01-21 Green Syst:Kk Apparatus and method for three-dimensional measurement in tunnel or the like
JP2002341031A (en) * 2001-05-11 2002-11-27 Daiei Dream Kk Forming method of three-dimensional model and three- dimensional scanner system using laser radar
JP2003021507A (en) * 2001-07-09 2003-01-24 Mitsubishi Heavy Ind Ltd Method and device for recognizing three-dimensional shape
JP2005331383A (en) * 2004-05-20 2005-12-02 Toshiba Corp Method and device for evaluating three-dimensional coordinate position
CN101509763A (en) * 2009-03-20 2009-08-19 天津工业大学 Single order high precision large-sized object three-dimensional digitized measurement system and measurement method thereof
KR101221449B1 (en) * 2009-03-27 2013-01-11 한국전자통신연구원 Apparatus and method for calibrating image between cameras
US8943701B2 (en) * 2010-06-28 2015-02-03 Trimble Navigation Limited Automated layout and point transfer system
WO2013005244A1 (en) * 2011-07-01 2013-01-10 株式会社ベイビッグ Three-dimensional relative coordinate measuring device and method
CN102589571B (en) * 2012-01-18 2014-06-04 西安交通大学 Spatial three-dimensional vision-computing verification method
CN102788572B (en) * 2012-07-10 2015-07-01 中联重科股份有限公司 Method, device and system for measuring attitude of lifting hook of engineering machinery
CN102927908B (en) * 2012-11-06 2015-04-22 中国科学院自动化研究所 Robot eye-on-hand system structured light plane parameter calibration device and method
CN103903246A (en) * 2012-12-26 2014-07-02 株式会社理光 Object detection method and device
CN103425355B (en) * 2013-07-08 2016-09-07 狒特科技(北京)有限公司 The portable optical touch screen of a kind of omnidirectional camera structure and location calibration steps thereof
CN104424630A (en) * 2013-08-20 2015-03-18 华为技术有限公司 Three-dimension reconstruction method and device, and mobile terminal
CN103971353B (en) * 2014-05-14 2017-02-15 大连理工大学 Splicing method for measuring image data with large forgings assisted by lasers
DE102014013724A1 (en) * 2014-09-22 2016-03-24 Andreas Enders Method for staking boreholes of a vehicle
CN105987666A (en) * 2015-03-05 2016-10-05 力弘科技股份有限公司 Virtual positioning plate and building detection method with application of virtual positioning plate
CN105203023B (en) * 2015-07-10 2017-12-05 中国人民解放军信息工程大学 A kind of one-stop scaling method of vehicle-mounted three-dimensional laser scanning system placement parameter
CN105547305B (en) * 2015-12-04 2018-03-16 北京布科思科技有限公司 A kind of pose calculation method based on wireless location and laser map match
CN106052697B (en) * 2016-05-24 2017-11-14 百度在线网络技术(北京)有限公司 Unmanned vehicle, unmanned vehicle localization method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gu Fengyun et al.: "Ground-Based Three-Dimensional Laser Scanning Technology and Applications", Wuhan: Wuhan University Press, pages 86-88 *

Also Published As

Publication number Publication date
CN106546230A (en) 2017-03-29
CN106546230B (en) 2021-06-22
CN113884081A (en) 2022-01-04
CN113884081B (en) 2024-02-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination