CN111238368A - Three-dimensional scanning method and device - Google Patents


Info

Publication number
CN111238368A
CN111238368A
Authority
CN
China
Prior art keywords
point
laser
line laser
image
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010041264.7A
Other languages
Chinese (zh)
Inventor
陈海南
黄林冲
郑野风
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202010041264.7A
Publication of CN111238368A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention is applicable to the technical field of computers and provides a three-dimensional scanning method and device. The method comprises the following steps: acquiring an image of a target area through a monocular camera, extracting the pixel coordinates of feature points in the image, emitting a single-line laser to a feature point, and extracting the pixel coordinate of the laser point in the image corresponding to the feature point pixel coordinate; obtaining a direction vector from the laser point to the feature point from the feature point pixel coordinate and the laser point pixel coordinate; obtaining an attitude value of the single-line laser emitting head from the direction vector, and adjusting the attitude of the emitting head according to this value until the feature point pixel coordinate coincides with the laser point pixel coordinate; and measuring the spatial distance from the feature point to the single-line laser emitting head, and performing spatial coordinate conversion in combination with the feature point pixel coordinate to obtain the spatial coordinate of the feature point relative to the monocular camera. By combining a monocular camera with a single-line laser, the invention improves scanning precision and allows pixel-level fine scanning of the target area to be carried out flexibly and effectively.

Description

Three-dimensional scanning method and device
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a three-dimensional scanning method and device.
Background
Three-dimensional scanning is the process of mapping the three-dimensional coordinates of each point in the space of a target area. Traditional three-dimensional scanning either sweeps a region of space rapidly with the multi-line laser of a laser radar, or measures and calculates spatial point coordinates with a binocular camera or a moving monocular camera. Scanning based on a laser radar is passive: it cannot automatically identify a target object and then scan it autonomously. Identification based on a binocular or monocular camera depends on calculating the pixel disparity of feature points across different shooting angles; the calculation is complex and insufficiently stable, and the method is unsuitable for engineering sites with complex lighting conditions.
Disclosure of Invention
The invention aims to provide a three-dimensional scanning method and a three-dimensional scanning device, and aims to solve the problem of passive scanning in the prior art.
In one aspect, the present invention provides a three-dimensional scanning method, including the steps of:
acquiring an image of a target area through a monocular camera, extracting the pixel coordinates of feature points in the image, emitting a single-line laser to a feature point, and extracting the pixel coordinate of the laser point in the image corresponding to the feature point pixel coordinate;
obtaining a direction vector from the laser point to the feature point according to the feature point pixel coordinate and the laser point pixel coordinate;
obtaining an attitude value of the single-line laser emitting head according to the direction vector, and adjusting the attitude of the single-line laser emitting head according to the attitude value until the feature point pixel coordinate coincides with the laser point pixel coordinate; and
measuring the spatial distance from the feature point to the single-line laser emitting head, and performing spatial coordinate conversion in combination with the feature point pixel coordinate to obtain the spatial coordinate of the feature point relative to the monocular camera.
In another aspect, the present invention provides a three-dimensional scanning apparatus, the apparatus comprising:
an extraction unit for acquiring an image of a target area through a monocular camera, extracting the pixel coordinates of feature points in the image, emitting a single-line laser to a feature point, and extracting the pixel coordinate of the laser point in the image corresponding to the feature point pixel coordinate;
a vector calculation unit for obtaining a direction vector from the laser point to the feature point according to the feature point pixel coordinate and the laser point pixel coordinate;
an attitude adjusting unit for obtaining an attitude value of the single-line laser emitting head according to the direction vector, and adjusting the attitude of the single-line laser emitting head according to the attitude value until the feature point pixel coordinate coincides with the laser point pixel coordinate; and
a coordinate conversion unit for measuring the spatial distance from the feature point to the single-line laser emitting head, and performing spatial coordinate conversion in combination with the feature point pixel coordinate to obtain the spatial coordinate of the feature point relative to the monocular camera.
According to the embodiment of the invention, the monocular camera and the single-line laser are combined: the single-line laser is emitted to each feature point in the target area image acquired by the monocular camera, and the spatial distance of the feature point is measured to obtain its spatial coordinate. No multi-angle shooting and pixel-disparity calculation for the feature points is needed, which avoids the insufficient data accuracy caused by complex calculation; at the same time, scanning measurement can be carried out with pixel accuracy and the measurement position can be actively controlled, realizing fine scanning of a given area.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or the prior-art descriptions are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an implementation of a three-dimensional scanning method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a three-dimensional scanning apparatus according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
the first embodiment is as follows:
Fig. 1 shows the implementation flow of the three-dimensional scanning method provided by the first embodiment of the present invention. For convenience of description, only the parts related to the first embodiment are shown, detailed as follows:
in step S101, an image of a target area is acquired by a monocular camera, feature point pixel coordinates in the image are extracted, a single line laser is emitted to the feature point, and laser point pixel coordinates of the single line laser in the image corresponding to the feature point pixel coordinates are extracted.
In the embodiment of the invention, the monocular camera captures a real-time picture, acquiring a real-time image of the target area and outputting it as an image sequence; the image acquired at time t forms the image data set F_t, an RGB image with resolution w × h, where w is the width and h the height. The pixel coordinates of the feature points in the image are extracted by a feature extraction algorithm (the ORB algorithm, Oriented FAST and Rotated BRIEF): P = {(x_i, y_i) | x_i ∈ [0, w], y_i ∈ [0, h], i ≤ n}, where n is the number of feature points. For each feature point pixel coordinate (x_i, y_i), after the single-line laser is emitted, the pixel coordinate (x_i', y_i') of the red single-line laser dot corresponding to the feature point is extracted from the image by an image target detection algorithm.
Further, an image of the target area is acquired by the monocular camera, and the feature point pixel coordinates (x_i, y_i), i ≤ n, in the image are extracted, where n is the number of feature points;
the attitude value of the single-line laser emitting head is initialized to w_h = 0, w_v = 0, where w_h is the rotation angle of the single-line laser emitting head in the horizontal direction and w_v is its rotation angle in the vertical direction;
the single-line laser is emitted to the feature point, and the laser point pixel coordinate (x_i', y_i') in the image is extracted by a target detection algorithm.
In the embodiment of the invention, the monocular camera captures a real-time picture, acquiring a real-time image of the target area, and the feature point pixel coordinates (x_i, y_i), i ≤ n, in the image are extracted by a feature extraction algorithm, where n is the number of feature points. Only the pixels of the target area of interest are extracted and irrelevant areas are filtered out, which improves scanning efficiency and, especially for areas to be scanned with a large spatial range, avoids dwelling for a long time. After the feature point pixel coordinates are extracted, the single-line laser is emitted for each feature point pixel coordinate in turn. First the single-line laser emitting head is initialized, setting its attitude value to w_h = 0, w_v = 0, where w_h is the rotation angle of the single-line laser emitting head in the horizontal direction and w_v its rotation angle in the vertical direction; the single-line laser is then emitted to the feature point, and the laser point pixel coordinate (x_i', y_i') in the image is extracted by a target detection algorithm.
When the single-line laser is emitted to each feature point one by one, the single-line laser emitting head is initialized according to the pixel coordinate of each feature point, so that the feature points are processed sequentially, repeated calculation is reduced, and fusion efficiency is improved.
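As a concrete illustration of the laser-point extraction in step S101, the sketch below finds the pixel coordinate of the red laser dot in an RGB frame. It is a minimal stand-in, not the patent's actual detector: it assumes the single-line laser dot is the most strongly red pixel in the frame (the patent's feature extraction uses ORB and an image target detection algorithm, which are not reproduced here).

```python
import numpy as np

def detect_laser_dot(image_rgb):
    """Return the (x, y) pixel coordinate of the brightest red dot.

    Minimal stand-in for the patent's 'image target detection
    algorithm': score each pixel by how strongly red it is
    (R minus the mean of G and B) and take the maximum.
    """
    img = image_rgb.astype(np.float32)
    redness = img[:, :, 0] - 0.5 * (img[:, :, 1] + img[:, :, 2])
    row, col = np.unravel_index(np.argmax(redness), redness.shape)
    return int(col), int(row)  # (x, y) = (column, row) pixel coordinate
```

In practice a thresholded blob detector or a trained detector would replace the single-pixel maximum, but the interface, an image in and a laser point pixel coordinate (x_i', y_i') out, is the same.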
In step S102, a direction vector from the laser spot to the feature point is obtained according to the feature point pixel coordinate and the laser spot pixel coordinate.
In the embodiment of the invention, when the single-line laser is emitted to each feature point one by one, the single-line laser emitting head is initialized according to the pixel coordinate of each feature point, the laser point pixel coordinate in the image is extracted by a target detection algorithm, and the attitude value of the single-line laser emitting head is continuously adjusted until the feature point pixel coordinate coincides with the laser point pixel coordinate. For the feature point pixel coordinate (x_i, y_i), the single-line laser is emitted and the corresponding laser point pixel coordinate (x_i', y_i') is found in the image of the target area; the direction vector from the laser point to the feature point is then calculated as d_i = (x_i - x_i', y_i - y_i'), from which the positional relationship between the laser point and the feature point in the image of the target area is known.
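The direction vector of step S102 and the coincidence test can be sketched directly in the notation above; the tolerance parameter eps is an illustrative addition, not part of the patent:

```python
def direction_vector(feature_px, laser_px):
    """Direction vector d_i from the laser point to the feature point:
    d_i = (x_i - x_i', y_i - y_i'), in pixel coordinates."""
    return (feature_px[0] - laser_px[0], feature_px[1] - laser_px[1])

def coincides(feature_px, laser_px, eps=0):
    """True when the laser point pixel coordinate has been driven onto
    the feature point pixel coordinate (within eps pixels)."""
    dx, dy = direction_vector(feature_px, laser_px)
    return abs(dx) <= eps and abs(dy) <= eps
```

The sign convention matters: a positive dx means the laser dot must move toward larger x in the image, which step S103 translates into a horizontal rotation of the emitting head.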
In step S103, an attitude value of the single-line laser emitting head is obtained according to the direction vector, and the attitude of the single-line laser emitting head is adjusted according to the attitude value until the feature point pixel coordinate coincides with the laser point pixel coordinate.
In the embodiment of the invention, the positional relationship between the laser point and the feature point in the image of the target area is obtained by calculating the direction vector from the laser point to the feature point; the attitude value of the single-line laser emitting head is then calculated from this direction vector and the attitude of the emitting head is adjusted accordingly, the feature point pixel coordinate being brought into coincidence with the laser point pixel coordinate in the process of continuous updating and adjustment.
Further, according to the direction vector, the attitude value of the single-line laser emitting head is obtained as
w_h = a · (x_i - x_i'), w_v = b · (y_i - y_i'),
where w_h is the rotation angle of the single-line laser emitting head in the horizontal direction, w_v is its rotation angle in the vertical direction, and a and b are the radian conversion coefficients from unit pixel to spherical coordinates.
Specifically, the positional relationship between the laser point and the feature point in the image of the target area is known from the direction vector from the laser point to the feature point, and from this direction vector the attitude value of the single-line laser emitting head is calculated as w_h = a · (x_i - x_i'), w_v = b · (y_i - y_i'), where w_h is the rotation angle of the single-line laser emitting head in the horizontal direction, w_v is its rotation angle in the vertical direction, and a and b are the radian conversion coefficients from unit pixel to spherical coordinates, determined by calibration of the monocular camera.
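Steps S102 and S103 together form a closed loop: compute the direction vector, convert it to an attitude value with the radian conversion coefficients a and b, re-detect the laser point, and repeat until coincidence. The sketch below simulates this loop; the linear projection model inside laser_px is an assumption made only so the loop can run, since the real system would re-image the scene after each adjustment:

```python
def adjust_until_coincident(feature_px, a, b, max_iters=50):
    """Drive the emitting-head attitude (w_h, w_v) until the laser dot
    lands on the feature point.  w_h/w_v are in radians; a and b are
    the radian-per-pixel conversion coefficients from calibration."""
    w_h, w_v = 0.0, 0.0  # initialize the emitting-head attitude

    def laser_px(w_h, w_v):
        # Assumed projection model: the laser dot moves 1/a pixels
        # horizontally per radian of w_h, and 1/b vertically per w_v.
        return (w_h / a, w_v / b)

    for _ in range(max_iters):
        lx, ly = laser_px(w_h, w_v)
        dx = feature_px[0] - lx        # direction vector component x
        dy = feature_px[1] - ly        # direction vector component y
        if abs(dx) < 0.5 and abs(dy) < 0.5:  # coincident within a pixel
            return w_h, w_v
        w_h += a * dx   # attitude correction w_h = a * (x_i - x_i')
        w_v += b * dy   # attitude correction w_v = b * (y_i - y_i')
    raise RuntimeError("attitude adjustment did not converge")
```

Under the assumed linear model the loop converges in a single correction; with a real pan-tilt head and re-imaging, several iterations would typically be needed.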
In step S104, the spatial distance from the feature point to the single line laser emitting head is measured, and spatial coordinate conversion is performed in combination with the feature point pixel coordinates to obtain the spatial coordinates of the feature point corresponding to the monocular camera.
In the embodiment of the invention, by continuously updating the attitude value of the single-line laser emitting head, the laser point pixel coordinate (x_i', y_i') found in the image of the target area satisfies (x_i', y_i') = (x_i, y_i), indicating that the feature point pixel coordinate coincides with the laser point pixel coordinate, i.e. the single-line laser hits the current feature point. The spatial distance from the feature point to the single-line laser emitting head is then measured. All the feature points in the image of the target area are scanned one by one to obtain feature point three-dimensional data comprising the feature point pixel coordinates and the spatial distances from the feature points to the single-line laser emitting head; spatial coordinate conversion of this data yields the spatial coordinates of the feature points relative to the monocular camera, and the set of spatial coordinates of all feature points in the image of the target area forms a three-dimensional scan of the target area.
Further, the spatial distance D_i from the feature point to the single-line laser emitting head is measured and combined with the feature point pixel coordinate to obtain the feature point three-dimensional data (x_i, y_i, D_i);
for the feature point three-dimensional data (x_i, y_i, D_i), spatial coordinate conversion is performed according to the formula
X_i = x_i · D_i / f, Y_i = y_i · D_i / f,
obtaining the spatial coordinate (X_i, Y_i, D_i) of the feature point relative to the monocular camera, where f is the focal length of the monocular camera.
In particular, the laser point pixel coordinate (x_i', y_i') found in the image of the target area satisfies (x_i', y_i') = (x_i, y_i), indicating that the feature point pixel coordinate coincides with the laser point pixel coordinate, and the spatial distance D_i from the feature point to the single-line laser emitting head is measured by laser ranging. Laser ranging measures the distance of a target accurately with laser: the single-line laser emitting head emits a thin laser beam to the feature point, a photoelectric element receives the laser beam reflected by the feature point, a timer measures the time from emission to reception of the laser beam, and the distance from the emitting point to the feature point is calculated. All the feature points in the image of the target area are scanned one by one to obtain the feature point three-dimensional data (x_i, y_i, D_i) comprising the feature point pixel coordinates and the spatial distances from the feature points to the single-line laser emitting head; for the feature point three-dimensional data (x_i, y_i, D_i), spatial coordinate conversion is performed according to the formula X_i = x_i · D_i / f, Y_i = y_i · D_i / f, obtaining the spatial coordinate (X_i, Y_i, D_i) of the feature point relative to the monocular camera, where f is the focal length of the monocular camera. The set of spatial coordinates of all feature points in the image of the target area forms a three-dimensional scan of the target area.
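The ranging and conversion of step S104 can be sketched as follows. The time-of-flight relation D_i = c · t / 2 follows from the emission/reception timing described above; in the coordinate conversion, the focal length f is assumed here to be expressed in pixels so that the units work out, which the patent does not state explicitly:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    """Spatial distance D_i from the emitting head to the feature point:
    the beam travels out and back, so D_i = c * t / 2."""
    return C * round_trip_seconds / 2.0

def to_camera_coords(x_i, y_i, d_i, f):
    """Spatial coordinate conversion of step S104:
    X_i = x_i * D_i / f,  Y_i = y_i * D_i / f,
    giving the feature point's spatial coordinate (X_i, Y_i, D_i)
    relative to the monocular camera (f assumed to be in pixels)."""
    return (x_i * d_i / f, y_i * d_i / f, d_i)
```

Applying to_camera_coords to every (x_i, y_i, D_i) triple yields the point set that forms the three-dimensional scan of the target area.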
In the embodiment of the invention, the monocular camera is combined with the single-line laser, and the single-line laser is emitted to the feature points acquired by the monocular camera. The single-line laser emitting head can rotate freely in the horizontal and vertical directions relative to the monocular camera, so the single-line laser emission point can be placed at any position in the target area image acquired by the monocular camera; at the same time, by continuously adjusting the attitude of the single-line laser emitting head for each feature point in the target area image, the coincidence of the single-line laser emission point with the feature point is ensured. The spatial distance from the feature point to the single-line laser emitting head is then obtained by laser ranging, giving feature point three-dimensional data comprising the feature point pixel coordinate and this spatial distance, and the spatial coordinate of the feature point relative to the monocular camera is obtained after spatial coordinate conversion. By combining the monocular camera with the single-line laser, their relative positions are calculated in real time, guaranteeing to the maximum extent the independence and mutual non-interference of the single-line laser and the monocular camera. Compared with multi-line laser equipment, single-line laser control is fine and flexible and the cost is low, making the method suitable for long-duration, low-frequency continuous monitoring of a specific area or target.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc.
Example two:
fig. 2 is a schematic structural diagram of a three-dimensional scanning apparatus according to a second embodiment of the present invention, and only the parts related to the second embodiment of the present invention are shown for convenience of illustration. In an embodiment of the present invention, a three-dimensional scanning apparatus includes: an extraction unit 21, a vector calculation unit 22, an attitude adjustment unit 23, and a coordinate conversion unit 24, wherein:
the extraction unit 21 is configured to acquire an image of a target area through a monocular camera, extract a feature point pixel coordinate in the image, emit single line laser to the feature point, and extract a laser point pixel coordinate in the image, where the single line laser corresponds to the feature point pixel coordinate.
In the embodiment of the invention, the monocular camera captures a real-time picture, acquiring a real-time image of the target area and outputting it as an image sequence; the image acquired at time t forms the image data set F_t, an RGB image with resolution w × h, where w is the width and h the height. The pixel coordinates of the feature points in the image are extracted by a feature extraction algorithm (the ORB algorithm, Oriented FAST and Rotated BRIEF): P = {(x_i, y_i) | x_i ∈ [0, w], y_i ∈ [0, h], i ≤ n}, where n is the number of feature points. For each feature point pixel coordinate (x_i, y_i), after the single-line laser is emitted, the pixel coordinate (x_i', y_i') of the red single-line laser dot corresponding to the feature point is extracted from the image by an image target detection algorithm.
Further, the extraction unit 21 includes:
a first extraction unit for acquiring an image of the target area by the monocular camera and extracting the feature point pixel coordinates (x_i, y_i), i ≤ n, in the image, where n is the number of feature points;
an attitude value initialization unit for initializing the attitude value of the single-line laser emitting head to w_h = 0, w_v = 0, where w_h is the rotation angle of the single-line laser emitting head in the horizontal direction and w_v is its rotation angle in the vertical direction; and
a second extraction unit for emitting the single-line laser to the feature point and extracting the laser point pixel coordinate (x_i', y_i') in the image by a target detection algorithm.
In the embodiment of the invention, the monocular camera captures a real-time picture, acquiring a real-time image of the target area, and the feature point pixel coordinates (x_i, y_i), i ≤ n, in the image are extracted by a feature extraction algorithm, where n is the number of feature points. Only the pixels of the target area of interest are extracted and irrelevant areas are filtered out, which improves scanning efficiency and, especially for areas to be scanned with a large spatial range, avoids dwelling for a long time. After the feature point pixel coordinates are extracted, the single-line laser is emitted for each feature point pixel coordinate in turn. First the single-line laser emitting head is initialized, setting its attitude value to w_h = 0, w_v = 0, where w_h is the rotation angle of the single-line laser emitting head in the horizontal direction and w_v its rotation angle in the vertical direction; the single-line laser is then emitted to the feature point, and the laser point pixel coordinate (x_i', y_i') in the image is extracted by a target detection algorithm.
When the single-line laser is emitted to each feature point one by one, the single-line laser emitting head is initialized according to the pixel coordinate of each feature point, so that the feature points are processed sequentially, repeated calculation is reduced, and fusion efficiency is improved.
And the vector calculation unit 22 is configured to obtain a direction vector from the laser point to the feature point according to the feature point pixel coordinate and the laser point pixel coordinate.
In the embodiment of the invention, when the single-line laser is emitted to each feature point one by one, the single-line laser emitting head is initialized according to the pixel coordinate of each feature point, the laser point pixel coordinate in the image is extracted by a target detection algorithm, and the attitude value of the single-line laser emitting head is continuously adjusted until the feature point pixel coordinate coincides with the laser point pixel coordinate. For the feature point pixel coordinate (x_i, y_i), the single-line laser is emitted and the corresponding laser point pixel coordinate (x_i', y_i') is found in the image of the target area; the direction vector from the laser point to the feature point is then calculated as d_i = (x_i - x_i', y_i - y_i'), from which the positional relationship between the laser point and the feature point in the image of the target area is known.
And the attitude adjusting unit 23 is configured to obtain an attitude value of the single-line laser emitting head according to the direction vector, and adjust the attitude of the single-line laser emitting head according to the attitude value until the feature point pixel coordinate coincides with the laser point pixel coordinate.
In the embodiment of the invention, the positional relationship between the laser point and the feature point in the image of the target area is obtained by calculating the direction vector from the laser point to the feature point; the attitude value of the single-line laser emitting head is then calculated from this direction vector and the attitude of the emitting head is adjusted accordingly, the feature point pixel coordinate being brought into coincidence with the laser point pixel coordinate in the process of continuous updating and adjustment.
Further, the posture adjustment unit 23 includes:
an attitude value calculating unit for obtaining, according to the direction vector, the attitude value of the single-line laser emitting head as
w_h = a · (x_i - x_i'), w_v = b · (y_i - y_i'),
where w_h is the rotation angle of the single-line laser emitting head in the horizontal direction, w_v is its rotation angle in the vertical direction, and a and b are the radian conversion coefficients from unit pixel to spherical coordinates.
Specifically, the positional relationship between the laser point and the feature point in the image of the target area is known from the direction vector from the laser point to the feature point, and from this direction vector the attitude value of the single-line laser emitting head is calculated as w_h = a · (x_i - x_i'), w_v = b · (y_i - y_i'), where w_h is the rotation angle of the single-line laser emitting head in the horizontal direction, w_v is its rotation angle in the vertical direction, and a and b are the radian conversion coefficients from unit pixel to spherical coordinates, determined by calibration of the monocular camera.
And the coordinate conversion unit 24 is used for measuring the spatial distance from the feature point to the single-line laser emitting head, and performing spatial coordinate conversion in combination with the feature point pixel coordinate to obtain the spatial coordinate of the feature point relative to the monocular camera.
In the embodiment of the invention, by continuously updating the attitude value of the single-line laser emitting head, the laser point pixel coordinate (x_i', y_i') found in the image of the target area satisfies (x_i', y_i') = (x_i, y_i), indicating that the feature point pixel coordinate coincides with the laser point pixel coordinate, i.e. the single-line laser hits the current feature point. The spatial distance from the feature point to the single-line laser emitting head is then measured. All the feature points in the image of the target area are scanned one by one to obtain feature point three-dimensional data comprising the feature point pixel coordinates and the spatial distances from the feature points to the single-line laser emitting head; spatial coordinate conversion of this data yields the spatial coordinates of the feature points relative to the monocular camera, and the set of spatial coordinates of all feature points in the image of the target area forms a three-dimensional scan of the target area.
Further, the coordinate conversion unit includes:
a three-dimensional data acquisition unit for measuring the spatial distance D_i from the feature point to the single-line laser emitting head and combining it with the feature point pixel coordinate to obtain the feature point three-dimensional data (x_i, y_i, D_i); and
a coordinate conversion subunit for performing, for the feature point three-dimensional data (x_i, y_i, D_i), spatial coordinate conversion according to the formula
X_i = x_i · D_i / f, Y_i = y_i · D_i / f,
obtaining the spatial coordinate (X_i, Y_i, D_i) of the feature point relative to the monocular camera, where f is the focal length of the monocular camera.
In particular, the laser point pixel coordinate (x_i^L, y_i^L) is found in the image of the target area, and the attitude of the single-line laser emitting head is adjusted until (x_i^L, y_i^L) = (x_i, y_i), indicating that the feature point pixel coordinate coincides with the laser point pixel coordinate. The spatial distance D_i from the feature point to the single-line laser emitting head is then measured by laser ranging, which uses laser light to measure the distance to a target accurately: the single-line laser emitting head emits a thin laser beam towards the target feature point, a photoelectric element receives the beam reflected by the feature point, a timer measures the time from emission to reception, and the distance from the emitting point to the target feature point is calculated from that round-trip time. All feature points in the image of the target area are scanned one by one to obtain feature point three-dimensional data (x_i, y_i, D_i) comprising the feature point pixel coordinates and the spatial distance from each feature point to the single-line laser emitting head. Spatial coordinate conversion is performed on the data (x_i, y_i, D_i) according to the formula

X_i = x_i · D_i / f,  Y_i = y_i · D_i / f,

to obtain the spatial coordinate (X_i, Y_i, D_i) of the feature point corresponding to the monocular camera, wherein f is the focal length of the monocular camera. The set of spatial coordinates of all feature points in the image of the target area forms the three-dimensional scanning image of the target area.
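The time-of-flight ranging step described above reduces to halving the measured round-trip time and multiplying by the speed of light. A minimal sketch (a real rangefinder would also calibrate out electronics delay and atmospheric effects):

```python
# Pulse time-of-flight ranging: the beam travels to the feature point and
# back, so the one-way distance is c * t / 2.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second, in vacuum


def tof_distance(round_trip_seconds: float) -> float:
    """Distance in metres from the emitting point to the target feature point."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```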
In the embodiment of the invention, the monocular camera is combined with a single-line laser emitted towards the feature points acquired by the camera. The single-line laser emitting head can rotate freely in the horizontal and vertical directions relative to the monocular camera, so that the single-line laser point can be placed at any position in the target area image acquired by the camera. For each feature point in the target area image, the attitude of the single-line laser emitting head is continuously adjusted until the single-line laser point coincides with the feature point; the spatial distance from the feature point to the single-line laser emitting head is then obtained by laser ranging, yielding feature point three-dimensional data comprising the feature point pixel coordinates and that spatial distance, which spatial coordinate conversion turns into the spatial coordinates of the feature point corresponding to the monocular camera. Combining the monocular camera with the single-line laser allows their relative positions to be computed in real time while ensuring, to the greatest extent, that the laser and the camera operate independently without interfering with each other. Compared with multi-line laser equipment, single-line laser control is fine and flexible and the hardware is inexpensive, making the method suitable for long-duration, low-rate continuous monitoring of a specific area or target.
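The spatial coordinate conversion used throughout can be written out under a pinhole-camera reading of the claimed formula, X_i = x_i·D_i/f and Y_i = y_i·D_i/f, with pixel coordinates measured from the principal point. Since the patent's formula images are not reproduced in this text, this reconstruction is an assumption, and the function names are illustrative.

```python
# Sketch of the conversion from feature-point data (x_i, y_i, D_i) to the
# camera-frame spatial coordinate (X_i, Y_i, D_i), assuming the pinhole
# reading X_i = x_i * D_i / f, Y_i = y_i * D_i / f with f in pixels.

def to_camera_coords(x: float, y: float, d: float, f: float) -> tuple:
    """Convert one feature point's pixel coordinate and measured distance."""
    return (x * d / f, y * d / f, d)


def scan_to_point_cloud(points, f):
    """Apply the conversion to every scanned feature point; the resulting
    set of spatial coordinates is the three-dimensional scanning image."""
    return [to_camera_coords(x, y, d, f) for (x, y, d) in points]
```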
In the embodiment of the present invention, each unit of the three-dimensional scanning device may be implemented by a corresponding hardware or software unit, and each unit may be an independent software or hardware unit, or may be integrated into a software or hardware unit, which is not limited herein. For the implementation of each unit of the apparatus, reference may be made to the description of the first embodiment, which is not repeated herein.
The above-mentioned embodiments are only specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications, substitutions and improvements within the technical scope of the present invention, and these modifications, substitutions and improvements should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of three-dimensional scanning, the method comprising the steps of:
acquiring an image of a target area through a monocular camera, extracting feature point pixel coordinates in the image, emitting a single-line laser to the feature point, and extracting the laser point pixel coordinate of the single-line laser in the image corresponding to the feature point pixel coordinate;
obtaining a direction vector from the laser point to the characteristic point according to the pixel coordinate of the characteristic point and the pixel coordinate of the laser point;
obtaining an attitude value of the single-line laser emitting head according to the direction vector, and adjusting the attitude of the single-line laser emitting head according to the attitude value until the feature point pixel coordinate coincides with the laser point pixel coordinate;
and measuring the spatial distance from the characteristic point to the single-line laser emission head, and performing spatial coordinate conversion by combining the pixel coordinates of the characteristic point to obtain the spatial coordinate of the characteristic point corresponding to the monocular camera.
2. The method of claim 1, wherein the step of acquiring an image of a target area by a monocular camera, extracting feature point pixel coordinates in the image, emitting a single-line laser to the feature point, and extracting laser point pixel coordinates of the single-line laser in the image corresponding to the feature point pixel coordinates comprises:
acquiring an image of a target area through a monocular camera, and extracting feature point pixel coordinates (x_i, y_i), i ≤ n, in the image, wherein n is the number of feature points;
initializing the attitude value of the single-line laser emitting head to w_h = 0, w_v = 0, wherein w_h is the horizontal rotation angle of the single-line laser emitting head and w_v is the vertical rotation angle of the single-line laser emitting head;
emitting the single-line laser to the feature points, and extracting the laser point pixel coordinates (x_i^L, y_i^L) in the image through a target detection algorithm.
3. The method of claim 2, wherein the step of obtaining the direction vector from the laser spot to the feature point according to the pixel coordinates of the feature point and the pixel coordinates of the laser spot comprises:
the direction vector from the laser point to the feature point being v_i = (x_i − x_i^L, y_i − y_i^L).
4. The method of claim 3, wherein obtaining an attitude value of the single-line laser emitting head based on the direction vector, and adjusting the attitude of the single-line laser emitting head based on the attitude value until the feature point pixel coordinate coincides with the laser point pixel coordinate, comprises:
obtaining the attitude value of the single-line laser emitting head according to the direction vector:

w_h = a · (x_i − x_i^L),  w_v = b · (y_i − y_i^L),

wherein w_h is the horizontal rotation angle of the single-line laser emitting head, w_v is the vertical rotation angle of the single-line laser emitting head, and a and b are coefficients that convert unit pixels into radians in spherical coordinates.
5. The method of claim 4, wherein the step of measuring the spatial distance between the feature point and the single line laser emitting head and performing spatial coordinate transformation by combining the pixel coordinates of the feature point to obtain the spatial coordinates of the feature point corresponding to the monocular camera comprises:
measuring the spatial distance D_i from the feature point to the single-line laser emitting head, and combining the feature point pixel coordinates to obtain the feature point three-dimensional data (x_i, y_i, D_i);

performing spatial coordinate conversion on the feature point three-dimensional data (x_i, y_i, D_i) according to the formula

X_i = x_i · D_i / f,  Y_i = y_i · D_i / f,

to obtain the spatial coordinate (X_i, Y_i, D_i) of the feature point corresponding to the monocular camera, wherein f is the focal length of the monocular camera.
6. A three-dimensional scanning apparatus, characterized in that the apparatus comprises:
the extraction unit is used for acquiring an image of a target area through a monocular camera, extracting feature point pixel coordinates in the image, emitting a single-line laser to the feature point, and extracting the laser point pixel coordinate of the single-line laser in the image corresponding to the feature point pixel coordinate;
the vector calculation unit is used for obtaining a direction vector from the laser point to the characteristic point according to the pixel coordinate of the characteristic point and the pixel coordinate of the laser point;
the attitude adjusting unit is used for obtaining an attitude value of the single-line laser emitting head according to the direction vector, and adjusting the attitude of the single-line laser emitting head according to the attitude value until the feature point pixel coordinate coincides with the laser point pixel coordinate;
and the coordinate conversion unit is used for measuring the spatial distance from the characteristic point to the single-line laser emission head, and carrying out spatial coordinate conversion by combining the pixel coordinates of the characteristic point to obtain the spatial coordinates of the characteristic point corresponding to the monocular camera.
7. The apparatus of claim 6, wherein the extraction unit comprises:
a first extraction unit, for acquiring an image of a target region through a monocular camera, and extracting feature point pixel coordinates (x_i, y_i), i ≤ n, in the image, wherein n is the number of feature points;
an attitude value initialization unit, for initializing the attitude value of the single-line laser emitting head to w_h = 0, w_v = 0, wherein w_h is the horizontal rotation angle of the single-line laser emitting head and w_v is the vertical rotation angle of the single-line laser emitting head;
a second extraction unit, for emitting the single-line laser to the feature points, and extracting the laser point pixel coordinates (x_i^L, y_i^L) in the image through a target detection algorithm.
8. The apparatus of claim 7, wherein the vector calculation unit comprises:
the direction vector from the laser point to the feature point being v_i = (x_i − x_i^L, y_i − y_i^L).
9. The apparatus of claim 8, wherein the attitude adjustment unit comprises:
an attitude value calculation unit, for obtaining the attitude value of the single-line laser emitting head according to the direction vector:

w_h = a · (x_i − x_i^L),  w_v = b · (y_i − y_i^L),

wherein w_h is the horizontal rotation angle of the single-line laser emitting head, w_v is the vertical rotation angle of the single-line laser emitting head, and a and b are coefficients that convert unit pixels into radians in spherical coordinates.
10. The apparatus of claim 9, wherein the coordinate conversion unit comprises:
a three-dimensional data acquisition unit, for measuring the spatial distance D_i from the feature point to the single-line laser emitting head, and combining the feature point pixel coordinates to obtain the feature point three-dimensional data (x_i, y_i, D_i); and

a coordinate conversion subunit, for performing spatial coordinate conversion on the feature point three-dimensional data (x_i, y_i, D_i) according to the formula

X_i = x_i · D_i / f,  Y_i = y_i · D_i / f,

to obtain the spatial coordinate (X_i, Y_i, D_i) of the feature point corresponding to the monocular camera, wherein f is the focal length of the monocular camera.
CN202010041264.7A 2020-01-15 2020-01-15 Three-dimensional scanning method and device Pending CN111238368A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010041264.7A CN111238368A (en) 2020-01-15 2020-01-15 Three-dimensional scanning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010041264.7A CN111238368A (en) 2020-01-15 2020-01-15 Three-dimensional scanning method and device

Publications (1)

Publication Number Publication Date
CN111238368A true CN111238368A (en) 2020-06-05

Family

ID=70862591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010041264.7A Pending CN111238368A (en) 2020-01-15 2020-01-15 Three-dimensional scanning method and device

Country Status (1)

Country Link
CN (1) CN111238368A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112325795A (en) * 2020-10-16 2021-02-05 华中科技大学鄂州工业技术研究院 Three-dimensional target flight time measuring method, system and device based on machine vision guidance
CN112731343A (en) * 2020-12-18 2021-04-30 福建汇川物联网技术科技股份有限公司 Target measuring method and device of measuring camera
CN112833784A (en) * 2021-01-04 2021-05-25 中铁四局集团有限公司 Steel rail positioning method combining monocular camera with laser scanning
CN113050113A (en) * 2021-03-10 2021-06-29 广州南方卫星导航仪器有限公司 Laser point positioning method and device
CN113420732A (en) * 2021-08-23 2021-09-21 深圳市城市交通规划设计研究中心股份有限公司 Pavement disease detection method and device and storage medium
CN113743237A (en) * 2021-08-11 2021-12-03 北京奇艺世纪科技有限公司 Follow-up action accuracy determination method and device, electronic device and storage medium
WO2021258251A1 (en) * 2020-06-22 2021-12-30 深圳市大疆创新科技有限公司 Surveying and mapping method for movable platform, and movable platform and storage medium
CN115488883A (en) * 2022-09-06 2022-12-20 群青华创(北京)智能科技有限公司 Robot hand-eye calibration method, device and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101819036A (en) * 2009-11-25 2010-09-01 煤炭科学研究总院太原研究院 Method for automatically measuring special position of tunnel boring machine
CN102749094A (en) * 2012-04-23 2012-10-24 北京信息科技大学 System and method for extra large gear at-position posture adjustment
JP2013050352A (en) * 2011-08-30 2013-03-14 Ricoh Co Ltd Method of adjusting installation of stereo camera, and stereo camera
JP2013122434A (en) * 2011-12-12 2013-06-20 Itt:Kk Three-dimensional shape position measuring device by monocular camera using laser, method for measuring three-dimensional shape position, and three-dimensional shape position measuring program
CN103557796A (en) * 2013-11-19 2014-02-05 天津工业大学 Three-dimensional locating system and locating method based on laser ranging and computer vision
CN204430355U (en) * 2015-01-27 2015-07-01 南京航空航天大学 Boring method based on generating laser vows alignment system
CN108981672A (en) * 2018-07-19 2018-12-11 华南师范大学 Hatch door real-time location method based on monocular robot in conjunction with distance measuring sensor
CN110017810A (en) * 2019-05-16 2019-07-16 湖北工业大学 A kind of photoelectrical position sensor and monocular vision assembled gesture measuring system and method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101819036A (en) * 2009-11-25 2010-09-01 煤炭科学研究总院太原研究院 Method for automatically measuring special position of tunnel boring machine
JP2013050352A (en) * 2011-08-30 2013-03-14 Ricoh Co Ltd Method of adjusting installation of stereo camera, and stereo camera
JP2013122434A (en) * 2011-12-12 2013-06-20 Itt:Kk Three-dimensional shape position measuring device by monocular camera using laser, method for measuring three-dimensional shape position, and three-dimensional shape position measuring program
CN102749094A (en) * 2012-04-23 2012-10-24 北京信息科技大学 System and method for extra large gear at-position posture adjustment
CN103557796A (en) * 2013-11-19 2014-02-05 天津工业大学 Three-dimensional locating system and locating method based on laser ranging and computer vision
CN204430355U (en) * 2015-01-27 2015-07-01 南京航空航天大学 Boring method based on generating laser vows alignment system
CN108981672A (en) * 2018-07-19 2018-12-11 华南师范大学 Hatch door real-time location method based on monocular robot in conjunction with distance measuring sensor
CN110017810A (en) * 2019-05-16 2019-07-16 湖北工业大学 A kind of photoelectrical position sensor and monocular vision assembled gesture measuring system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Xiaorong et al.: "Real-time three-dimensional scene reconstruction based on monocular SLAM", 《农业装备与车辆工程》 (Agricultural Equipment & Vehicle Engineering) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021258251A1 (en) * 2020-06-22 2021-12-30 深圳市大疆创新科技有限公司 Surveying and mapping method for movable platform, and movable platform and storage medium
CN112325795A (en) * 2020-10-16 2021-02-05 华中科技大学鄂州工业技术研究院 Three-dimensional target flight time measuring method, system and device based on machine vision guidance
CN112731343A (en) * 2020-12-18 2021-04-30 福建汇川物联网技术科技股份有限公司 Target measuring method and device of measuring camera
CN112731343B (en) * 2020-12-18 2023-12-12 福建汇川物联网技术科技股份有限公司 Target measurement method and device for measurement camera
CN112833784A (en) * 2021-01-04 2021-05-25 中铁四局集团有限公司 Steel rail positioning method combining monocular camera with laser scanning
CN112833784B (en) * 2021-01-04 2022-02-25 中铁四局集团有限公司 Steel rail positioning method combining monocular camera with laser scanning
CN113050113A (en) * 2021-03-10 2021-06-29 广州南方卫星导航仪器有限公司 Laser point positioning method and device
CN113050113B (en) * 2021-03-10 2023-08-01 广州南方卫星导航仪器有限公司 Laser spot positioning method and device
CN113743237A (en) * 2021-08-11 2021-12-03 北京奇艺世纪科技有限公司 Follow-up action accuracy determination method and device, electronic device and storage medium
CN113743237B (en) * 2021-08-11 2023-06-02 北京奇艺世纪科技有限公司 Method and device for judging accuracy of follow-up action, electronic equipment and storage medium
CN113420732B (en) * 2021-08-23 2022-02-01 深圳市城市交通规划设计研究中心股份有限公司 Pavement disease detection method and device and storage medium
CN113420732A (en) * 2021-08-23 2021-09-21 深圳市城市交通规划设计研究中心股份有限公司 Pavement disease detection method and device and storage medium
CN115488883A (en) * 2022-09-06 2022-12-20 群青华创(北京)智能科技有限公司 Robot hand-eye calibration method, device and system
CN115488883B (en) * 2022-09-06 2023-11-07 群青华创(南京)智能科技有限公司 Robot hand-eye calibration method, device and system

Similar Documents

Publication Publication Date Title
CN111238368A (en) Three-dimensional scanning method and device
US7417717B2 (en) System and method for improving lidar data fidelity using pixel-aligned lidar/electro-optic data
US9443308B2 (en) Position and orientation determination in 6-DOF
US6664529B2 (en) 3D multispectral lidar
WO2017098966A1 (en) Point group data acquisition system and method therefor
CN111811395B (en) Monocular vision-based dynamic plane pose measurement method
CN108646259A (en) A kind of three-dimensional laser scanner, which is set, stands firm to device and method
Beekmans et al. Cloud photogrammetry with dense stereo for fisheye cameras
CN111524174B (en) Binocular vision three-dimensional construction method for moving platform moving target
CN106546230B (en) Positioning point arrangement method and device, and method and equipment for measuring three-dimensional coordinates of positioning points
CN110517284B (en) Target tracking method based on laser radar and PTZ camera
CN111288891B (en) Non-contact three-dimensional measurement positioning system, method and storage medium
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN114820725A (en) Target display method and device, electronic equipment and storage medium
KR102460361B1 (en) System and method for performing calibrations
CN112258633B (en) SLAM technology-based scene high-precision reconstruction method and device
Ringaby et al. Scan rectification for structured light range sensors with rolling shutters
WO2005100910A1 (en) Three-dimensional shape measuring method and its equipment
CN110619664B (en) Laser pattern-assisted camera distance posture calculation method and server
CN112525161A (en) Rotating shaft calibration method
CN105717502B (en) A kind of high-rate laser range unit based on line array CCD
CN111654626A (en) High-resolution camera containing depth information
CN111982071B (en) 3D scanning method and system based on TOF camera
CN113112532B (en) Real-time registration method for multi-TOF camera system
WO2022040940A1 (en) Calibration method and device, movable platform, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200605