CN111076712A - Automatic building method, system and device of space three-dimensional model - Google Patents

Automatic building method, system and device of space three-dimensional model

Info

Publication number
CN111076712A
CN111076712A (application CN202010080608.5A)
Authority
CN
China
Prior art keywords
sampling
camera
laser
dimensional model
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010080608.5A
Other languages
Chinese (zh)
Other versions
CN111076712B (en)
Inventor
江世松
林大甲
黄宗荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinqianmao Technology Co ltd
Original Assignee
Jinqianmao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinqianmao Technology Co ltd filed Critical Jinqianmao Technology Co ltd
Priority to CN202010080608.5A priority Critical patent/CN111076712B/en
Publication of CN111076712A publication Critical patent/CN111076712A/en
Application granted granted Critical
Publication of CN111076712B publication Critical patent/CN111076712B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00 Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C15/02 Means for marking measuring points

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the field of measurement, and in particular to a method, system and device for automatically building a spatial three-dimensional model. A reference coordinate system is established from the initial position of a camera; the camera shoots at a sampling position to obtain a shot image, sampling points are selected in the shot image, and for each sampling point the deviation angle and distance of the laser spot position relative to the initial camera position are taken as its coordinate values in the reference coordinate system until all sampling points have been sampled; finally a spatial three-dimensional model containing the information of all sampling points is output, from which survey data can be obtained quickly, enabling accurate and efficient engineering acceptance.

Description

Automatic building method, system and device of space three-dimensional model
The application is a divisional application of the parent application entitled "An intelligent visual sampling method, system and device", application number 201910353497.8, filed on April 29, 2019.
Technical Field
The invention relates to the field of measurement, in particular to an automatic building method, system and device of a space three-dimensional model.
Background
Engineering construction is carried out according to the construction design drawings, and construction acceptance is likewise carried out against the construction drawings, in order to prevent the construction unit from cutting corners on workmanship and materials. In engineering survey, a total station is generally used and relies on manual operation: the horizontal and vertical braking screws are controlled to adjust the angle while the operator observes through the eyepiece, until the cross hairs are aligned with the target point, which completes the sampling of a single target point. The whole operation requires manual participation, the total station is a precision instrument whose screw adjustment demands concentration, and when the sampling of multiple target points has to be completed in succession, the whole sampling process is time-consuming and labor-intensive and difficult to complete accurately and efficiently by hand.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a method, system and device for automatically building a spatial three-dimensional model in which a camera replaces human eyes to identify, track and measure targets, so that sampling can be completed automatically, accurately and efficiently.
In order to solve the above technical problems, a first technical solution adopted by the present invention is:
an automatic building method of a space three-dimensional model comprises the following steps:
s1, recording the initial position of the camera, and establishing a reference coordinate system; the camera is provided with a laser emission source, and a laser spot is positioned in a shooting picture of the camera;
s2, shooting by the camera at the sampling position to obtain a shot image, and calculating according to the distance from the laser to the light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
s3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
s4, adjusting the shooting angle of the camera according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all the sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all the sampling points.
The second technical scheme adopted by the invention is as follows:
an automatic building system of a three-dimensional model of a space, comprising one or more processors and a memory, said memory storing a program which, when executed by the processors, performs the steps of:
s1, recording the initial position of the camera, and establishing a reference coordinate system; the camera is provided with a laser emission source, and a laser spot is positioned in a shooting picture of the camera;
s2, shooting by the camera at the sampling position to obtain a shot image, and calculating according to the distance from the laser to the light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
s3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
s4, adjusting the shooting angle of the camera according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all the sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all the sampling points.
The third technical scheme adopted by the invention is as follows:
an automatic building device of a space three-dimensional model comprises a camera, a laser range finder, a holder, an angle encoder and a processor; the laser range finder is arranged on the camera, and a light spot of laser of the laser range finder is positioned in a shooting picture of the camera; the camera and the angle encoder are respectively installed on the holder and can rotate along with the holder at a full angle, and the camera, the laser range finder and the angle encoder are respectively electrically connected with the processor.
The invention has the beneficial effects that:
According to the method, system and device for automatically building a spatial three-dimensional model, a reference coordinate system is established from the initial position of the camera; the camera shoots at a sampling position to obtain a shot image, sampling points are selected in the shot image, and the deviation angle and distance of the laser spot position relative to the initial camera position are taken as each sampling point's coordinate values in the reference coordinate system, completing the information sampling of all sampling points; finally a spatial three-dimensional model containing the information of all sampling points is output, from which survey data can be obtained quickly, realizing accurate and efficient engineering acceptance.
Drawings
FIG. 1 is a flow chart of the steps of the method for automatic building of a three-dimensional model of a space according to the present invention;
FIG. 2 is a schematic structural diagram of an automatic spatial three-dimensional model building system according to the present invention;
FIG. 3 is a schematic diagram of a coordinate system on a captured image according to the present invention;
description of reference numerals:
1. a processor; 2. a memory.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1, the method for automatically building a spatial three-dimensional model provided by the present invention includes the following steps:
s1, recording the initial position of the camera, and establishing a reference coordinate system; the camera is provided with a laser emission source, and a laser spot is positioned in a shooting picture of the camera;
s2, shooting by the camera at the sampling position to obtain a shot image, and calculating according to the distance from the laser to the light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
s3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
s4, adjusting the shooting angle of the camera according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all the sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all the sampling points.
From the above description, the beneficial effects of the present invention are:
The invention provides a method for automatically building a spatial three-dimensional model: a reference coordinate system is established from the initial position of the camera, the camera shoots at a sampling position to obtain a shot image, sampling points are selected in the shot image, the deviation angle and distance of the laser spot position relative to the initial camera position are taken as each sampling point's coordinate values in the reference coordinate system to complete the information sampling of all sampling points, and finally a spatial three-dimensional model containing the information of all sampling points is output.
Further, step S2 is specifically:
projecting light spots of laser on a camera on a plurality of targets which have the same structure and different distances from a laser emission source respectively to obtain corresponding pixel coordinate data on the targets at different distances;
fitting an image motion track model of laser light spots at different distances according to corresponding pixel coordinate data on targets at different distances;
the camera is located at a sampling position to shoot to obtain a shot image, the distance from the laser to the light spot is substituted into the image motion track model, and a first pixel coordinate value of the light spot of the laser on the shot image is obtained through calculation.
As can be seen from the above description, by the above method, the distance measurement data of the laser (i.e. the distance from the laser to the light spot) can be converted into pixel coordinate values on the shot image, which is convenient for obtaining survey data subsequently.
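For concreteness, the following is a minimal sketch of how such a spot-trajectory model could be fitted and queried. The calibration distances, the pixel readings, the choice of 1/distance as the regressor and the polynomial degree are all illustrative assumptions, not values taken from this application.

```python
# Sketch: fit the laser-spot image trajectory as a function of measured distance.
import numpy as np

# Calibration samples: the spot projected on identical targets at several known distances (m).
distances = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 12.0])
spot_u = np.array([980.0, 968.0, 962.0, 957.0, 954.0, 952.0])   # pixel column of the spot
spot_v = np.array([540.0, 546.0, 549.0, 551.0, 553.0, 554.0])   # pixel row of the spot

# Fit a low-order polynomial in 1/distance for each pixel axis (the spot drifts toward
# a fixed point as the distance grows, so 1/d is a convenient regressor).
coeff_u = np.polyfit(1.0 / distances, spot_u, deg=2)
coeff_v = np.polyfit(1.0 / distances, spot_v, deg=2)

def spot_pixel(distance_m):
    """Predict the first pixel coordinate value (u, v) of the laser spot at a given range."""
    x = 1.0 / distance_m
    return float(np.polyval(coeff_u, x)), float(np.polyval(coeff_v, x))

print(spot_pixel(4.0))   # predicted spot pixel coordinates at 4 m
```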
Further, step S3 is specifically:
receiving sampling information, and extracting a corresponding sampling object image in a preset image database according to the sampling information;
and identifying the sampling object image in the shot image to obtain a corresponding second pixel coordinate value.
According to the above description, the method can realize automatic identification and improve the measurement efficiency.
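As an illustration only, the sketch below uses plain template matching as a stand-in for the recognition step; the file names and the matching method are assumptions, since the application only states that a sample image retrieved from the image database is recognized in the shot image.

```python
# Sketch: locate a sampled object in the shot image and return its second pixel coordinate value.
import cv2

shot = cv2.imread("shot_image.png", cv2.IMREAD_GRAYSCALE)               # image taken at the sampling position
template = cv2.imread("precast_pile_sample.png", cv2.IMREAD_GRAYSCALE)  # sample image from the database

result = cv2.matchTemplate(shot, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

h, w = template.shape
u = max_loc[0] + w // 2    # column of the best-match centre
v = max_loc[1] + h // 2    # row of the best-match centre
print("match score %.2f, sampling point at (u=%d, v=%d)" % (max_val, u, v))
```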
Referring to fig. 2, the system for automatically building a spatial three-dimensional model provided by the present invention includes one or more processors 1 and a memory 2, where the memory 2 stores a program, and the program implements the following steps when executed by the processor 1:
s1, recording the initial position of the camera, and establishing a reference coordinate system; the camera is provided with a laser emission source, and a laser spot is positioned in a shooting picture of the camera;
s2, shooting by the camera at the sampling position to obtain a shot image, and calculating according to the distance from the laser to the light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
s3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
s4, adjusting the shooting angle of the camera according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all the sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all the sampling points.
From the above description, the beneficial effects of the present invention are:
According to the system for automatically building a spatial three-dimensional model, a reference coordinate system is established from the initial position of the camera; the camera shoots at a sampling position to obtain a shot image, sampling points are selected in the shot image, and the deviation angle and distance of the laser spot position relative to the initial camera position are taken as each sampling point's coordinate values in the reference coordinate system, completing the information sampling of all sampling points; finally a spatial three-dimensional model containing the information of all sampling points is output, from which survey data can be obtained quickly, realizing accurate and efficient engineering acceptance.
Further, the program when executed by the processor further performs the steps of:
projecting light spots of laser on a camera on a plurality of targets which have the same structure and different distances from a laser emission source respectively to obtain corresponding pixel coordinate data on the targets at different distances;
fitting an image motion track model of laser light spots at different distances according to corresponding pixel coordinate data on targets at different distances;
the camera is located at a sampling position to shoot to obtain a shot image, the distance from the laser to the light spot is substituted into the image motion track model, and a first pixel coordinate value of the light spot of the laser on the shot image is obtained through calculation.
As can be seen from the above description, the distance measurement data of the laser (i.e. the distance from the laser to the light spot) can be converted into pixel coordinate values on the shot image, so as to facilitate the subsequent acquisition of survey data.
Further, the program when executed by the processor further performs the steps of:
receiving sampling information, and extracting a corresponding sampling object image in a preset image database according to the sampling information;
and identifying the sampling object image in the shot image to obtain a corresponding second pixel coordinate value.
According to the above description, automatic identification can be realized, and the measurement efficiency is improved.
The invention also provides an automatic building device of the space three-dimensional model, which comprises a camera, a laser range finder, a holder, an angle encoder and a processor; the laser range finder is arranged on the camera, and a light spot of laser of the laser range finder is positioned in a shooting picture of the camera; the camera and the angle encoder are respectively installed on the holder and can rotate along with the holder at a full angle, and the camera, the laser range finder and the angle encoder are respectively electrically connected with the processor.
From the above description, the beneficial effects of the present invention are:
The device for automatically building a spatial three-dimensional model establishes a reference coordinate system from the initial position of the camera; the camera shoots at a sampling position to obtain a shot image, sampling points are selected in the shot image, the deviation angle and distance of the laser spot position relative to the initial camera position are taken as each sampling point's coordinate values in the reference coordinate system to complete the information sampling of all sampling points, and finally a spatial three-dimensional model containing the information of all sampling points is output.
Further, the device also comprises a memory, and the memory is electrically connected with the processor.
Referring to fig. 1 and fig. 3, a first embodiment of the present invention is:
the invention provides an automatic building method of a space three-dimensional model, which comprises the following steps:
s1, recording the initial position of the camera, and establishing a reference coordinate system; the camera is provided with a laser emission source, and a laser spot is positioned in a shooting picture of the camera;
In this embodiment, when the pan/tilt head is at the set horizontal and vertical zero orientations, the camera position is taken as the initial position of the camera; at the initial position of the camera (that is, when the pan/tilt head is at its initial position), the laser beam is taken as the Zw axis and a spatial reference coordinate system XwYwZw is established.
S2, rotating the camera to enable the camera to be located at the sampling position to shoot to obtain a shot image, and calculating according to the distance from the laser to the light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
It should be noted that the initial position of the camera (i.e. when the horizontal and vertical angles of the pan/tilt head are zero) is a fixed position, and the reference coordinate system is established at this position. The sampling position of the camera refers to a position to which the camera is rotated to aim at an intended target for sampling. For example, if the camera is rotated to a sampling position at the upper left corner of a building to shoot an image, the coordinates of the target sampling points in the upper-left image are obtained in the reference coordinate system of the camera's initial position; the camera is then turned to the sampling position at the upper right corner to shoot an image, and the coordinates of the target sampling points in the upper-right image are obtained in the same reference coordinate system. In this way the image target sampling points at any sampling position can be unified into the single reference coordinate system established at the camera's initial position.
For example, after the initial position of the camera is set, the camera is turned to position A as a sampling position to shoot an image, and the three-dimensional coordinates of the target sampling points in image A relative to the camera's initial position are obtained through the steps above; the camera is then turned to position B as another sampling position to shoot an image, and the three-dimensional coordinates of the target sampling points in image B relative to the camera's initial position are obtained in the same way. The target sampling points of images A and B are thus related within one coordinate system.
Wherein, the step S2 specifically includes:
projecting light spots of laser on a camera on a plurality of targets which have the same structure and different distances from a laser emission source respectively to obtain corresponding pixel coordinate data on the targets at different distances;
fitting an image motion track model of laser light spots at different distances according to corresponding pixel coordinate data on targets at different distances;
the camera is located at a sampling position to shoot to obtain a shot image, the distance from the laser to the light spot is substituted into the image motion track model, and a first pixel coordinate value of the light spot of the laser on the shot image is obtained through calculation.
In the embodiment, the laser range finder obtains the spot distance, calculates the pixel coordinate of the spot according to the spot distance, traverses the pixel coordinate list of the sampling point, and calculates the difference value list of the horizontal and vertical pixel coordinates of the spot pixel coordinate and the sampling point pixel coordinate; converting the pixel coordinate difference list into the angle difference list through the field angle of the camera; rotating the holder to enable the pixel coordinates of the light spot to coincide with the pixel coordinates of the sampling point, obtaining the distance of the sampling point by the laser range finder, obtaining the deviation angle of the sampling point by the angle encoder, and obtaining a sampling point information list by the distance and the deviation angle;
The calculation method of the pixel coordinates of the light spot is as follows: first, in a laboratory, the light spot of the laser range finder is projected onto targets at different distances, the targets being made of a material with good reflectivity and a background color that is easy to distinguish; pixel coordinate data of the light spot at the different distances in the camera image are obtained by combining RGB and circle-center feature recognition and extraction techniques, and an image motion trajectory model of the light spot over distance is fitted from these pixel coordinate data; the spot distance is then substituted into the motion trajectory model to calculate the spot's pixel coordinates in the shot image;
the field angle of the camera comprises a horizontal field angle and a vertical field angle, wherein the horizontal field angle refers to a spatial horizontal angle range which can be seen by the camera, and the vertical field angle refers to a spatial vertical angle range which can be seen by the camera;
the angle difference comprises a horizontal angle difference and a vertical angle difference, wherein the horizontal angle difference = (horizontal pixel coordinate difference / number of image columns) × horizontal field angle, and the vertical angle difference = (vertical pixel coordinate difference / number of image rows) × vertical field angle;
the deviation angle comprises a horizontal deviation angle and a vertical deviation angle, and when the holder is located at the sampling position, the angle encoder obtains the horizontal deviation angle and the vertical deviation angle of the sampling position based on the initial position of the holder.
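A minimal sketch of the pixel-difference to angle-difference conversion described above, with an assumed image resolution and field of view:

```python
# Sketch: convert pixel coordinate differences into pan/tilt angle differences.
IMAGE_COLS, IMAGE_ROWS = 1920, 1080     # shot image resolution (illustrative)
HFOV_DEG, VFOV_DEG = 60.0, 34.0         # horizontal / vertical field angles of the camera (illustrative)

def angle_offsets(spot_uv, sample_uv):
    """Pan/tilt corrections that move the laser spot onto the selected sampling point."""
    du = sample_uv[0] - spot_uv[0]          # horizontal pixel coordinate difference
    dv = sample_uv[1] - spot_uv[1]          # vertical pixel coordinate difference
    pan_deg = du / IMAGE_COLS * HFOV_DEG    # horizontal angle difference
    tilt_deg = dv / IMAGE_ROWS * VFOV_DEG   # vertical angle difference
    return pan_deg, tilt_deg

# Spot at (958, 550), selected sampling point at (1200, 430): roughly (7.6, -3.8) degrees.
print(angle_offsets((958, 550), (1200, 430)))
```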
S3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point; or select a sample point from a sample library.
Wherein, the step S3 specifically includes:
receiving sampling information, and extracting a corresponding sampling object image in a preset image database according to the sampling information; and identifying the sampling object image in the shot image to obtain a corresponding second pixel coordinate value.
In the present embodiment, as shown in fig. 3, a coordinate system u-v with the top left corner of the captured image as the origin and the pixel as the unit is provided, where the abscissa u and the ordinate v of the pixel are the number of columns and the number of rows of the captured image, respectively;
The sampling library is a collection containing many different sampling targets; a sampling target is the sample image and name of an engineering item that must be checked during engineering acceptance, such as precast piles and column hoops in building construction; when a sampling target is selected, its sampling point can be positioned automatically in the image, without manually selecting the sampling point;
The sampling points of the sampling target are positioned automatically in the shot image as follows. In each project, the camera shoots in advance images of the engineering items that require acceptance, including precast piles, column hoops and the like, forming a large number of data sample images. The data sample images are labeled manually: labeling means framing the engineering item regions, such as precast piles and column hoops, on a data sample image to obtain the coordinates of those regions on the image, and naming each region after its engineering item, which yields annotation labels of the form {data sample image, coordinates | name}. A region-based convolutional neural network deep learning model with segmentation masks is then constructed according to the scale of the data sample images and the number of engineering item categories, and the annotation labels are fed into the model for training; the model's hyper-parameters are adjusted continuously while the fitting and prediction errors are observed, and training stops once the prediction error falls below a reasonable threshold, yielding a well-trained parameter model. When a sampling target in the sampling library is selected, this trained model is applied to the image to obtain the region coordinates of the sampling target in the image, and the sampling point is obtained automatically.
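As an illustration of the final step, the sketch below assumes the trained model is available as a callable that returns labelled boxes with scores; the detector interface, the score threshold and the label names are hypothetical and not part of this application.

```python
# Sketch: derive a sampling point automatically from the detections of a trained model.
def auto_sampling_point(shot_image, target_name, detector, score_threshold=0.8):
    """Return the (u, v) pixel coordinates of the selected sampling target, or None."""
    detections = detector(shot_image)          # assumed: list of {"label", "score", "box"} dicts
    boxes = [d["box"] for d in detections
             if d["label"] == target_name and d["score"] >= score_threshold]
    if not boxes:
        return None                            # target not visible in this shot image
    x0, y0, x1, y1 = max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    return (x0 + x1) // 2, (y0 + y1) // 2      # centre of the largest detected region
```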
S4, adjusting the shooting angle of the camera according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all the sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all the sampling points.
In this embodiment, the coordinates of each sampling point in the reference coordinate system are obtained from the sampling point information list as

P^n = R(Yw, β) · R(Xw, α) · (0, 0, D^n)^T,

where the superscript n denotes the n-th sampling point, D^n is the sampling point distance, α is the vertical deviation angle of the sampling point, β is the horizontal deviation angle of the sampling point, R(Xw, α) is the transformation-matrix representation of a rotation of α degrees about the Xw axis, and R(Yw, β) is the transformation-matrix representation of a rotation of β degrees about the Yw axis. Applying this to every sampling point in turn yields the sampling point coordinate list.
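A minimal numeric sketch of this coordinate computation follows; composing the two rotations in the order written above is an assumption made for illustration.

```python
# Sketch: reference-frame coordinates of a sampling point from (distance, alpha, beta).
import numpy as np

def rot_x(alpha_rad):   # rotation about the Xw axis
    c, s = np.cos(alpha_rad), np.sin(alpha_rad)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(beta_rad):    # rotation about the Yw axis
    c, s = np.cos(beta_rad), np.sin(beta_rad)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def sample_point_coords(distance, alpha_deg, beta_deg):
    """Coordinates in XwYwZw, where the laser beam at the initial position is the Zw axis."""
    beam_point = np.array([0.0, 0.0, distance])   # point at distance D along the initial beam
    return rot_y(np.radians(beta_deg)) @ rot_x(np.radians(alpha_deg)) @ beam_point

# A point 10 m away, deviated 5 degrees vertically and -12 degrees horizontally.
print(sample_point_coords(10.0, 5.0, -12.0))
```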
Establishing a space three-dimensional model of all the sampling points according to the sampling point coordinate list;
Any two sampling points A and B in the engineering three-dimensional model are selected, with coordinates (x^A, y^A, z^A) and (x^B, y^B, z^B); the size information between them is obtained as

|AB| = sqrt((x^A - x^B)^2 + (y^A - y^B)^2 + (z^A - z^B)^2).
Any three sampling points A, B and C in the engineering three-dimensional model are selected, with coordinates (x^A, y^A, z^A), (x^B, y^B, z^B) and (x^C, y^C, z^C); the area information is obtained by Heron's formula as

S = sqrt(p(p - a)(p - b)(p - c)),

wherein a = |BC|, b = |AC| and c = |AB| are the distances between the corresponding pairs of sampling points, and p = (a + b + c)/2.
Any several non-coplanar sampling points in the engineering three-dimensional model are selected, for example sampling points A, B, C and D with coordinates (x^A, y^A, z^A) through (x^D, y^D, z^D); the volume information of the tetrahedron they span is obtained as

V = |(AB × AC) · AD| / 6,

where AB, AC and AD are the vectors from A to B, C and D respectively.
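A minimal sketch of these three measurements (Euclidean distance, Heron's area with p = (a + b + c)/2, and the tetrahedron volume for four non-coplanar points):

```python
# Sketch: size, area and volume information from sampling point coordinates.
import numpy as np

def size(A, B):
    return np.linalg.norm(np.asarray(A, float) - np.asarray(B, float))

def area(A, B, C):
    a, b, c = size(B, C), size(A, C), size(A, B)
    p = (a + b + c) / 2.0                       # semi-perimeter, as in the description above
    return np.sqrt(p * (p - a) * (p - b) * (p - c))

def volume(A, B, C, D):
    A, B, C, D = (np.asarray(P, float) for P in (A, B, C, D))
    return abs(np.dot(np.cross(B - A, C - A), D - A)) / 6.0

A, B, C, D = (0, 0, 0), (2, 0, 0), (0, 3, 0), (0, 0, 4)
print(size(A, B), area(A, B, C), volume(A, B, C, D))    # 2.0, 3.0, 4.0
```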
In this way, the survey data can be acquired rapidly, and accurate and efficient engineering acceptance is realized.
Referring to fig. 2, the second embodiment of the present invention is:
the invention provides an automatic building system of a space three-dimensional model, which comprises one or more processors 1 and a memory 2, wherein the memory 2 stores a program, and the program realizes the following steps when being executed by the processor 1:
s1, recording the initial position of the camera, and establishing a reference coordinate system; the camera is provided with a laser emission source, and a laser spot is positioned in a shooting picture of the camera;
s2, shooting by the camera at the sampling position to obtain a shot image, and calculating according to the distance from the laser to the light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
s3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
s4, adjusting the shooting angle of the camera according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all the sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all the sampling points.
Further, the program when executed by the processor further performs the steps of:
projecting light spots of laser on a camera on a plurality of targets which have the same structure and different distances from a laser emission source respectively to obtain corresponding pixel coordinate data on the targets at different distances;
fitting an image motion track model of laser light spots at different distances according to corresponding pixel coordinate data on targets at different distances;
the camera is located at a sampling position to shoot to obtain a shot image, the distance from the laser to the light spot is substituted into the image motion track model, and a first pixel coordinate value of the light spot of the laser on the shot image is obtained through calculation.
Further, the program when executed by the processor further performs the steps of:
receiving sampling information, and extracting a corresponding sampling object image in a preset image database according to the sampling information;
and identifying the sampling object image in the shot image to obtain a corresponding second pixel coordinate value.
The third embodiment of the invention is as follows:
the invention provides an automatic building device of a space three-dimensional model, which comprises a camera, a laser range finder, a holder, an angle encoder and a processor, wherein the camera is connected with the laser range finder; the laser range finder is arranged on the camera, and a light spot of laser of the laser range finder is positioned in a shooting picture of the camera; the camera and the angle encoder are respectively installed on the holder and can rotate along with the holder at a full angle, and the camera, the laser range finder and the angle encoder are respectively electrically connected with the processor.
The processor is configured to record an initial position of the camera and establish a reference coordinate system; controlling a camera to be positioned at a sampling position to shoot to obtain a shot image, and calculating according to the distance from laser to a light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image; receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point; adjusting the shooting angle of the camera through the rotation of the holder according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained; comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and acquiring the distance of the sampling point measured by the laser at the current position; and selecting other sampling points for multiple times by taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system until all the sampling points on the shot image are received, and establishing a corresponding space three-dimensional model according to all the sampling points.
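Condensing the workflow above, the sketch below shows one sampling pass as the processor might drive it. The device interface (move_by, read_angles, read_distance) and the spot_pixel helper are assumed for illustration and do not come from this application.

```python
# Sketch: one sampling pass of the device, producing the coordinate list behind the 3D model.
import numpy as np

def rotation(alpha_deg, beta_deg):
    """R(Yw, beta) composed with R(Xw, alpha); the composition order is an assumption."""
    a, b = np.radians(alpha_deg), np.radians(beta_deg)
    rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    return ry @ rx

def build_model(device, spot_pixel, sampling_points_uv,
                image_cols=1920, image_rows=1080, hfov=60.0, vfov=34.0):
    """Aim the laser at each selected pixel, read encoder and range finder, collect coordinates."""
    coords = []
    for u, v in sampling_points_uv:                          # second pixel coordinate values
        u_spot, v_spot = spot_pixel(device.read_distance())  # first pixel coordinate value
        device.move_by((u - u_spot) / image_cols * hfov,     # horizontal angle difference
                       (v - v_spot) / image_rows * vfov)     # vertical angle difference
        alpha, beta = device.read_angles()                   # deviation angles vs. initial position
        d = device.read_distance()                           # sampling point distance
        coords.append(rotation(alpha, beta) @ np.array([0.0, 0.0, d]))
    return coords
```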
Further, the device also comprises a memory, and the memory is electrically connected with the processor. The memory is used for storing an image database (the sampling library), which contains a collection of many different sampling targets; a sampling target is the sample image and name of an engineering item that must be checked during engineering acceptance, such as precast piles and column hoops in building construction. When a sampling target is selected, its sampling point is positioned automatically in the image, without manually selecting the sampling point. The automatic positioning works as described above: in each project, the camera shoots in advance images of the engineering items that require acceptance, including precast piles, column hoops and the like, forming a large number of data sample images; the data sample images are labeled manually by framing the engineering item regions to obtain their coordinates on the image and naming each region after its engineering item, giving annotation labels of the form {data sample image, coordinates | name}; a region-based convolutional neural network deep learning model with segmentation masks is constructed according to the scale of the data sample images and the number of engineering item categories, the annotation labels are fed into the model for training, the hyper-parameters are adjusted while the fitting and prediction errors are observed, and training stops once the prediction error falls below a reasonable threshold, yielding a well-trained parameter model; when a sampling target in the sampling library is selected, this trained model is applied to the image to obtain the region coordinates of the sampling target, and the sampling point is obtained automatically.
In summary, according to the method, system and device for automatically building a spatial three-dimensional model provided by the invention, a reference coordinate system is established from the initial position of the camera; the camera shoots at a sampling position to obtain a shot image, sampling points are selected in the shot image, the deviation angle and distance of the laser spot position relative to the initial camera position are taken as each sampling point's coordinate values in the reference coordinate system to complete the information sampling of all sampling points, and finally a spatial three-dimensional model containing the information of all sampling points is output, from which survey data can be obtained quickly, realizing accurate and efficient engineering acceptance.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (10)

1. An automatic building method of a space three-dimensional model is characterized by comprising the following steps:
s1, mounting the camera on the holder, recording the initial position of the camera, and establishing a reference coordinate system;
the camera is provided with a laser emission source, and a laser spot is positioned in a shooting picture of the camera;
when the holder is at the set horizontal and vertical zero positions, the camera position is taken as the initial position of the camera; when the camera is at the initial position, the laser beam is taken as the Zw axis and a spatial reference coordinate system XwYwZw is established;
S2, shooting by the camera at the sampling position to obtain a shot image, and calculating according to the distance from the laser to the light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
s3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
s4, adjusting the shooting angle of the camera by rotating the holder according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
s6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all sampling points;
the deviation angle comprises a horizontal deviation angle and a vertical deviation angle;
the establishing of the corresponding space three-dimensional model according to all the sampling points comprises the following steps:
obtaining the coordinates of each sampling point under the reference coordinate system according to the deviation angles and distances corresponding to all the sampling points, namely

P^n = R(Yw, β) · R(Xw, α) · (0, 0, D^n)^T,

and thereby obtaining a sampling point coordinate list, wherein the superscript n denotes the n-th sampling point, D^n is the sampling point distance, α is the vertical deviation angle of the sampling point, β is the horizontal deviation angle of the sampling point, R(Xw, α) is the transformation-matrix representation of a rotation of α degrees about the Xw axis, and R(Yw, β) is the transformation-matrix representation of a rotation of β degrees about the Yw axis;
and establishing a corresponding space three-dimensional model according to the sampling point coordinate list.
2. The method for automatically building a spatial three-dimensional model according to claim 1, wherein step S2 specifically comprises:
projecting light spots of laser on a camera on a plurality of targets which have the same structure and different distances from a laser emission source respectively to obtain corresponding pixel coordinate data on the targets at different distances;
fitting an image motion track model of laser light spots at different distances according to corresponding pixel coordinate data on targets at different distances;
the camera is located at a sampling position to shoot to obtain a shot image, the distance from the laser to the light spot is substituted into the image motion track model, and a first pixel coordinate value of the light spot of the laser on the shot image is obtained through calculation.
3. The method for automatically building a spatial three-dimensional model according to claim 1, wherein step S3 specifically comprises:
receiving sampling information, and extracting a corresponding sampling object image in a preset image database according to the sampling information;
and identifying the sampling object image in the shot image to obtain a corresponding second pixel coordinate value.
4. The method for automatically building a three-dimensional model of space according to claim 1, further comprising:
obtaining size information between any two sampling points according to any two sampling points in the space three-dimensional model;
obtaining area information corresponding to three sampling points according to any three sampling points in the space three-dimensional model;
and obtaining volume information corresponding to the non-coplanar sampling points according to any non-coplanar sampling points in the three-dimensional space model.
5. An automatic building system of a three-dimensional model of a space, comprising one or more processors and a memory, said memory storing a program which, when executed by the processors, performs the steps of:
s1, mounting the camera on the holder, recording the initial position of the camera, and establishing a reference coordinate system;
the camera is provided with a laser emission source, and a laser spot is positioned in a shooting picture of the camera;
when the holder is at the set horizontal and vertical zero positions, the camera position is taken as the initial position of the camera; when the camera is at the initial position, the laser beam is taken as the Zw axis and a spatial reference coordinate system XwYwZw is established;
S2, shooting by the camera at the sampling position to obtain a shot image, and calculating according to the distance from the laser to the light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
s3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
s4, adjusting the shooting angle of the camera by rotating the holder according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
s6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all sampling points;
the deviation angle comprises a horizontal deviation angle and a vertical deviation angle;
the establishing of the corresponding space three-dimensional model according to all the sampling points comprises the following steps:
obtaining the coordinates of each sampling point under the reference coordinate system according to the deviation angles and distances corresponding to all the sampling points, namely

P^n = R(Yw, β) · R(Xw, α) · (0, 0, D^n)^T,

and thereby obtaining a sampling point coordinate list, wherein the superscript n denotes the n-th sampling point, D^n is the sampling point distance, α is the vertical deviation angle of the sampling point, β is the horizontal deviation angle of the sampling point, R(Xw, α) is the transformation-matrix representation of a rotation of α degrees about the Xw axis, and R(Yw, β) is the transformation-matrix representation of a rotation of β degrees about the Yw axis;
and establishing a corresponding space three-dimensional model according to the sampling point coordinate list.
6. The system for automatically building a three-dimensional model of space according to claim 5, wherein the program when executed by the processor further performs the steps of:
projecting light spots of laser on a camera on a plurality of targets which have the same structure and different distances from a laser emission source respectively to obtain corresponding pixel coordinate data on the targets at different distances;
fitting an image motion track model of laser light spots at different distances according to corresponding pixel coordinate data on targets at different distances;
the camera is located at a sampling position to shoot to obtain a shot image, the distance from the laser to the light spot is substituted into the image motion track model, and a first pixel coordinate value of the light spot of the laser on the shot image is obtained through calculation.
7. The system for automatically building a three-dimensional model of space according to claim 5, wherein the program when executed by the processor further performs the steps of:
receiving sampling information, and extracting a corresponding sampling object image in a preset image database according to the sampling information;
and identifying the sampling object image in the shot image to obtain a corresponding second pixel coordinate value.
8. The system for automatically building a three-dimensional model of space according to claim 5, wherein the program when executed by the processor further performs the steps of:
obtaining size information between any two sampling points according to any two sampling points in the space three-dimensional model;
obtaining area information corresponding to three sampling points according to any three sampling points in the space three-dimensional model;
and obtaining volume information corresponding to the non-coplanar sampling points according to any non-coplanar sampling points in the three-dimensional space model.
9. An automatic building device of a space three-dimensional model is characterized by comprising a camera, a laser range finder, a holder, an angle encoder and a processor; the laser range finder is arranged on the camera, and a light spot of laser of the laser range finder is positioned in a shooting picture of the camera; the camera, the laser range finder and the angle encoder are respectively electrically connected with the processor;
the processor is configured to:
recording the initial position of the camera and establishing a reference coordinate system;
the cloud platform is in when the level of settlement and perpendicular zero position, as the initial position of camera, when the camera is in the initial position, use the light beam of laser as ZwAxes, establishing a spatial reference coordinate system XwYwZw
Controlling a camera to be positioned at a sampling position to shoot to obtain a shot image, and calculating according to the distance from a laser range finder to a light spot to obtain a first pixel coordinate value of the light spot of the laser on the shot image;
receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
adjusting the shooting angle of the camera through the rotation of the holder according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
comparing the current position of the camera with the recorded initial position of the camera, obtaining a deviation angle corresponding to the sampling point through an angle encoder, and obtaining the distance of the sampling point measured by the laser at the current position;
taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, selecting other sampling points for multiple times until all sampling points on the shot image are received, and establishing a corresponding space three-dimensional model according to all sampling points;
the deviation angle comprises a horizontal deviation angle and a vertical deviation angle;
the establishing of the corresponding space three-dimensional model according to all the sampling points comprises the following steps:
obtaining the coordinates of each sampling point under the reference coordinate system according to the deviation angles and distances corresponding to all the sampling points, namely

P^n = R(Yw, β) · R(Xw, α) · (0, 0, D^n)^T,

and thereby obtaining a sampling point coordinate list, wherein the superscript n denotes the n-th sampling point, D^n is the sampling point distance, α is the vertical deviation angle of the sampling point, β is the horizontal deviation angle of the sampling point, R(Xw, α) is the transformation-matrix representation of a rotation of α degrees about the Xw axis, and R(Yw, β) is the transformation-matrix representation of a rotation of β degrees about the Yw axis;
and establishing a corresponding space three-dimensional model according to the sampling point coordinate list.
10. The apparatus for automatically building a three-dimensional model according to claim 9, further comprising a memory electrically connected to the processor;
the memory is used for storing an image database, and the image database contains a set of different sampling targets;
when the sampling target is selected, the sampling point of the sampling target is automatically positioned in the image.
CN202010080608.5A 2019-04-29 2019-04-29 Automatic building method, system and device of space three-dimensional model Active CN111076712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010080608.5A CN111076712B (en) 2019-04-29 2019-04-29 Automatic building method, system and device of space three-dimensional model

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010080608.5A CN111076712B (en) 2019-04-29 2019-04-29 Automatic building method, system and device of space three-dimensional model
CN201910353497.8A CN110231023B (en) 2019-04-29 2019-04-29 Intelligent visual sampling method, system and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910353497.8A Division CN110231023B (en) 2019-04-29 2019-04-29 Intelligent visual sampling method, system and device

Publications (2)

Publication Number Publication Date
CN111076712A true CN111076712A (en) 2020-04-28
CN111076712B CN111076712B (en) 2021-08-31

Family

ID=67860934

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201910353497.8A Active CN110231023B (en) 2019-04-29 2019-04-29 Intelligent visual sampling method, system and device
CN202010080608.5A Active CN111076712B (en) 2019-04-29 2019-04-29 Automatic building method, system and device of space three-dimensional model
CN202010080606.6A Pending CN111256669A (en) 2019-04-29 2019-04-29 Automatic sampling device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910353497.8A Active CN110231023B (en) 2019-04-29 2019-04-29 Intelligent visual sampling method, system and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010080606.6A Pending CN111256669A (en) 2019-04-29 2019-04-29 Automatic sampling device

Country Status (2)

Country Link
CN (3) CN110231023B (en)
WO (1) WO2020220522A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111614903A (en) * 2020-05-28 2020-09-01 西安航空学院 Method for removing faculae in image shooting
CN112288810A (en) * 2020-10-29 2021-01-29 铜陵有色金属集团股份有限公司 Sampling positioning method, device, system and computer storage medium
CN112866579A (en) * 2021-02-08 2021-05-28 上海巡智科技有限公司 Data acquisition method and device and readable storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414508B (en) * 2020-03-17 2022-09-13 金钱猫科技股份有限公司 Method and terminal for searching and realizing visualization in design model
CN112099028A (en) * 2020-09-03 2020-12-18 深圳市迈测科技股份有限公司 Laser spot automatic tracking method and device, storage medium and laser ranging device
CN113050113B (en) * 2021-03-10 2023-08-01 广州南方卫星导航仪器有限公司 Laser spot positioning method and device
CN113552054A (en) * 2021-07-16 2021-10-26 苏州苏试试验集团股份有限公司 Control device and control method for automatic positioning of environmental test chamber
CN113693636B (en) * 2021-08-30 2023-11-24 南方科技大学 Sampling method, sampling system and storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005127992A (en) * 2003-09-30 2005-05-19 Tokyo Univ Of Agriculture Instrument and method for measuring position of moving object by laser range finder
JP5019478B2 (en) * 2008-09-26 2012-09-05 独立行政法人日本原子力研究開発機構 Marker automatic registration method and system
CN102445183B (en) * 2011-10-09 2013-12-18 福建汇川数码技术科技有限公司 Positioning method of ranging laser point of remote ranging system based on paralleling of laser and camera
CN203148438U (en) * 2013-03-08 2013-08-21 武汉海达数云技术有限公司 Integrated mobile three-dimensional measuring device
CN103557796B (en) * 2013-11-19 2016-06-08 天津工业大学 3 D positioning system and localization method based on laser ranging and computer vision
CN103557821A (en) * 2013-11-21 2014-02-05 福建汇川数码技术科技有限公司 Method for achieving three-dimensional space measuring under non-leveling, non-centering and height-measuring states
CN105486289B (en) * 2016-01-31 2018-03-23 山东科技大学 A kind of laser photography measuring system and camera calibration method
CN106152971B (en) * 2016-07-28 2018-07-17 南京航空航天大学 Laser three-dimensional scanning marker method under machine vision auxiliary
CN106249427B (en) * 2016-08-31 2018-11-20 河北汉光重工有限责任公司 A kind of optic axis adjusting method based on laser imaging
CN108828555B (en) * 2017-05-18 2020-08-04 金钱猫科技股份有限公司 Accurate measurement method, system and device based on coordinate transformation
CN107101580B (en) * 2017-05-18 2018-04-20 金钱猫科技股份有限公司 A kind of image measuring method based on laser, system and device
CN108050928B (en) * 2017-09-05 2024-03-12 东莞中子科学中心 Visual measuring instrument and visual measuring method
CN107909029A (en) * 2017-11-14 2018-04-13 福州瑞芯微电子股份有限公司 A kind of real scene virtualization acquisition method and circuit
CN109307477B (en) * 2018-12-04 2020-10-13 福建汇川物联网技术科技股份有限公司 Displacement measurement system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103047969A (en) * 2012-12-07 2013-04-17 北京百度网讯科技有限公司 Method for generating three-dimensional image through mobile terminal and mobile terminal
CN104613948A (en) * 2015-02-03 2015-05-13 北京航空航天大学 Multi-angle tunable laser dotting device
US20180231371A1 (en) * 2015-08-10 2018-08-16 Wisetech Global Limited Volumetric estimation methods, devices, & systems
CN106073895A (en) * 2016-08-12 2016-11-09 杭州三坛医疗科技有限公司 Noninvasive type real-time surgery location 3D navigator

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吕翠华 et al.: "3D modeling method for buildings based on 3D laser scanning technology", 《科学技术与工程》 (Science Technology and Engineering) *
黄涛 et al.: "Camera calibration and 3D reconstruction based on a calibration object", 《计算机工程应用技术》 (Computer Engineering and Application Technology) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111614903A (en) * 2020-05-28 2020-09-01 西安航空学院 Method for removing faculae in image shooting
CN112288810A (en) * 2020-10-29 2021-01-29 铜陵有色金属集团股份有限公司 Sampling positioning method, device, system and computer storage medium
CN112288810B (en) * 2020-10-29 2023-04-07 铜陵有色金属集团股份有限公司 Sampling positioning method, device, system and computer storage medium
CN112866579A (en) * 2021-02-08 2021-05-28 上海巡智科技有限公司 Data acquisition method and device and readable storage medium

Also Published As

Publication number Publication date
CN110231023B (en) 2020-02-21
WO2020220522A1 (en) 2020-11-05
CN111076712B (en) 2021-08-31
CN110231023A (en) 2019-09-13
CN111256669A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN111076712B (en) Automatic building method, system and device of space three-dimensional model
US9965870B2 (en) Camera calibration method using a calibration target
CN109410256B (en) Automatic high-precision point cloud and image registration method based on mutual information
Herráez et al. 3D modeling by means of videogrammetry and laser scanners for reverse engineering
CN105069743B (en) Detector splices the method for real time image registration
CN102834691B (en) Surveying method
CN105928498A (en) Determination Of Object Data By Template-based Uav Control
Perfetti et al. Fisheye photogrammetry: tests and methodologies for the survey of narrow spaces
CN105627948A (en) Large-scale complex curved surface measurement system and application thereof
CN112818990B (en) Method for generating target detection frame, method and system for automatically labeling image data
Moussa Integration of digital photogrammetry and terrestrial laser scanning for cultural heritage data recording
CN109900274B (en) Image matching method and system
CN108596117B (en) Scene monitoring method based on two-dimensional laser range finder array
Borrmann et al. Robotic mapping of cultural heritage sites
CN111060006A (en) Viewpoint planning method based on three-dimensional model
CN114820924A (en) Method and system for analyzing museum visit based on BIM and video monitoring
Pulcrano et al. 3D cameras acquisitions for the documentation of cultural heritage
Barrile et al. 3D modeling with photogrammetry by UAVs and model quality verification
WO2022078439A1 (en) Apparatus and method for acquisition and matching of 3d information of space and object
CN205352322U (en) Large -scale complicated curved surface measurement system
CN112381190B (en) Cable force testing method based on mobile phone image recognition
Itakura et al. Voxel-based leaf area estimation from three-dimensional plant images
CN112254671B (en) Multi-time combined 3D acquisition system and method
Uyanik et al. A method for determining 3D surface points of objects by a single camera and rotary stage
Ahmad Yusri et al. Preservation of cultural heritage: a comparison study of 3D modelling between laser scanning, depth image, and photogrammetry methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant