CN110231023B - Intelligent visual sampling method, system and device - Google Patents
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C15/00—Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
- G01C15/02—Means for marking measuring points
Abstract
The invention relates to the field of measurement, and in particular to an intelligent visual sampling method, system and device. The method establishes a reference coordinate system at the camera's initial position, captures an image with the camera at a sampling position, and selects sampling points in the captured image. For each sampling point, the deviation angle and distance measured once the laser spot has been driven onto it are taken as its coordinates in the reference coordinate system, completing the information sampling of all points; finally a spatial three-dimensional model containing the information of every sampling point is output. Survey data can be read quickly from this model, enabling accurate and efficient engineering acceptance.
Description
Technical Field
The invention relates to the field of measurement, in particular to an intelligent visual sampling method, system and device.
Background
Engineering construction is carried out according to construction design drawings, and construction acceptance is checked against those drawings to prevent the construction unit from cutting corners and skimping on materials. In an engineering survey, a total station is generally used: relying on manual operation, the operator controls the horizontal and vertical brake screws, adjusts the angles, and observes through the eyepiece during adjustment until the crosshair is aligned with the target point, completing the sampling of a single target point. The whole procedure requires manual participation; the total station is a precision instrument, and adjusting its screws demands concentration. When many target points must be sampled in succession, the process is time-consuming and laborious, and it is difficult to complete accurately and efficiently by hand.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide an intelligent visual sampling method, system and device that use a camera instead of the human eye to identify, track and measure targets, so that sampling can be completed automatically, accurately and efficiently.
In order to solve the above technical problems, a first technical solution adopted by the present invention is:
an intelligent visual sampling method, comprising the steps of:
S1, record the initial position of the camera and establish a reference coordinate system; the camera carries a laser emission source, and the laser spot falls within the camera's shooting frame;
S2, capture an image with the camera at a sampling position, and calculate a first pixel coordinate of the laser spot on the captured image from the measured laser-to-spot distance;
S3, receive a sampling point in the captured image to obtain a second pixel coordinate corresponding to the sampling point;
S4, adjust the camera's shooting angle according to the difference between the first and second pixel coordinates until the sampling point coincides with the spot at the first pixel coordinate, giving the current position of the camera;
S5, compare the current position of the camera with its recorded initial position to obtain the deviation angle of the sampling point, and obtain the distance to the sampling point measured by the laser at the current position;
S6, take the deviation angle and distance of the sampling point as its coordinates in the reference coordinate system, and repeat steps S3-S5 until all sampling points in the captured image have been received; then build the corresponding spatial three-dimensional model from all the sampling points.
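The control loop of steps S1-S6 can be sketched as follows; `camera`, `laser` and `encoder` are hypothetical device interfaces invented for illustration, not APIs defined by the patent.

```python
def run_sampling(camera, laser, encoder, sample_pixels):
    """Skeleton of steps S1-S6 over hypothetical device objects."""
    camera.record_initial_position()               # S1: fix the reference frame
    points = []
    for px in sample_pixels:                       # S3: each requested point
        spot = camera.spot_pixel(laser.distance())  # S2: spot pixel from distance
        camera.rotate_until(spot, px)              # S4: drive spot onto the point
        alpha, beta = encoder.deviation_angles()   # S5: angles vs. initial position
        points.append((alpha, beta, laser.distance()))
    return points                                  # S6: input to the 3-D model
```

Each tuple in the result is one sampling point's (vertical angle, horizontal angle, distance), i.e. its coordinates in the reference frame before conversion to Cartesian form.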
The second technical scheme adopted by the invention is as follows:
an intelligent visual sampling system comprising one or more processors and memory, said memory storing a program that when executed by the processors performs the steps of:
S1, record the initial position of the camera and establish a reference coordinate system; the camera carries a laser emission source, and the laser spot falls within the camera's shooting frame;
S2, capture an image with the camera at a sampling position, and calculate a first pixel coordinate of the laser spot on the captured image from the measured laser-to-spot distance;
S3, receive a sampling point in the captured image to obtain a second pixel coordinate corresponding to the sampling point;
S4, adjust the camera's shooting angle according to the difference between the first and second pixel coordinates until the sampling point coincides with the spot at the first pixel coordinate, giving the current position of the camera;
S5, compare the current position of the camera with its recorded initial position to obtain the deviation angle of the sampling point, and obtain the distance to the sampling point measured by the laser at the current position;
S6, take the deviation angle and distance of the sampling point as its coordinates in the reference coordinate system, and repeat steps S3-S5 until all sampling points in the captured image have been received; then build the corresponding spatial three-dimensional model from all the sampling points.
The third technical scheme adopted by the invention is as follows:
an intelligent visual sampling device comprising a camera, a laser range finder, a pan/tilt head, an angle encoder and a processor; the laser range finder is mounted on the camera, and the spot of its laser falls within the camera's shooting frame; the camera and the angle encoder are each mounted on the pan/tilt head and can rotate through a full angle with it, and the camera, the laser range finder and the angle encoder are each electrically connected to the processor.
The invention has the beneficial effects that:
The intelligent visual sampling method, system and device establish a reference coordinate system at the camera's initial position, capture an image with the camera at a sampling position, and select sampling points in the captured image. For each sampling point, the deviation angle and distance measured once the laser spot has been driven onto it are taken as its coordinates in the reference coordinate system, completing the information sampling of all points; finally a spatial three-dimensional model containing the information of every sampling point is output. Survey data can be read quickly from this model, enabling accurate and efficient engineering acceptance.
Drawings
FIG. 1 is a flow chart of the steps of the intelligent visual sampling method of the present invention;
FIG. 2 is a schematic diagram of an intelligent visual sampling system according to the present invention;
FIG. 3 is a schematic diagram of a coordinate system on a captured image according to the present invention;
description of reference numerals:
1. a processor; 2. a memory.
Detailed Description
To explain the technical content, objects and effects of the present invention in detail, the following description refers to the accompanying drawings in combination with the embodiments.
Referring to fig. 1, the intelligent visual sampling method provided by the present invention includes the following steps:
S1, record the initial position of the camera and establish a reference coordinate system; the camera carries a laser emission source, and the laser spot falls within the camera's shooting frame;
S2, capture an image with the camera at a sampling position, and calculate a first pixel coordinate of the laser spot on the captured image from the measured laser-to-spot distance;
S3, receive a sampling point in the captured image to obtain a second pixel coordinate corresponding to the sampling point;
S4, adjust the camera's shooting angle according to the difference between the first and second pixel coordinates until the sampling point coincides with the spot at the first pixel coordinate, giving the current position of the camera;
S5, compare the current position of the camera with its recorded initial position to obtain the deviation angle of the sampling point, and obtain the distance to the sampling point measured by the laser at the current position;
S6, take the deviation angle and distance of the sampling point as its coordinates in the reference coordinate system, and repeat steps S3-S5 until all sampling points in the captured image have been received; then build the corresponding spatial three-dimensional model from all the sampling points.
From the above description, the beneficial effects of the present invention are:
The intelligent visual sampling method provided by the invention establishes a reference coordinate system at the camera's initial position, captures an image with the camera at a sampling position, and selects sampling points in the captured image. For each sampling point, the deviation angle and distance measured once the laser spot has been driven onto it are taken as its coordinates in the reference coordinate system, completing the information sampling of all points; finally a spatial three-dimensional model containing the information of every sampling point is output. Survey data can be read quickly from this model, enabling accurate and efficient engineering acceptance.
Further, step S2 is specifically:
project the laser spot of the camera's laser onto several targets of identical construction placed at different distances from the laser emission source, and record the corresponding pixel coordinate data of the spot on the targets at each distance;
fit an image motion track model of the laser spot over distance from the pixel coordinate data recorded at the different distances;
capture an image with the camera at a sampling position, substitute the measured laser-to-spot distance into the image motion track model, and calculate the first pixel coordinate of the laser spot on the captured image.
As can be seen from the above, this method converts the laser's ranging data (i.e. the distance from the laser to the spot) into a pixel coordinate on the captured image, which facilitates the subsequent acquisition of survey data.
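As one concrete choice of "image motion track model", the spot's pixel coordinates can be fitted against the reciprocal of distance, since the parallax between laser axis and camera axis decays roughly as 1/D. The calibration numbers below are illustrative, not taken from the patent.

```python
import numpy as np

# Calibration: spot pixel coordinates observed on targets at known distances.
distances = np.array([2.0, 4.0, 8.0, 16.0, 32.0])   # metres
us = np.array([402.0, 361.0, 340.5, 330.2, 325.1])  # spot column (px)
vs = np.array([298.0, 269.0, 254.5, 247.3, 243.6])  # spot row (px)

# Fit u and v as linear functions of 1/D (spot converges to a fixed pixel
# as distance grows).
inv_d = 1.0 / distances
cu = np.polyfit(inv_d, us, 1)   # u(D) ~ cu[0]/D + cu[1]
cv = np.polyfit(inv_d, vs, 1)

def spot_pixel(distance_m):
    """Predict the laser spot's pixel coordinate from a measured distance."""
    return (np.polyval(cu, 1.0 / distance_m),
            np.polyval(cv, 1.0 / distance_m))
```

In practice the fitted model replaces the need to detect the spot in every frame: the range finder's reading alone predicts where the spot lies in the image.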
Further, step S3 is specifically:
receive sampling information, and extract the corresponding sampling object image from a preset image database according to that information;
identify the sampling object image within the captured image to obtain the corresponding second pixel coordinate.
As described above, this method enables automatic identification and improves measurement efficiency.
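A minimal stand-in for this identification step is plain template matching; the patent's own embodiment trains a region-based CNN, so the zero-mean correlation below is only a simplified sketch of locating a sampling object image inside the captured image.

```python
import numpy as np

def locate_template(image, template):
    """Slide `template` over `image` and return the (row, col) of the
    best-matching window centre (a crude stand-in for the patent's
    CNN-based recognition)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            score = np.sum(w * t)           # correlation with zero-mean template
            if score > best:
                best, best_pos = score, (r + th // 2, c + tw // 2)
    return best_pos
```

The returned centre pixel would serve as the "second pixel coordinate" of the sampling point; a production system would use a proper detector instead of this brute-force scan.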
Referring to fig. 2, the present invention provides an intelligent visual sampling system, which includes one or more processors 1 and a memory 2, where the memory 2 stores a program, and when the program is executed by the processor 1, the following steps are implemented:
S1, record the initial position of the camera and establish a reference coordinate system; the camera carries a laser emission source, and the laser spot falls within the camera's shooting frame;
S2, capture an image with the camera at a sampling position, and calculate a first pixel coordinate of the laser spot on the captured image from the measured laser-to-spot distance;
S3, receive a sampling point in the captured image to obtain a second pixel coordinate corresponding to the sampling point;
S4, adjust the camera's shooting angle according to the difference between the first and second pixel coordinates until the sampling point coincides with the spot at the first pixel coordinate, giving the current position of the camera;
S5, compare the current position of the camera with its recorded initial position to obtain the deviation angle of the sampling point, and obtain the distance to the sampling point measured by the laser at the current position;
S6, take the deviation angle and distance of the sampling point as its coordinates in the reference coordinate system, and repeat steps S3-S5 until all sampling points in the captured image have been received; then build the corresponding spatial three-dimensional model from all the sampling points.
From the above description, the beneficial effects of the present invention are:
The intelligent visual sampling system provided by the invention establishes a reference coordinate system at the camera's initial position, captures an image with the camera at a sampling position, and selects sampling points in the captured image. For each sampling point, the deviation angle and distance measured once the laser spot has been driven onto it are taken as its coordinates in the reference coordinate system, completing the information sampling of all points; finally a spatial three-dimensional model containing the information of every sampling point is output. Survey data can be read quickly from this model, enabling accurate and efficient engineering acceptance.
Further, the program when executed by the processor further performs the steps of:
project the laser spot of the camera's laser onto several targets of identical construction placed at different distances from the laser emission source, and record the corresponding pixel coordinate data of the spot on the targets at each distance;
fit an image motion track model of the laser spot over distance from the pixel coordinate data recorded at the different distances;
capture an image with the camera at a sampling position, substitute the measured laser-to-spot distance into the image motion track model, and calculate the first pixel coordinate of the laser spot on the captured image.
As can be seen from the above, the laser's ranging data (i.e. the distance from the laser to the spot) is converted into a pixel coordinate on the captured image, facilitating the subsequent acquisition of survey data.
Further, the program when executed by the processor further performs the steps of:
receive sampling information, and extract the corresponding sampling object image from a preset image database according to that information;
identify the sampling object image within the captured image to obtain the corresponding second pixel coordinate.
According to the above description, automatic identification can be realized, and the measurement efficiency is improved.
The invention also provides an intelligent visual sampling device, which comprises a camera, a laser range finder, a pan/tilt head, an angle encoder and a processor; the laser range finder is mounted on the camera, and the spot of its laser falls within the camera's shooting frame; the camera and the angle encoder are each mounted on the pan/tilt head and can rotate through a full angle with it, and the camera, the laser range finder and the angle encoder are each electrically connected to the processor.
From the above description, the beneficial effects of the present invention are:
The intelligent visual sampling device provided by the invention establishes a reference coordinate system at the camera's initial position, captures an image with the camera at a sampling position, and selects sampling points in the captured image. For each sampling point, the deviation angle and distance measured once the laser spot has been driven onto it are taken as its coordinates in the reference coordinate system, completing the information sampling of all points; finally a spatial three-dimensional model containing the information of every sampling point is output. Survey data can be read quickly from this model, enabling accurate and efficient engineering acceptance.
Further, the device also comprises a memory, and the memory is electrically connected with the processor.
Referring to fig. 1 and fig. 3, a first embodiment of the present invention is:
the invention provides an intelligent visual sampling method, which comprises the following steps:
S1, record the initial position of the camera and establish a reference coordinate system; the camera carries a laser emission source, and the laser spot falls within the camera's shooting frame;
In this embodiment, the camera's position when the pan/tilt head is at its set horizontal and vertical zero orientations is taken as the initial position of the camera. At this initial position (that is, with the pan/tilt head at its zero position), the laser beam is taken as the Zw axis, and a spatial reference coordinate system Xw-Yw-Zw is established.
S2, rotate the camera so that it is at a sampling position, capture an image, and calculate the first pixel coordinate of the laser spot on the captured image from the measured laser-to-spot distance;
It should be noted that the camera's initial position (i.e. when the horizontal and vertical angles of the pan/tilt head are zero) is a fixed position, and the reference coordinate system is established there. A sampling position is a position to which the camera is rotated to aim at an intended target. For example, the camera may be rotated to a sampling position at the upper-left corner of a building to capture an image, yielding the coordinates of the target sampling points in that image under the reference coordinate system of the initial position; it is then turned to a sampling position at the upper-right corner to capture another image, yielding the coordinates of those target sampling points under the same reference coordinate system. Target sampling points in images taken at any sampling position can thus be unified into the single reference coordinate system established at the camera's initial position.
For example, after the initial position is set, the camera is turned to position A as a sampling position and an image is captured; through the steps described below, the three-dimensional coordinates of the target sampling points in image A under the initial-position reference frame are obtained. The camera is then turned to position B as another sampling position and an image is captured, and the three-dimensional coordinates of the target sampling points in image B under the same frame are obtained. In this way the target sampling points of images A and B are correlated.
Wherein, the step S2 specifically includes:
project the laser spot of the camera's laser onto several targets of identical construction placed at different distances from the laser emission source, and record the corresponding pixel coordinate data of the spot on the targets at each distance;
fit an image motion track model of the laser spot over distance from the pixel coordinate data recorded at the different distances;
capture an image with the camera at a sampling position, substitute the measured laser-to-spot distance into the image motion track model, and calculate the first pixel coordinate of the laser spot on the captured image.
In this embodiment, the laser range finder measures the spot distance, and the spot's pixel coordinate is calculated from it. The list of sampling-point pixel coordinates is traversed, and a list of horizontal and vertical pixel-coordinate differences between the spot and each sampling point is computed; this pixel-difference list is converted into an angle-difference list through the camera's field of view. The pan/tilt head is then rotated so that the spot's pixel coordinate coincides with the sampling point's; the laser range finder measures the distance to the sampling point and the angle encoder measures its deviation angles, and together the distances and deviation angles form the sampling-point information list.
The spot's pixel coordinate is calculated as follows: first, in the laboratory, the spot of the laser range finder is projected onto targets at different distances; the targets are made of a material with good reflectivity and a background color against which the spot is easy to distinguish. Using RGB and circle-center feature recognition and extraction, the pixel coordinates of the spot as imaged by the camera at the different distances are obtained, and an image motion track model of the spot over distance is fitted to these pixel coordinates. Substituting a measured spot distance into the model yields the spot's pixel coordinate in the captured image.
The camera's field of view comprises a horizontal field angle and a vertical field angle: the horizontal field angle is the horizontal angular range of space the camera can see, and the vertical field angle is the vertical angular range.
The angle difference comprises a horizontal and a vertical component: the horizontal angle difference = horizontal pixel-coordinate difference / number of image columns × horizontal field angle, and the vertical angle difference = vertical pixel-coordinate difference / number of image rows × vertical field angle.
The deviation angle comprises a horizontal deviation angle and a vertical deviation angle; when the pan/tilt head is at a sampling position, the angle encoder reads the horizontal and vertical deviation angles of that position relative to the pan/tilt head's initial position.
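The pixel-to-angle conversion above can be written as a small helper; the function name and the degree-based convention are illustrative, not from the patent.

```python
def pixel_to_angle_offset(spot_px, target_px, img_size, fov_deg):
    """Convert the pixel offset between the laser spot and a sampling
    point into pan/tilt angle offsets via the camera's field of view:
    angle = pixel difference / pixel count * field angle."""
    (su, sv), (tu, tv) = spot_px, target_px
    w, h = img_size
    fov_h, fov_v = fov_deg
    d_beta = (tu - su) / w * fov_h    # horizontal angle difference (pan)
    d_alpha = (tv - sv) / h * fov_v   # vertical angle difference (tilt)
    return d_alpha, d_beta
```

Driving the pan/tilt head by these two offsets is what brings the spot's pixel coordinate onto the sampling point's in step S4.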
S3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point; or select a sample point from a sample library.
Wherein, the step S3 specifically includes:
Receive sampling information, and extract the corresponding sampling object image from a preset image database according to that information; identify the sampling object image within the captured image to obtain the corresponding second pixel coordinate.
In this embodiment, as shown in fig. 3, a coordinate system u-v is defined with the top-left corner of the captured image as the origin and the pixel as the unit; the abscissa u and ordinate v of a pixel are its column number and row number in the captured image.
The sample library is a collection of many different sampling targets. A sampling target is the sample drawing and name of an engineering item to be checked during acceptance, such as precast piles or column hoops in building construction. When a sampling target is selected, its sampling points can be located in the image automatically, with no need to pick them by hand.
Automatic location of a sampling target's sampling points in the captured image works as follows. For each project, the cameras first capture images of the engineering items requiring acceptance, such as precast piles and column hoops, forming a large set of data sample images. These images are labeled manually: labeling means framing the engineering-item regions (precast piles, column hoops, and so on) on each data sample image to obtain the region coordinates, and naming each region after its engineering item, yielding annotation labels of the form {data sample image, coordinates | name}. A region-based convolutional neural network deep-learning model with a segmentation mask is then built according to the scale of the data sample set and the number of engineering-item categories, and the annotation labels are fed into it for training. The model's hyperparameters are adjusted continuously while fitting and prediction errors are monitored, and training stops once the prediction error falls below a reasonable threshold, giving a well-tuned parameter model. When a sampling target in the sample library is selected, this model yields the region coordinates of the target in the image, from which the sampling points are obtained automatically.
S4, adjusting the shooting angle of the camera according to the difference value between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point is overlapped with the light spot corresponding to the first pixel coordinate value, and the current position of the camera is obtained;
s5, comparing the current position of the camera with the recorded initial position of the camera to obtain a deviation angle corresponding to the sampling point, and obtaining the distance of the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values under a reference coordinate system, and repeatedly executing the steps S3-S5 until all the sampling points on the shot image are received, and then establishing a corresponding space three-dimensional model according to all the sampling points.
In this embodiment, the coordinates of each sampling point in the reference coordinate system are obtained from the sampling-point information list as
Pw^n = R(Yw, β^n) · R(Xw, α^n) · (0, 0, D^n)^T,
where the superscript n denotes the nth sampling point; a sampling-point coordinate list is obtained in turn.
Here D^n is the distance to the sampling point, α^n is its vertical deviation angle, β^n is its horizontal deviation angle, R(Xw, α) is the transformation matrix for a rotation of α degrees about the Xw axis, and R(Yw, β) is the transformation matrix for a rotation of β degrees about the Yw axis.
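A small numpy sketch of this polar-to-Cartesian conversion, assuming the beam starts along the Zw axis and the horizontal rotation is applied after the vertical one (the source leaves the exact rotation order implicit):

```python
import numpy as np

def rot_x(deg):
    """Transformation matrix for a rotation of `deg` degrees about Xw."""
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(deg):
    """Transformation matrix for a rotation of `deg` degrees about Yw."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def sample_point_coords(distance, alpha_deg, beta_deg):
    """Rotate the initial beam direction (the Zw axis) by the vertical and
    horizontal deviation angles and scale by the measured distance."""
    beam = np.array([0.0, 0.0, distance])
    return rot_y(beta_deg) @ rot_x(alpha_deg) @ beam
```

With zero deviation angles the point lies on the Zw axis at the measured distance, matching the choice of the initial laser beam as that axis.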
Establishing a space three-dimensional model of all the sampling points according to the sampling point coordinate list;
Select any two sampling points A and B in the engineering three-dimensional model, with coordinates (xA, yA, zA) and (xB, yB, zB); size information is obtained as
|AB| = sqrt((xA - xB)^2 + (yA - yB)^2 + (zA - zB)^2).
Select any three sampling points A, B and C in the model; with side lengths a = |BC|, b = |AC|, c = |AB| and
p = (a + b + c)/2,
area information is obtained by Heron's formula
S = sqrt(p(p - a)(p - b)(p - c)).
Select any number of non-coplanar sampling points in the model, for example four points A, B, C and D; volume information is obtained as
V = (1/6)|det[AB, AC, AD]|,
where AB, AC and AD denote the vectors from A to B, C and D.
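The size, area and volume computations can be sketched directly from the sampling-point coordinate list; the Heron half-perimeter p = (a + b + c)/2 follows the description, and the tetrahedron determinant is one standard way to realize the volume step.

```python
import numpy as np

def edge_length(a, b):
    """Size information: distance between two sampling points."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def triangle_area(a, b, c):
    """Area information via Heron's formula with p = (a + b + c) / 2."""
    la, lb, lc = edge_length(b, c), edge_length(a, c), edge_length(a, b)
    p = (la + lb + lc) / 2
    return float(np.sqrt(p * (p - la) * (p - lb) * (p - lc)))

def tetrahedron_volume(a, b, c, d):
    """Volume information for four non-coplanar sampling points:
    V = |det[AB, AC, AD]| / 6."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    return float(abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6)
```

Feeding these helpers coordinates from the sampling-point list yields the dimensions checked during acceptance, for example the span of a precast pile or the cross-sectional area of a column.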
In this way, survey data can be acquired rapidly, and accurate and efficient engineering acceptance is achieved.
Referring to fig. 2, the second embodiment of the present invention is:
the invention provides an intelligent visual sampling system, which comprises one or more processors 1 and a memory 2, wherein the memory 2 stores a program, and the program realizes the following steps when being executed by the processor 1:
S1, record the initial position of the camera and establish a reference coordinate system; the camera carries a laser emission source, and the laser spot falls within the camera's shooting frame;
S2, capture an image with the camera at a sampling position, and calculate a first pixel coordinate of the laser spot on the captured image from the measured laser-to-spot distance;
S3, receive a sampling point in the captured image to obtain a second pixel coordinate corresponding to the sampling point;
S4, adjust the camera's shooting angle according to the difference between the first and second pixel coordinates until the sampling point coincides with the spot at the first pixel coordinate, giving the current position of the camera;
S5, compare the current position of the camera with its recorded initial position to obtain the deviation angle of the sampling point, and obtain the distance to the sampling point measured by the laser at the current position;
S6, take the deviation angle and distance of the sampling point as its coordinates in the reference coordinate system, and repeat steps S3-S5 until all sampling points in the captured image have been received; then build the corresponding spatial three-dimensional model from all the sampling points.
Further, when executed by the processor, the program also performs the following steps:
projecting the laser spot of the camera's laser onto a plurality of targets of identical structure placed at different distances from the laser emission source, to obtain the corresponding pixel coordinate data on the targets at the different distances;
fitting an image motion trajectory model of the laser spot at different distances from the pixel coordinate data collected on those targets;
shooting with the camera at the sampling position to obtain a shot image, substituting the distance from the laser to the spot into the image motion trajectory model, and calculating the first pixel coordinate value of the laser spot on the shot image.
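Because the laser emitter is offset from the camera's optical centre, the spot drifts across the image as target distance changes (parallax), which is why a distance-dependent trajectory model is needed. One plausible realisation is a low-order polynomial in inverse distance fitted to the calibration measurements; all numbers below are illustrative, not taken from the patent:

```python
import numpy as np

# Calibration: spot pixel coordinates measured on identically structured
# targets at known distances (illustrative values).
distances = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # metres
u_px = np.array([812.0, 745.0, 712.0, 695.0, 687.0])
v_px = np.array([560.0, 551.0, 546.0, 544.0, 543.0])

# Parallax shifts the spot roughly linearly in 1/d, so fit in that variable.
inv_d = 1.0 / distances
fit_u = np.polyfit(inv_d, u_px, 2)
fit_v = np.polyfit(inv_d, v_px, 2)

def spot_pixel(distance):
    """Predict the first pixel coordinate of the laser spot on the shot
    image, given the laser-measured distance to the spot."""
    x = 1.0 / distance
    return float(np.polyval(fit_u, x)), float(np.polyval(fit_v, x))
```

At sampling time, the rangefinder reading is substituted into `spot_pixel` to obtain the first pixel coordinate value without having to detect the spot optically in every frame.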
Further, when executed by the processor, the program also performs the following steps:
receiving sampling information, and extracting the corresponding sampling object image from a preset image database according to the sampling information;
and identifying the sampling object image within the shot image to obtain the corresponding second pixel coordinate value.
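The patent leaves the recognition method open (a later embodiment uses a region-based convolutional network). As a lightweight stand-in, normalised cross-correlation template matching illustrates how locating the sampling object image in the shot image yields the second pixel coordinate value:

```python
import numpy as np

def locate_template(image, template):
    """Return the centre pixel of the best match of `template` inside
    `image`, using normalised cross-correlation.  A simple stand-in for
    the recognition step; not the patent's trained model."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_uv = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            score = np.sum(t * (w - w.mean()) / (w.std() + 1e-9))
            if score > best:
                best, best_uv = score, (x + tw // 2, y + th // 2)
    return best_uv  # second pixel coordinate (centre of the match)

# Toy check: embed a non-uniform patch and recover its centre
img = np.zeros((40, 40))
img[10:15, 20:25] = np.arange(25.0).reshape(5, 5)
centre = locate_template(img, img[10:15, 20:25])
```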
The third embodiment of the invention is as follows:
The invention provides an intelligent visual sampling device comprising a camera, a laser range finder, a holder (pan-tilt head), an angle encoder and a processor. The laser range finder is mounted on the camera, and the laser spot of the range finder is located within the shooting picture of the camera. The camera and the angle encoder are each mounted on the holder and can rotate with it through a full angle; the camera, the laser range finder and the angle encoder are each electrically connected to the processor.
The processor is configured to: record the initial position of the camera and establish a reference coordinate system; control the camera to shoot at the sampling position to obtain a shot image, and calculate a first pixel coordinate value of the laser spot on the shot image from the distance between the laser and the spot; receive a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point; adjust the shooting angle of the camera by rotating the holder according to the difference between the first and second pixel coordinate values, so that the sampling point coincides with the spot at the first pixel coordinate value, thereby obtaining the current position of the camera; compare the current position of the camera with the recorded initial position to obtain the deviation angle corresponding to the sampling point, and acquire the distance to the sampling point measured by the laser at the current position; and, taking the deviation angle and distance corresponding to each sampling point as coordinate values in the reference coordinate system, select further sampling points until all sampling points on the shot image have been received, then establish a corresponding spatial three-dimensional model from all the sampling points.
Further, the device also comprises a memory electrically connected to the processor. The memory stores an image database (sampling library) containing a collection of different sampling targets. A sampling target is the sample drawing and name of an engineering item to be checked during engineering acceptance, such as the precast piles and column hoops of building construction. When a sampling target is selected, its sampling points can be located in the shot image automatically, without manual selection.

Automatic positioning works as follows. For each item, the cameras shoot in advance a large number of images of the engineering items requiring acceptance (precast piles, column hoops and the like), forming a set of data sample images. These images are labelled manually: the engineering item areas are framed on each data sample image to obtain the coordinates of the item area on the image, and each area is named after its engineering item, yielding annotation tags of the form {data sample image, coordinates | name}. A region-based convolutional neural network deep learning model with a segmentation mask is then constructed according to the scale of the data sample images and the number of engineering item varieties, and the annotation tags are injected into it for training. The model's hyper-parameters are adjusted continually while the fitting and prediction errors are observed; training stops once the prediction error falls below a reasonable threshold, leaving a well-tuned parameter model. When a sampling target in the sampling library is selected, this model yields the area coordinates of the target in the image, and the sampling points are obtained automatically.
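The annotation tags described above can be represented concretely as records pairing an image with a framed item area and its name. This is a hedged sketch only: the field names and values are illustrative, since the patent specifies just the {data sample image, coordinates | name} structure:

```python
# Illustrative annotation records for training the region-based model.
# Field names ("image", "bbox", "label") are assumptions, not the patent's.
annotations = [
    {"image": "site_001.jpg",
     "bbox": [120, 80, 360, 420],   # x1, y1, x2, y2 of the framed item area
     "label": "precast_pile"},
    {"image": "site_002.jpg",
     "bbox": [40, 60, 200, 300],
     "label": "column_hoop"},
]

# The number of distinct labels determines the model's class count
num_classes = len({a["label"] for a in annotations})
```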
In summary, the intelligent visual sampling method, system and device provided by the invention establish a reference coordinate system from the initial position of the camera; the camera shoots at the sampling position to obtain a shot image, sampling points are selected from the shot image, and the information sampling of every point is completed by taking the deviation angle and distance corresponding to the laser spot's position, measured relative to the camera's initial position, as coordinate values in the reference coordinate system; finally, a spatial three-dimensional model containing the information of all sampling points is output.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent changes made using the contents of the present specification and drawings, whether applied directly or indirectly in related technical fields, are included within the scope of the present invention.
Claims (6)
1. An intelligent visual sampling method, comprising the steps of:
S1, recording the initial position of the camera and establishing a reference coordinate system; the camera is provided with a laser emission source, and the laser spot is located within the shooting picture of the camera;
S2, shooting with the camera at the sampling position to obtain a shot image, and calculating a first pixel coordinate value of the laser spot on the shot image from the distance between the laser and the spot;
S3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
S4, adjusting the shooting angle of the camera according to the difference between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point coincides with the spot at the first pixel coordinate value, thereby obtaining the current position of the camera;
S5, comparing the current position of the camera with the recorded initial position of the camera to obtain the deviation angle corresponding to the sampling point, and acquiring the distance to the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values in the reference coordinate system, repeating steps S3-S5 until all sampling points on the shot image have been received, and then establishing a corresponding spatial three-dimensional model from all the sampling points.
2. The intelligent visual sampling method according to claim 1, wherein step S2 specifically comprises:
projecting the laser spot of the camera's laser onto a plurality of targets of identical structure placed at different distances from the laser emission source, to obtain the corresponding pixel coordinate data on the targets at the different distances;
fitting an image motion trajectory model of the laser spot at different distances from the pixel coordinate data collected on those targets;
shooting with the camera at the sampling position to obtain a shot image, substituting the distance from the laser to the spot into the image motion trajectory model, and calculating the first pixel coordinate value of the laser spot on the shot image.
3. The intelligent visual sampling method according to claim 1, wherein step S3 specifically comprises:
receiving sampling information, and extracting the corresponding sampling object image from a preset image database according to the sampling information;
and identifying the sampling object image within the shot image to obtain the corresponding second pixel coordinate value.
4. An intelligent visual sampling system comprising one or more processors and memory, said memory storing a program that when executed by the processors performs the steps of:
S1, recording the initial position of the camera and establishing a reference coordinate system; the camera is provided with a laser emission source, and the laser spot is located within the shooting picture of the camera;
S2, shooting with the camera at the sampling position to obtain a shot image, and calculating a first pixel coordinate value of the laser spot on the shot image from the distance between the laser and the spot;
S3, receiving a sampling point in the shot image to obtain a second pixel coordinate value corresponding to the sampling point;
S4, adjusting the shooting angle of the camera according to the difference between the first pixel coordinate value and the second pixel coordinate value, so that the sampling point coincides with the spot at the first pixel coordinate value, thereby obtaining the current position of the camera;
S5, comparing the current position of the camera with the recorded initial position of the camera to obtain the deviation angle corresponding to the sampling point, and acquiring the distance to the sampling point measured by the laser at the current position;
and S6, taking the deviation angle and the distance corresponding to the sampling point as coordinate values in the reference coordinate system, repeating steps S3-S5 until all sampling points on the shot image have been received, and then establishing a corresponding spatial three-dimensional model from all the sampling points.
5. The intelligent visual sampling system of claim 4, wherein the program, when executed by the processor, further performs the steps of:
projecting the laser spot of the camera's laser onto a plurality of targets of identical structure placed at different distances from the laser emission source, to obtain the corresponding pixel coordinate data on the targets at the different distances;
fitting an image motion trajectory model of the laser spot at different distances from the pixel coordinate data collected on those targets;
shooting with the camera at the sampling position to obtain a shot image, substituting the distance from the laser to the spot into the image motion trajectory model, and calculating the first pixel coordinate value of the laser spot on the shot image.
6. The intelligent visual sampling system of claim 4, wherein the program, when executed by the processor, further performs the steps of:
receiving sampling information, and extracting the corresponding sampling object image from a preset image database according to the sampling information;
and identifying the sampling object image within the shot image to obtain the corresponding second pixel coordinate value.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910353497.8A CN110231023B (en) | 2019-04-29 | 2019-04-29 | Intelligent visual sampling method, system and device |
CN202010080608.5A CN111076712B (en) | 2019-04-29 | 2019-04-29 | Automatic building method, system and device of space three-dimensional model |
CN202010080606.6A CN111256669A (en) | 2019-04-29 | 2019-04-29 | Automatic sampling device |
PCT/CN2019/101688 WO2020220522A1 (en) | 2019-04-29 | 2019-08-21 | Intelligent visual sampling method, system and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910353497.8A CN110231023B (en) | 2019-04-29 | 2019-04-29 | Intelligent visual sampling method, system and device |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010080608.5A Division CN111076712B (en) | 2019-04-29 | 2019-04-29 | Automatic building method, system and device of space three-dimensional model |
CN202010080606.6A Division CN111256669A (en) | 2019-04-29 | 2019-04-29 | Automatic sampling device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110231023A CN110231023A (en) | 2019-09-13 |
CN110231023B true CN110231023B (en) | 2020-02-21 |
Family
ID=67860934
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010080606.6A Pending CN111256669A (en) | 2019-04-29 | 2019-04-29 | Automatic sampling device |
CN201910353497.8A Active CN110231023B (en) | 2019-04-29 | 2019-04-29 | Intelligent visual sampling method, system and device |
CN202010080608.5A Active CN111076712B (en) | 2019-04-29 | 2019-04-29 | Automatic building method, system and device of space three-dimensional model |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010080606.6A Pending CN111256669A (en) | 2019-04-29 | 2019-04-29 | Automatic sampling device |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010080608.5A Active CN111076712B (en) | 2019-04-29 | 2019-04-29 | Automatic building method, system and device of space three-dimensional model |
Country Status (2)
Country | Link |
---|---|
CN (3) | CN111256669A (en) |
WO (1) | WO2020220522A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111414508B (en) * | 2020-03-17 | 2022-09-13 | 金钱猫科技股份有限公司 | Method and terminal for searching and realizing visualization in design model |
CN111614903B (en) * | 2020-05-28 | 2021-08-10 | 西安航空学院 | Method for removing faculae in image shooting |
CN112099028A (en) * | 2020-09-03 | 2020-12-18 | 深圳市迈测科技股份有限公司 | Laser spot automatic tracking method and device, storage medium and laser ranging device |
CN112288810B (en) * | 2020-10-29 | 2023-04-07 | 铜陵有色金属集团股份有限公司 | Sampling positioning method, device, system and computer storage medium |
CN112866579B (en) * | 2021-02-08 | 2022-07-01 | 上海巡智科技有限公司 | Data acquisition method and device and readable storage medium |
CN113050113B (en) * | 2021-03-10 | 2023-08-01 | 广州南方卫星导航仪器有限公司 | Laser spot positioning method and device |
CN113552054A (en) * | 2021-07-16 | 2021-10-26 | 苏州苏试试验集团股份有限公司 | Control device and control method for automatic positioning of environmental test chamber |
CN113693636B (en) * | 2021-08-30 | 2023-11-24 | 南方科技大学 | Sampling method, sampling system and storage medium |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005127992A (en) * | 2003-09-30 | 2005-05-19 | Tokyo Univ Of Agriculture | Instrument and method for measuring position of moving object by laser range finder |
JP5019478B2 (en) * | 2008-09-26 | 2012-09-05 | 独立行政法人日本原子力研究開発機構 | Marker automatic registration method and system |
CN102445183B (en) * | 2011-10-09 | 2013-12-18 | 福建汇川数码技术科技有限公司 | Positioning method of ranging laser point of remote ranging system based on paralleling of laser and camera |
CN103047969B (en) * | 2012-12-07 | 2016-03-16 | 北京百度网讯科技有限公司 | By method and the mobile terminal of mobile terminal generating three-dimensional figures picture |
CN203148438U (en) * | 2013-03-08 | 2013-08-21 | 武汉海达数云技术有限公司 | Integrated mobile three-dimensional measuring device |
CN103557796B (en) * | 2013-11-19 | 2016-06-08 | 天津工业大学 | 3 D positioning system and localization method based on laser ranging and computer vision |
CN103557821A (en) * | 2013-11-21 | 2014-02-05 | 福建汇川数码技术科技有限公司 | Method for achieving three-dimensional space measuring under non-leveling, non-centering and height-measuring states |
CN104613948A (en) * | 2015-02-03 | 2015-05-13 | 北京航空航天大学 | Multi-angle tunable laser dotting device |
AU2015101098A6 (en) * | 2015-08-10 | 2016-03-10 | Wisetech Global Limited | Volumetric estimation methods, devices, & systems |
CN105486289B (en) * | 2016-01-31 | 2018-03-23 | 山东科技大学 | A kind of laser photography measuring system and camera calibration method |
CN106152971B (en) * | 2016-07-28 | 2018-07-17 | 南京航空航天大学 | Laser three-dimensional scanning marker method under machine vision auxiliary |
CN106073895B (en) * | 2016-08-12 | 2019-01-04 | 杭州三坛医疗科技有限公司 | Noninvasive type real-time surgery positions 3D navigation equipment |
CN106249427B (en) * | 2016-08-31 | 2018-11-20 | 河北汉光重工有限责任公司 | A kind of optic axis adjusting method based on laser imaging |
CN108828555B (en) * | 2017-05-18 | 2020-08-04 | 金钱猫科技股份有限公司 | Accurate measurement method, system and device based on coordinate transformation |
CN108844456B (en) * | 2017-05-18 | 2020-06-02 | 金钱猫科技股份有限公司 | Rapid image measuring method and system |
CN108050928B (en) * | 2017-09-05 | 2024-03-12 | 东莞中子科学中心 | Visual measuring instrument and visual measuring method |
CN107909029A (en) * | 2017-11-14 | 2018-04-13 | 福州瑞芯微电子股份有限公司 | A kind of real scene virtualization acquisition method and circuit |
CN109307477B (en) * | 2018-12-04 | 2020-10-13 | 福建汇川物联网技术科技股份有限公司 | Displacement measurement system and method |
2019
- 2019-04-29 CN CN202010080606.6A patent/CN111256669A/en active Pending
- 2019-04-29 CN CN201910353497.8A patent/CN110231023B/en active Active
- 2019-04-29 CN CN202010080608.5A patent/CN111076712B/en active Active
- 2019-08-21 WO PCT/CN2019/101688 patent/WO2020220522A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111076712A (en) | 2020-04-28 |
WO2020220522A1 (en) | 2020-11-05 |
CN111076712B (en) | 2021-08-31 |
CN110231023A (en) | 2019-09-13 |
CN111256669A (en) | 2020-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110231023B (en) | Intelligent visual sampling method, system and device | |
US9965870B2 (en) | Camera calibration method using a calibration target | |
Biskup et al. | A stereo imaging system for measuring structural parameters of plant canopies | |
Herráez et al. | 3D modeling by means of videogrammetry and laser scanners for reverse engineering | |
CN105069743B (en) | Detector splices the method for real time image registration | |
CN102834691B (en) | Surveying method | |
US20140132729A1 (en) | Method and apparatus for camera-based 3d flaw tracking system | |
CN105627948A (en) | Large-scale complex curved surface measurement system and application thereof | |
Moussa | Integration of digital photogrammetry and terrestrial laser scanning for cultural heritage data recording | |
CN108596117B (en) | Scene monitoring method based on two-dimensional laser range finder array | |
CN112818990B (en) | Method for generating target detection frame, method and system for automatically labeling image data | |
CN109900274B (en) | Image matching method and system | |
CN114838668B (en) | Tunnel displacement monitoring method and system | |
Borrmann et al. | Robotic mapping of cultural heritage sites | |
CN111611989A (en) | Multi-target accurate positioning identification method based on autonomous robot | |
CN111060006A (en) | Viewpoint planning method based on three-dimensional model | |
CN112529948A (en) | Mature pomegranate positioning method based on Mask R-CNN and 3-dimensional sphere fitting | |
CN114820924A (en) | Method and system for analyzing museum visit based on BIM and video monitoring | |
Barrile et al. | 3D modeling with photogrammetry by UAVs and model quality verification | |
CN205352322U (en) | Large -scale complicated curved surface measurement system | |
WO2022078439A1 (en) | Apparatus and method for acquisition and matching of 3d information of space and object | |
CN112381190B (en) | Cable force testing method based on mobile phone image recognition | |
Uyanik et al. | A method for determining 3D surface points of objects by a single camera and rotary stage | |
Kahmen et al. | 3D object sensing using rotating CCD cameras | |
Simon et al. | Asserting the precise position of 3D and multispectral acquisition systems for multisensor registration applied to cultural heritage analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||