CN110619663A - Video image target positioning method based on three-dimensional laser point cloud - Google Patents
- Publication number
- CN110619663A (application number CN201910799858.1A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
Abstract
The invention discloses a video image target positioning method based on three-dimensional laser point cloud, belonging to the technical field of vehicle-mounted laser scanning point cloud data processing. A vehicle-mounted mobile measurement system is used to quickly acquire three-dimensional laser point cloud data within the monitoring range of a video monitoring device. Using the laser point cloud data and one frame of the video obtained by the device, homonymous feature points in the point cloud and the image are matched, and the interior and exterior orientation elements of the video monitoring device are calculated. A distance image with the same resolution as the video image and consistent with the shooting direction of the video device is then generated from these orientation elements and the scanned point cloud data. Finally, for the video image at any moment, the image point coordinates of a target object are obtained, the distance corresponding to that image point is read from the distance image, and the actual coordinates of the target are calculated from the exterior orientation elements of the video monitoring device, thereby achieving accurate positioning of the target in the video image.
Description
Technical Field
The invention belongs to the technical field of vehicle-mounted laser scanning point cloud data processing, and particularly relates to a video image target positioning method based on three-dimensional laser point cloud.
Background
Video monitoring technology is widely applied in the security industry, and its applications continue to grow. A traditional video monitoring system is generally limited to obtaining target image features and target feature measurements: target image features are usually judged directly from the video image, and target feature measurement typically estimates the size of a target ground object by comparing the target on the image with a reference feature object on the same image. Most current video monitoring devices consist of a single camera; because of the perspective transformation, the video images they produce cannot by themselves yield the target's position or motion state, so the related geographic information of the target cannot be obtained. How to obtain the geographic information of target ground objects from video images therefore remains a difficulty, and this limits the application of video monitoring technology in practical engineering.
The vehicle-mounted mobile measurement system is a novel active multi-sensor integrated system. It offers high precision, high resolution, high efficiency, safe and convenient operation, real-time data acquisition, the ability to measure at night, a short mapping cycle, and high-speed continuous dynamic measurement. Its main sensors generally comprise a GPS/INS positioning and orientation system, an industrial CCD camera, a three-dimensional laser scanner and an odometer, and it works in a non-contact laser measurement mode.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides a video image target positioning method based on three-dimensional laser point cloud that exploits the relative spatial position relationship between the video image and the point cloud data. By introducing the concept of a projected distance image, the method, combined with a vehicle-mounted mobile measurement system, can quickly and conveniently obtain target geographic coordinate information from video images; the design is reasonable, it overcomes the defects of the prior art, and it achieves a good effect.
In order to achieve the purpose, the invention adopts the following technical scheme:
a video image target positioning method based on three-dimensional laser point cloud comprises the following steps:
step 1: rapidly acquiring three-dimensional coordinate point cloud data in a monitoring range of video monitoring equipment by using a vehicle-mounted mobile measurement system, converting the three-dimensional coordinate point cloud data into a plane projection coordinate system through data fusion processing and space coordinate reference transformation, and obtaining point cloud data under the plane projection coordinate system as basic point cloud data so that different video equipment data are under the same measurement coordinate system;
step 2: calculating the inner orientation element and the outer orientation element of the video monitoring equipment by selecting the point cloud and the corresponding homonymous feature points on the video monitoring image;
step 3: generating a high-resolution distance image consistent with the orientation of the video image by utilizing the relative spatial position relationship between the video monitoring image and the point cloud;
step 4: selecting any target point from the video monitoring image, acquiring the distance information corresponding to the target point from the distance image according to the correspondence between the video monitoring image and the distance image, and finally calculating the actual coordinates corresponding to the target point according to the spatial position relation of the video monitoring.
Preferably, the step 2 specifically includes the following steps:
step 2.1: selecting characteristic points;
by visual interpretation, feature points on roads and independent rod-shaped objects, or manually laid target points, are selected from the point cloud, and the point cloud feature points (X, Y, Z)^T and their corresponding homonymous image points (xp, yp, zp)^T in the video image are found;
Step 2.2: matching the point cloud with the video image;
let the coordinates of the video camera center in the image space coordinate system be (x0, y0, −f)^T, the zoom factor be λ, the coordinates of the target's image point on the video image be (xp, yp, zp)^T, the coordinates of the target in the plane projection coordinate system be (X, Y, Z)^T, the coordinates of the photographic center of the video device be (X0, Y0, Z0)^T, and the exterior orientation angular elements be φ, ω, κ.
Establish the collinearity equation by utilizing the collinearity of the three points (photographic center, image point and object point):

X − X0 = λ(Xp − X0)
Y − Y0 = λ(Yp − Y0)
Z − Z0 = λ(Zp − Z0)

wherein:
λ is a proportionality coefficient;
X, Y and Z are the actual coordinates of the object point, namely its coordinates in the plane projection coordinate system;
(X0, Y0, Z0)^T are the coordinates of the video camera center (x0, y0, −f)^T in the plane projection coordinate system;
(Xp, Yp, Zp)^T are the coordinates of the image point (xp, yp, zp)^T in the plane projection coordinate system.

The above equation can be converted into:

xp − x0 = −f · [a1(X − X0) + b1(Y − Y0) + c1(Z − Z0)] / [a3(X − X0) + b3(Y − Y0) + c3(Z − Z0)]
yp − y0 = −f · [a2(X − X0) + b2(Y − Y0) + c2(Z − Z0)] / [a3(X − X0) + b3(Y − Y0) + c3(Z − Z0)]

wherein ai, bi and ci (i = 1, 2, 3) are the elements of the rotation matrix R composed from the exterior orientation angular elements φ, ω, κ.

Linearize the equations, perform an adjustment calculation using the least squares principle, and solve the parameters iteratively; terminate the iteration once the result meets the accuracy requirement, obtaining the interior orientation elements and exterior orientation elements of the video device.
Preferably, in step 2.1, at least 5 homonymous feature points are selected as basic data for solution.
Preferably, the step 3 specifically includes the following steps:
step 3.1: according to the plane projection coordinate system, take the X and Y directions of the video image as the distance image imaging plane coordinate system, determine the pixel coordinates of the intersection of the line connecting the target and the video device center with the imaging plane, and calculate the spatial distance between the target and the video device center as the distance value of that pixel, expressed by the combination of the three RGB color components; to ensure that the recorded distance is accurate to the centimeter level, the three color components range over R: 1-100, G: 1-255, B: 1-255;
step 3.2: cyclically traverse all the targets to generate the distance image of the area; because the point cloud density is not consistent with the distance image resolution, several laser points may project into the same pixel, in which case the minimum value is kept; the unscanned positions, namely pixels with no laser point projection, are interpolated by bilinear interpolation to generate a complete distance image.
Preferably, the step 4 specifically includes the following steps:
step 4.1: in any video image, select the target to obtain the image point coordinates (xp, yp, zp)^T corresponding to the target, match them according to the correspondence between the video image and the distance image, and acquire the distance D corresponding to the image point from the distance image;
step 4.2: according to the exterior orientation elements of the video image, convert the coordinates of the target image point in the image space coordinate system into coordinates in the actual plane projection coordinate system; the conversion relation is:

(Xp, Yp, Zp)^T = (X0, Y0, Z0)^T + R · (xp − x0, yp − y0, −f)^T

wherein R is the rotation matrix composed from the exterior orientation angular elements φ, ω, κ;
step 4.3: in the actual plane projection coordinate system, the coordinates of the video photographic center point are (X0, Y0, Z0)^T, and the coordinates of the image point corresponding to the target point are (Xp, Yp, Zp)^T;
the unit direction vector u from the video photographic center point to the target point is:

u = (Xp − X0, Yp − Y0, Zp − Z0)^T / ‖(Xp − X0, Yp − Y0, Zp − Z0)^T‖

step 4.4: according to the distance D between the video photographic center and the target point matched from the distance image in step 4.1, and the direction vector u from the video photographic center point to the target point calculated in step 4.3, the spatial coordinate position of the target point in the actual plane projection coordinate system is obtained:

(X, Y, Z)^T = (X0, Y0, Z0)^T + D · u
the invention has the following beneficial technical effects:
the invention provides a video image target positioning method based on three-dimensional laser point cloud, which is used for researching the geographic position information of a target point obtained in a video image; the concept of projection distance images is introduced, the method is easy to operate, and can be permanently used only by using a vehicle-mounted mobile measurement system to acquire data once, so that a large amount of manpower and material resources are saved; the interaction is simple, and the absolute precision of the target point can reach 5 cm. The method can meet the actual engineering requirements, can introduce geographic information data, and can expand the application of the video technology in various fields.
Drawings
Fig. 1 is a data processing flow chart of a video image target positioning method based on three-dimensional laser point cloud provided by the invention.
Detailed Description
The invention is described in further detail below with reference to the following figures and detailed description:
As shown in fig. 1, a video image target positioning method based on three-dimensional laser point cloud includes the following steps:
step 1: rapidly acquiring three-dimensional coordinate point cloud data in a monitoring range of video monitoring equipment by using a vehicle-mounted mobile measurement system, converting the three-dimensional coordinate point cloud data into a plane projection coordinate system through data fusion processing and space coordinate reference transformation, and obtaining point cloud data under the plane projection coordinate system as basic point cloud data so that different video equipment data are under the same measurement coordinate system;
step 2: calculating the inner orientation element and the outer orientation element of the video monitoring equipment by selecting the point cloud and the corresponding homonymous feature points on the video monitoring image;
the method specifically comprises the following steps:
step 2.1: selecting characteristic points;
by visual interpretation, feature points on roads and independent rod-shaped objects, or manually laid target points, are selected from the point cloud, and the point cloud feature points (X, Y, Z)^T and their corresponding homonymous image points (xp, yp, zp)^T in the video image are found. Because the parameters to be solved comprise the interior orientation elements (x0, y0, −f)^T and the exterior orientation elements (X0, Y0, Z0, φ, ω, κ), there are 9 unknowns in total; each pair of feature points yields 2 equations, so at least 5 points must be selected to solve the 9 unknowns.
Step 2.2: matching the point cloud with the video image;
let the coordinates of the video camera center in the image space coordinate system be (x0, y0, −f)^T, the zoom factor be λ, the coordinates of the target's image point on the video image be (xp, yp, zp)^T, the coordinates of the target in the plane projection coordinate system be (X, Y, Z)^T, the coordinates of the photographic center of the video device be (X0, Y0, Z0)^T, and the exterior orientation angular elements be φ, ω, κ.
Establish the collinearity equation by utilizing the collinearity of the three points (photographic center, image point and object point):

X − X0 = λ(Xp − X0)
Y − Y0 = λ(Yp − Y0)
Z − Z0 = λ(Zp − Z0)

wherein:
λ is a proportionality coefficient;
X, Y and Z are the actual coordinates of the object point, namely its coordinates in the plane projection coordinate system;
(X0, Y0, Z0)^T are the coordinates of the video camera center (x0, y0, −f)^T in the plane projection coordinate system;
(Xp, Yp, Zp)^T are the coordinates of the image point (xp, yp, zp)^T in the plane projection coordinate system.

The above equation can be converted into:

xp − x0 = −f · [a1(X − X0) + b1(Y − Y0) + c1(Z − Z0)] / [a3(X − X0) + b3(Y − Y0) + c3(Z − Z0)]
yp − y0 = −f · [a2(X − X0) + b2(Y − Y0) + c2(Z − Z0)] / [a3(X − X0) + b3(Y − Y0) + c3(Z − Z0)]

wherein ai, bi and ci (i = 1, 2, 3) are the elements of the rotation matrix R composed from the exterior orientation angular elements φ, ω, κ.

Linearize the equations, perform an adjustment calculation using the least squares principle, and solve the parameters iteratively; terminate the iteration once the result meets the accuracy requirement, obtaining the interior orientation elements and exterior orientation elements of the video device.
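The forward model behind the adjustment of step 2.2 can be sketched as follows. Only the collinearity projection, whose linearization is adjusted by least squares, is shown; the phi-omega-kappa axis order of the rotation is an assumption, since the patent does not reproduce its rotation matrix:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return ((1.0, 0.0, 0.0), (0.0, c, -s), (0.0, s, c))

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return ((c, 0.0, s), (0.0, 1.0, 0.0), (-s, 0.0, c))

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return ((c, -s, 0.0), (s, c, 0.0), (0.0, 0.0, 1.0))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def rotation(phi, omega, kappa):
    """Rotation matrix from the exterior orientation angles (assumed axis order)."""
    return matmul(matmul(rot_y(phi), rot_x(omega)), rot_z(kappa))

def collinearity_project(obj, eo, io):
    """Collinearity equations: object point (X, Y, Z) -> image point (x, y).
    eo = (X0, Y0, Z0, R); io = (x0, y0, f).
    The coefficients a_i, b_i, c_i of the equations are the columns of R."""
    X0, Y0, Z0, R = eo
    x0, y0, f = io
    dX = (obj[0] - X0, obj[1] - Y0, obj[2] - Z0)
    num_x = sum(R[j][0] * dX[j] for j in range(3))
    num_y = sum(R[j][1] * dX[j] for j in range(3))
    den = sum(R[j][2] * dX[j] for j in range(3))
    return (x0 - f * num_x / den, y0 - f * num_y / den)
```

In the adjustment, residuals between observed image points and `collinearity_project` outputs would be minimized over the 9 orientation unknowns using at least 5 homonymous points.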
Step 3: generating a high-resolution distance image consistent with the orientation of the video image by utilizing the relative spatial position relationship between the video monitoring image and the point cloud; the method specifically comprises the following steps:
step 3.1: according to the plane projection coordinate system, take the X and Y directions of the video image as the distance image imaging plane coordinate system, determine the pixel coordinates of the intersection of the line connecting the target and the video device center with the imaging plane, and calculate the spatial distance between the target and the video device center as the distance value of that pixel, expressed by the combination of the three RGB color components. For example, at a distance of 260.25 m, the three RGB components are 25, 5 and 1. This ensures that the recorded distance is accurate to the centimeter level; the three color components range over R: 1-100, G: 1-255, B: 1-255;
step 3.2: cyclically traverse all the targets to generate the distance image of the area; because the point cloud density is not consistent with the distance image resolution, several laser points may project into the same pixel, in which case the minimum value is kept; the unscanned positions, namely pixels with no laser point projection, are interpolated by bilinear interpolation to generate a complete distance image.
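Steps 3.1 and 3.2 can be sketched as a minimum-depth buffer over projected laser points. The RGB packing shown is only one plausible centimeter-level encoding within the stated component ranges; the patent's exact packing formula and its interpolation details are not reproduced here, so `encode_rgb_cm` and `decode_rgb_cm` are illustrative assumptions:

```python
import math

def build_range_image(points, camera_center, project_fn, width, height):
    """Project laser points through project_fn -> (col, row); keep the minimum
    camera-to-point distance per pixel, so the nearer surface wins."""
    depth = {}
    for (X, Y, Z) in points:
        pix = project_fn(X, Y, Z)
        if pix is None:
            continue
        col, row = pix
        if 0 <= col < width and 0 <= row < height:
            d = math.dist((X, Y, Z), camera_center)
            key = (row, col)
            if key not in depth or d < depth[key]:
                depth[key] = d
    return depth

def encode_rgb_cm(distance_m):
    """Pack a distance (metres) into three 1-based RGB components at cm
    resolution; base-255 digits, an assumed scheme, not the patent's own."""
    cm = round(distance_m * 100)
    return (cm // (255 * 255) + 1, (cm // 255) % 255 + 1, cm % 255 + 1)

def decode_rgb_cm(rgb):
    """Inverse of encode_rgb_cm, returning metres."""
    r, g, b = rgb
    return ((r - 1) * 255 * 255 + (g - 1) * 255 + (b - 1)) / 100.0
```

The remaining gaps (pixels with no projected laser point) would then be filled by bilinear interpolation over the depth buffer, as step 3.2 describes.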
Step 4: selecting any target point from the video monitoring image, acquiring the distance information corresponding to the target point from the distance image according to the correspondence between the video monitoring image and the distance image, and finally calculating the actual coordinates corresponding to the target point according to the spatial position relation of the video monitoring; the method specifically comprises the following steps:
step 4.1: in any video image, select the target to obtain the image point coordinates (xp, yp, zp)^T corresponding to the target, match them according to the correspondence between the video image and the distance image, and acquire from the distance image the distance D corresponding to the image point, namely the distance between the shooting center of the video monitoring device and the target point;
step 4.2: according to the exterior orientation elements of the video image, convert the coordinates of the target image point in the image space coordinate system into coordinates in the actual plane projection coordinate system; the conversion relation is:

(Xp, Yp, Zp)^T = (X0, Y0, Z0)^T + R · (xp − x0, yp − y0, −f)^T

wherein R is the rotation matrix composed from the exterior orientation angular elements φ, ω, κ;
step 4.3: in the actual plane projection coordinate system, the coordinates of the video photographic center point are (X0, Y0, Z0)^T, and the coordinates of the image point corresponding to the target point are (Xp, Yp, Zp)^T;
the unit direction vector u from the video photographic center point to the target point is:

u = (Xp − X0, Yp − Y0, Zp − Z0)^T / ‖(Xp − X0, Yp − Y0, Zp − Z0)^T‖

step 4.4: according to the distance D between the video photographic center and the target point matched from the distance image in step 4.1, and the direction vector u from the video photographic center point to the target point calculated in step 4.3, the spatial coordinate position of the target point in the actual plane projection coordinate system is obtained:

(X, Y, Z)^T = (X0, Y0, Z0)^T + D · u
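The back-projection of steps 4.3 and 4.4 reduces to scaling a unit direction vector by the matched distance D. A minimal sketch (names illustrative):

```python
import math

def locate_target(camera_center, image_point_world, distance):
    """Unit direction from the photographic center through the image point
    (both in the plane projection system), scaled by the range-image
    distance D, gives the target's world position."""
    dx = [p - c for p, c in zip(image_point_world, camera_center)]
    norm = math.sqrt(sum(v * v for v in dx))
    return tuple(c + distance * v / norm for c, v in zip(camera_center, dx))
```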
and 4, measuring the vehicle speed according to the coordinate system information obtained in the step 4.4, collecting track points of the running of the front right wheel of the vehicle, and measuring and calculating the running speed of the vehicle according to the track and the video frame time interval, wherein the error is less than 0.1 m/s.
Further, the height of a person can be measured from the coordinate information obtained in step 4.4: collect the three-dimensional coordinate information of the person's foot position and head baseline position, and construct a spatial geometric model with the video monitoring camera to measure the height, with an error of less than 5 cm.
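Both application examples reduce to simple geometry on the per-frame 3-D coordinates recovered in step 4.4. A sketch under the assumptions that video frames are evenly spaced and that the Z axis of the plane projection system points up (the patent's head/foot geometric model is more elaborate than this vertical difference):

```python
import math

def vehicle_speed(track_points, frame_interval_s):
    """Average speed (m/s) from consecutive per-frame 3-D positions of the
    front-right wheel, as located through the range image."""
    total = sum(math.dist(a, b)
                for a, b in zip(track_points, track_points[1:]))
    return total / (frame_interval_s * (len(track_points) - 1))

def person_height(foot_xyz, head_xyz):
    """Height (m) as the vertical difference between head and foot points;
    assumes Z is the up axis of the plane projection system."""
    return head_xyz[2] - foot_xyz[2]
```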
It is to be understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art may make modifications, alterations, additions or substitutions within the spirit and scope of the present invention.
Claims (5)
1. A video image target positioning method based on three-dimensional laser point cloud, characterized by comprising the following steps:
step 1: rapidly acquiring three-dimensional coordinate point cloud data in a monitoring range of video monitoring equipment by using a vehicle-mounted mobile measurement system, converting the three-dimensional coordinate point cloud data into a plane projection coordinate system through data fusion processing and space coordinate reference transformation, and obtaining point cloud data under the plane projection coordinate system as basic point cloud data so that different video equipment data are under the same measurement coordinate system;
step 2: calculating the inner orientation element and the outer orientation element of the video monitoring equipment by selecting the point cloud and the corresponding homonymous feature points on the video monitoring image;
step 3: generating a high-resolution distance image consistent with the orientation of the video image by utilizing the relative spatial position relationship between the video monitoring image and the point cloud;
step 4: selecting any target point from the video monitoring image, acquiring the distance information corresponding to the target point from the distance image according to the correspondence between the video monitoring image and the distance image, and finally calculating the actual coordinates corresponding to the target point according to the spatial position relation of the video monitoring.
2. The method for positioning the target of the video surveillance image based on the three-dimensional laser point cloud as claimed in claim 1, wherein: the step 2 specifically comprises the following steps:
step 2.1: selecting characteristic points;
selecting, by visual interpretation, feature points on roads and independent rod-shaped objects, or manually laid target points, from the point cloud, and finding the point cloud feature points (X, Y, Z)^T and their corresponding homonymous image points (xp, yp, zp)^T in the video image;
Step 2.2: matching the point cloud with the video image;
let the coordinates of the video camera center in the image space coordinate system be (x0, y0, −f)^T, the zoom factor be λ, the coordinates of the target's image point on the video image be (xp, yp, zp)^T, the coordinates of the target in the plane projection coordinate system be (X, Y, Z)^T, the coordinates of the photographic center of the video device be (X0, Y0, Z0)^T, and the exterior orientation angular elements be φ, ω, κ.
Establish the collinearity equation by utilizing the collinearity of the three points (photographic center, image point and object point):

X − X0 = λ(Xp − X0)
Y − Y0 = λ(Yp − Y0)
Z − Z0 = λ(Zp − Z0)

wherein:
λ is a proportionality coefficient;
X, Y and Z are the actual coordinates of the object point, namely its coordinates in the plane projection coordinate system;
(X0, Y0, Z0)^T are the coordinates of the video camera center (x0, y0, −f)^T in the plane projection coordinate system;
(Xp, Yp, Zp)^T are the coordinates of the image point (xp, yp, zp)^T in the plane projection coordinate system.

The above equation can be converted into:

xp − x0 = −f · [a1(X − X0) + b1(Y − Y0) + c1(Z − Z0)] / [a3(X − X0) + b3(Y − Y0) + c3(Z − Z0)]
yp − y0 = −f · [a2(X − X0) + b2(Y − Y0) + c2(Z − Z0)] / [a3(X − X0) + b3(Y − Y0) + c3(Z − Z0)]

wherein ai, bi and ci (i = 1, 2, 3) are the elements of the rotation matrix R composed from the exterior orientation angular elements φ, ω, κ.

Linearize the equations, perform an adjustment calculation using the least squares principle, and solve the parameters iteratively; terminate the iteration once the result meets the accuracy requirement, obtaining the interior orientation elements and exterior orientation elements of the video device.
3. The method for positioning the target of the video surveillance image based on the three-dimensional laser point cloud as claimed in claim 2, wherein: in step 2.1, at least 5 homonymous feature points are selected as basic data for calculation.
4. The method for positioning the video image target based on the three-dimensional laser point cloud of claim 1, wherein: the step 3 specifically comprises the following steps:
step 3.1: according to the plane projection coordinate system, take the X and Y directions of the video image as the distance image imaging plane coordinate system, determine the pixel coordinates of the intersection of the line connecting the target and the video device center with the imaging plane, and calculate the spatial distance between the target and the video device center as the distance value of that pixel, expressed by the combination of the three RGB color components; to ensure that the recorded distance is accurate to the centimeter level, the three color components range over R: 1-100, G: 1-255, B: 1-255;
step 3.2: cyclically traverse all the targets to generate the distance image of the area; because the point cloud density is not consistent with the distance image resolution, several laser points may project into the same pixel, in which case the minimum value is kept; the unscanned positions, namely pixels with no laser point projection, are interpolated by bilinear interpolation to generate a complete distance image.
5. The method for positioning the video image target based on the three-dimensional laser point cloud of claim 1, wherein: the step 4 specifically comprises the following steps:
step 4.1: in any video image, select the target to obtain the image point coordinates (xp, yp, zp)^T corresponding to the target, match them according to the correspondence between the video image and the distance image, and acquire the distance D corresponding to the image point from the distance image;
step 4.2: according to the exterior orientation elements of the video image, convert the coordinates of the target image point in the image space coordinate system into coordinates in the actual plane projection coordinate system; the conversion relation is:

(Xp, Yp, Zp)^T = (X0, Y0, Z0)^T + R · (xp − x0, yp − y0, −f)^T

wherein R is the rotation matrix composed from the exterior orientation angular elements φ, ω, κ;
step 4.3: in the actual plane projection coordinate system, the coordinates of the video photographic center point are (X0, Y0, Z0)^T, and the coordinates of the image point corresponding to the target point are (Xp, Yp, Zp)^T;
the unit direction vector u from the video photographic center point to the target point is:

u = (Xp − X0, Yp − Y0, Zp − Z0)^T / ‖(Xp − X0, Yp − Y0, Zp − Z0)^T‖

step 4.4: according to the distance D between the video photographic center and the target point matched from the distance image in step 4.1, and the direction vector u from the video photographic center point to the target point calculated in step 4.3, the spatial coordinate position of the target point in the actual plane projection coordinate system is obtained:

(X, Y, Z)^T = (X0, Y0, Z0)^T + D · u
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910799858.1A CN110619663A (en) | 2019-08-28 | 2019-08-28 | Video image target positioning method based on three-dimensional laser point cloud |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110619663A true CN110619663A (en) | 2019-12-27 |
Family
ID=68922094
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910799858.1A Pending CN110619663A (en) | 2019-08-28 | 2019-08-28 | Video image target positioning method based on three-dimensional laser point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110619663A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056659A (en) * | 2016-05-27 | 2016-10-26 | 山东科技大学 | Building corner space position automatic extraction method in vehicle laser scanning point cloud |
WO2016185637A1 (en) * | 2015-05-20 | 2016-11-24 | 三菱電機株式会社 | Point-cloud-image generation device and display system |
CN106204547A (en) * | 2016-06-29 | 2016-12-07 | 山东科技大学 | The method automatically extracting shaft-like atural object locus from Vehicle-borne Laser Scanning point cloud |
CN109115186A (en) * | 2018-09-03 | 2019-01-01 | 山东科技大学 | A kind of 360 ° for vehicle-mounted mobile measuring system can measure full-view image generation method |
Non-Patent Citations (1)
Title |
---|
Li Zhenghang et al.: "GPS Surveying and Data Processing (3rd Edition)", 31 May 2016 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111223101B (en) * | 2020-01-17 | 2023-08-11 | 湖南视比特机器人有限公司 | Point cloud processing method, point cloud processing system, and storage medium |
CN111223101A (en) * | 2020-01-17 | 2020-06-02 | 湖南视比特机器人有限公司 | Point cloud processing method, point cloud processing system, and storage medium |
CN111950370B (en) * | 2020-07-10 | 2022-08-26 | 重庆邮电大学 | Offline visual odometry extension method for dynamic environments |
CN111950370A (en) * | 2020-07-10 | 2020-11-17 | 重庆邮电大学 | Offline visual odometry extension method for dynamic environments |
CN112102246A (en) * | 2020-08-18 | 2020-12-18 | 东南大学 | Method for evaluating the stability of buttresses on both sides during box culvert jacking |
CN112013830A (en) * | 2020-08-20 | 2020-12-01 | 中国电建集团贵州电力设计研究院有限公司 | Accurate positioning method for defects detected in UAV inspection images of power transmission lines |
CN112013830B (en) * | 2020-08-20 | 2024-01-30 | 中国电建集团贵州电力设计研究院有限公司 | Accurate positioning method for defects detected in UAV inspection images of power transmission lines |
CN113984081A (en) * | 2020-10-16 | 2022-01-28 | 北京猎户星空科技有限公司 | Positioning method and device, self-moving device, and storage medium |
CN113984081B (en) * | 2020-10-16 | 2024-05-03 | 北京猎户星空科技有限公司 | Positioning method and device, self-moving device, and storage medium |
CN113487746B (en) * | 2021-05-25 | 2023-02-24 | 武汉海达数云技术有限公司 | Optimal associated image selection method and system in vehicle-mounted point cloud coloring |
CN113487746A (en) * | 2021-05-25 | 2021-10-08 | 武汉海达数云技术有限公司 | Optimal associated image selection method and system in vehicle-mounted point cloud coloring |
CN113701720A (en) * | 2021-08-31 | 2021-11-26 | 中煤科工集团重庆研究院有限公司 | Identification system for photogrammetric coordinate positioning |
CN113701720B (en) * | 2021-08-31 | 2023-08-15 | 中煤科工集团重庆研究院有限公司 | Identification system for photogrammetric coordinate positioning |
CN114266830A (en) * | 2021-12-28 | 2022-04-01 | 北京建筑大学 | High-precision positioning method for large underground spaces |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110619663A (en) | Video image target positioning method based on three-dimensional laser point cloud | |
US10116920B2 (en) | Balancing colors in a scanned three-dimensional image | |
US11112501B2 (en) | Using a two-dimensional scanner to speed registration of three-dimensional scan data | |
CN109115186B (en) | 360-degree measurable panoramic image generation method for vehicle-mounted mobile measurement system | |
US11035955B2 (en) | Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner | |
US8284240B2 (en) | System for adaptive three-dimensional scanning of surface characteristics | |
Wenzel et al. | Image acquisition and model selection for multi-view stereo | |
Nagai et al. | UAV-borne 3-D mapping system by multisensor integration | |
Scaramuzza et al. | Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes | |
CN109813335B (en) | Calibration method, device and system of data acquisition system and storage medium | |
CN109597095A (en) | Backpack type 3 D laser scanning and three-dimensional imaging combined system and data capture method | |
US20180052233A1 (en) | Using a two-dimensional scanner to speed registration of three-dimensional scan data | |
CN107861920B (en) | Point cloud data registration method | |
KR101308744B1 (en) | System for drawing digital map | |
Kersten et al. | Comparative geometrical investigations of hand-held scanning systems | |
Barazzetti et al. | 3D scanning and imaging for quick documentation of crime and accident scenes | |
JP4077385B2 (en) | Global coordinate acquisition device using image processing | |
US20230351625A1 (en) | A method for measuring the topography of an environment | |
KR101409802B1 (en) | System for analysis space information using three dimensions 3d scanner | |
Rieke-Zapp et al. | Digital photogrammetry for measuring soil surface roughness | |
Mohammed et al. | The effect of polynomial order on georeferencing remote sensing images | |
US11927692B2 (en) | Correcting positions after loop closure in simultaneous localization and mapping algorithm | |
Sundlie et al. | Integer computation of image orthorectification for high speed throughput | |
JP7143001B1 (en) | Measuring system and measuring method | |
Yang et al. | Design of 3D Laser Radar Based on Laser Triangulation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-12-27 |