CN111145095B - VR (virtual reality) graph generation method with scale measurement and data acquisition device - Google Patents

VR (virtual reality) graph generation method with scale measurement and data acquisition device

Info

Publication number
CN111145095B
CN111145095B
Authority
CN
China
Prior art keywords
point cloud
panorama
coordinates
coordinate system
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911362662.2A
Other languages
Chinese (zh)
Other versions
CN111145095A (en)
Inventor
陈诺
洪涛
卢雄辉
欧阳若愚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Nuoda Communication Technology Co ltd
Original Assignee
Shenzhen Nuoda Communication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Nuoda Communication Technology Co ltd filed Critical Shenzhen Nuoda Communication Technology Co ltd
Priority to CN201911362662.2A priority Critical patent/CN111145095B/en
Publication of CN111145095A publication Critical patent/CN111145095A/en
Application granted granted Critical
Publication of CN111145095B publication Critical patent/CN111145095B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a VR map generation method with scale measurement and to a data acquisition device. The method comprises the following steps: S1, acquiring a VR panorama and a set of three-dimensional laser points using the data acquisition device, so that the pixel at every angle of the VR panorama can find a corresponding three-dimensional laser point; S2, generating a local point cloud map; and S3, generating a depth map that supports scale measurement from the VR panorama and the local point cloud map. The invention provides a portable data acquisition device with strong operability; realizes the fusion of the point cloud data with the VR panorama; and realizes measurement of scale within the VR panorama.

Description

VR (virtual reality) graph generation method with scale measurement and data acquisition device
Technical Field
The invention relates to the technical field of VR (virtual reality), and in particular to a VR map generation method with scale measurement and a data acquisition device.
Background
VR technology has developed rapidly in recent years: it is used for online house viewing in the real estate industry, for reconstructing crime scenes in criminal investigation, for increasing the realism of games, and so on. However, VR content is captured with a panoramic camera and carries no scale information about the environment, which greatly restricts the application of the technology in some industries, especially criminal investigation. In criminal investigation, reconstruction of the crime scene is the key to solving a case. Although current VR technology can capture the appearance of the scene, it cannot provide scale information, so key facts such as the size of footprints, the distance between footprints, the size of blood stains, and how far the perpetrator was from a target cannot be obtained effectively. Such measurements must instead be made manually, which is time-consuming and labor-intensive, requires keeping large amounts of recorded data, and is error-prone; and if new personnel later take over a case, searching and organizing that data is itself no small task. Similarly, in the real estate industry, a customer viewing a house online sees only the appearance of the environment and cannot effectively obtain the dimensions of rooms, windows, and the like, and so cannot truly understand the property.
Existing VR technology mainly captures the appearance of an environment and carries no scale information. It therefore cannot provide comprehensive environmental information for industries such as criminal investigation and real estate, which greatly limits its use.
Disclosure of Invention
To address these defects, the invention provides a data acquisition device that captures the three-dimensional scale information of the environment at the same time as its VR panorama, and then combines the two to generate a scaled VR panorama in which the dimensions of objects in the panorama can be measured.
In order to achieve the above purpose, the specific technical scheme of the invention is as follows:
first, the present invention provides a data acquisition device comprising a panoramic VR camera, a three-dimensional lidar, a tripod, a turntable, and a stepper motor; the panoramic VR camera is mounted at the top of the tripod, the three-dimensional lidar is carried on the turntable mounted on the tripod, and the turntable is driven by the stepper motor.
Based on this data acquisition device, the invention provides a VR map generation method with scale measurement, which comprises the following steps:
S1, acquiring a VR panorama and a set of three-dimensional laser points using the data acquisition device, so that the pixel at every angle of the VR panorama can find a corresponding three-dimensional laser point;
S2, generating a local point cloud map;
S3, generating a depth map that supports scale measurement from the VR panorama and the local point cloud map.
Further, in step S1, the method for ensuring that the pixel at every angle of the VR panorama can find its corresponding three-dimensional laser point is as follows:
selecting a speed of the stepper motor and determining a stepper motor rotation time, wherein:
the stepper motor speed is calculated as follows:
v_M = 360 ÷ W ÷ f_l  (1)
where W is the horizontal pixel count of the VR panorama and f_l is the update frequency of the three-dimensional lidar; this formula effectively ensures that the panorama pixel corresponding to each angle swept by the three-dimensional lidar can find a corresponding three-dimensional laser point;
the rotation time of the stepper motor is calculated as follows:
t_M > 360 / v_M  (2)
where t_M is the duration of motor rotation; this duration effectively ensures that the three-dimensional lidar rotates through more than 360 degrees, so that the pixel at every angle of the VR panorama can find a corresponding three-dimensional laser point.
Preferably, the method for generating the local point cloud map in step S2 is as follows:
S21, taking the first frame rotation angle θ_0 as the reference angle and the pose of the turntable at that moment as the fixed coordinate system, i.e. the origin of the coordinate system of the local point cloud map, all subsequent point clouds being stitched into the local point cloud map with respect to this reference coordinate system;
S22, aligning the timestamps of the current point cloud frame and the angle frame to ensure that the rotation angle and the current point cloud frame were acquired simultaneously;
s23, determining a conversion relation between the current radar frame and a fixed coordinate system according to the relation between the current rotation angle and the reference rotation angle, wherein the conversion relation is shown in the following formula:
in theta t For the current corner frame corner, T t A rotation matrix between the current radar frame and a fixed coordinate system;
after the rotation matrix is obtained, the current point cloud frame is transformed into the fixed coordinate system as follows:
(p_fx, p_fy, p_fz)^T = T_t * (p_x, p_y, p_z)^T  (4)
where (p_fx, p_fy, p_fz) are the coordinates of a point in the fixed coordinate system after transformation and (p_x, p_y, p_z) are its coordinates in the current radar coordinate system;
S24, transforming all points of the current radar frame into the fixed coordinate system by formula (4) of step S23, then stitching the transformed point cloud frame directly into the existing local point cloud map according to its position, and repeating this process until the motor stops rotating, at which point the complete local point cloud map is finished.
Preferably, in step S22, the specific method for aligning the timestamps of the current point cloud frame and the angle frame is as follows:
first, the timestamps of the angle frames in the angle container are compared with the timestamp of the current radar frame to find the first angle frame later than the current radar frame; then the timestamps of that angle frame and of the frame preceding it are compared with the timestamp of the current radar frame, and the angle frame closer in time to the current radar frame is selected as the angle frame actually used.
Preferably, in step S3, the method for generating the depth image that supports scale measurement from the VR panorama and the local point cloud map is as follows:
S31, obtaining the mapping between the local point cloud map coordinates and the spherical VR panorama coordinates;
S32, obtaining the mapping between the spherical VR panorama coordinates and the depth map coordinates;
S33, indirectly obtaining the mapping between the local point cloud map coordinates and the depth image coordinates from S31 and S32, and attaching the depths to the depth image.
Further, the method for obtaining the mapping between the local point cloud map coordinates and the spherical VR panorama coordinates in step S31 is as follows:
s311, converting the point cloud from the fixed coordinate system in the step S22 to the camera coordinate system, wherein the conversion formula is as follows:
(p cx p cy p cz ) Tc T f *(p fx p fy p fz ) T (5)
in (p) cx ,p cy ,p cz ) For a point in the camera coordinate system, c T f the matrix can be obtained by measuring the position relationship between the fixed coordinate system and the camera coordinate system directly through a ruler;
s312, calculating longitude and latitude of points in the point cloud under a longitude and latitude coordinate system, wherein the calculation formula is as follows:
θ longitude =atan2(p y ,p x ) (6)
θ longitude ,θ latitude the longitude and latitude of the point cloud in the spherical longitude and latitude coordinate system are respectively.
Preferably, in step S32, the method for obtaining the mapping between the spherical VR panorama coordinates and the depth map coordinates is as follows:
S321, since the spherical VR panorama is divided into pixels at equal longitude and latitude intervals, the spherical panorama can be unfolded into a two-dimensional planar image, divided at equal longitude intervals in the horizontal direction and at equal latitude intervals in the vertical direction, with the following mapping:
(c_x, c_y)^T = (θ_longitude ÷ d_long, θ_latitude ÷ d_lati)^T  (8)
where (c_x, c_y) are the pixel coordinates in the two-dimensional planar image, and d_long and d_lati are, respectively, the angular resolutions per pixel in the longitude and latitude directions.
Preferably, in step S33, the method for attaching the depths to the depth image is as follows:
the depth of each point in the local point cloud map is calculated as follows:
r = sqrt(p_cx² + p_cy² + p_cz²)  (9)
where r is the depth; each point is processed by this formula and attached to the corresponding pixel of the depth image, thereby completing the attachment of the point cloud to the depth image.
Preferably, in step S3, after the depth map supporting scale measurement is generated, the method for performing scale measurement on it is as follows:
S341, acquiring pixel coordinates and depths:
two points are clicked in the VR panorama and mapped by formula (8) to two pixel coordinates in the depth image, and the depth values attached to those two pixels are read from the depth image according to the pixel coordinates;
s342, restoring the point cloud:
first, mapping to longitude and latitude coordinates according to pixel coordinates as follows:
(θ′ longitude θ′ latitude ) T =(c′x*d long c′ y *d lati ) T (10)
in (θ' longitude ,θ’ latitude ) For mapped longitude and latitude, (c' x ,c’ y ) The pixel coordinates acquired in step S341;
and then restoring the space point according to the longitude and latitude and the depth value:
in the formula (p' x ,p’ y ,p’ z ) R' is the depth value obtained in the process of step S341;
s343, distance measurement:
according to formulas (10) and (11), the point clicked by the user can be converted into a real space
Three-dimensional points, again according to the following formula:
the Euler distance of two points in the two-point space can be obtained, and the measurement of the scale is realized; wherein:
(p’ x1 ,p’ y1 ,p’ z1 ),(p’ x2 ,p’ y2 ,p’ z2 ) Two points in the VR panorama clicked respectively.
Compared with the prior art, the invention has the following advantages:
(1) a portable data acquisition device with strong operability is designed;
(2) fusion of the point cloud data with the VR panorama is realized;
(3) measurement of scale within the VR panorama is realized.
Drawings
FIG. 1 is a schematic view of the data acquisition device of the present invention;
FIG. 2 is a flow chart of local point cloud map generation in the present invention;
FIG. 3 is a flow chart of the time alignment of radar frames and angle frames in the present invention;
FIG. 4 shows the spherical VR panorama and its unfolded view in the present invention.
Detailed Description
In order that those of ordinary skill in the art may readily understand and practice the invention, embodiments of the invention are further described below with reference to the drawings.
Referring to fig. 1, the present invention provides a data acquisition device comprising a panoramic VR camera 1, a three-dimensional lidar 2, a tripod 3, a turntable 4, and a stepper motor 5. The panoramic VR camera 1 is mounted at the top of the tripod 3, the three-dimensional lidar 2 is carried on the turntable 4 mounted on the tripod 3, and the turntable 4 is driven by the stepper motor 5.
Based on this data acquisition device, the invention provides a VR map generation method with scale measurement, comprising the following steps:
s1, acquiring a VR panorama and a three-dimensional laser spot set by using a data acquisition device, so that pixels at each angle of the VR panorama can find corresponding three-dimensional laser spots.
Data acquisition involves selecting the speed of the stepper motor 5 and determining its rotation time so that every pixel in the panorama can find a corresponding laser point in the three-dimensional point cloud map; otherwise, scale information cannot be obtained effectively for some pixel positions. The output speed of the stepper motor 5 is calculated as follows:
v_M = 360 ÷ W ÷ f_l  (1)
where W is the horizontal pixel count of the VR panorama and f_l is the update frequency of the three-dimensional lidar; this formula effectively ensures that the panorama pixel corresponding to each angle swept by the three-dimensional lidar can find a corresponding three-dimensional laser point;
the rotation time of the stepper motor is calculated as follows:
t_M > 360 / v_M  (2)
where t_M is the duration of motor rotation; this duration effectively ensures that the three-dimensional lidar rotates through more than 360 degrees, so that the pixel at every angle of the VR panorama can find a corresponding three-dimensional laser point.
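For illustration, the following is a minimal Python sketch that evaluates formulas (1) and (2) as written; the panorama width W and lidar update frequency f_l used in the example are assumed values, not figures from the patent.

    def stepper_speed(panorama_width_px: int, lidar_update_hz: float) -> float:
        """Formula (1): v_M = 360 / W / f_l, in degrees per second."""
        return 360.0 / panorama_width_px / lidar_update_hz

    def min_rotation_time(v_m: float) -> float:
        """Formula (2): the motor must run longer than 360 / v_M seconds
        so the lidar sweeps through more than a full revolution."""
        return 360.0 / v_m

    # Hypothetical example values: an 8192-pixel-wide panorama and a 10 Hz lidar.
    v_m = stepper_speed(8192, 10.0)
    print(f"v_M = {v_m:.6f} deg/s, run time t_M > {min_rotation_time(v_m):.0f} s")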
S2, generating a local point cloud map.
Specifically, the local point cloud map is generated as shown in FIG. 2:
(1) First, the first frame rotation angle θ_0 is taken as the reference angle, and the pose of the turntable at that moment is set as the fixed coordinate system, i.e. the origin of the coordinate system of the local point cloud map; all subsequent point clouds are stitched into the local point cloud map with respect to this reference coordinate system.
(2) Then the timestamps of the current point cloud frame and the angle frame are aligned to ensure that the rotation angle and the current point cloud frame were acquired simultaneously. The specific alignment method is as follows (see fig. 3):
first, the timestamps of the angle frames in the angle container are compared with the timestamp of the current radar frame to find the first angle frame later than the current radar frame; then the timestamps of that angle frame and of the frame preceding it are compared with the timestamp of the current radar frame, and the angle frame closer in time to the current radar frame is selected as the angle frame actually used.
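This alignment rule can be sketched in Python as follows, assuming the angle container stores (timestamp, angle) pairs sorted by timestamp; the container layout and names are illustrative, not specified by the patent.

    from bisect import bisect_right

    def align_angle_frame(angle_frames: list[tuple[float, float]],
                          radar_stamp: float) -> tuple[float, float]:
        """Pick the first angle frame later than the radar frame, or its
        predecessor, whichever is closer in time to the radar frame."""
        stamps = [t for t, _ in angle_frames]
        i = bisect_right(stamps, radar_stamp)        # first frame strictly later
        if i == 0:
            return angle_frames[0]
        if i == len(angle_frames):
            return angle_frames[-1]
        later, earlier = angle_frames[i], angle_frames[i - 1]
        return later if later[0] - radar_stamp < radar_stamp - earlier[0] else earlier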
(3) After the radar frame and the angle frame are aligned, the transformation between the current radar frame and the fixed coordinate system is determined from the relation between the current rotation angle and the reference rotation angle, as given by formula (3): the rotation matrix T_t between the current radar frame and the fixed coordinate system, corresponding to a rotation by the angle θ_t − θ_0, where θ_t is the rotation angle of the current angle frame.
after the rotation matrix is obtained, the current point cloud frame is transformed into the fixed coordinate system as follows:
(p_fx, p_fy, p_fz)^T = T_t * (p_x, p_y, p_z)^T  (4)
where (p_fx, p_fy, p_fz) are the coordinates of a point in the fixed coordinate system after transformation and (p_x, p_y, p_z) are its coordinates in the current radar coordinate system;
(4) All points of the current radar frame can thus be transformed into the fixed coordinate system by formula (4); the transformed point cloud frame is then stitched directly into the existing local point cloud map according to its position, and this process is repeated until the motor stops rotating, at which point the complete local point cloud map is finished.
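A Python sketch of this stitching step follows. Formula (3) is not reproduced in the patent text, so the rotation axis of T_t is assumed here to be the turntable's z axis; this is an assumption for illustration only.

    import numpy as np

    def rotation_T_t(theta_t_rad: float, theta_0_rad: float) -> np.ndarray:
        """Assumed form of formula (3): rotation by (theta_t - theta_0) about z."""
        c, s = np.cos(theta_t_rad - theta_0_rad), np.sin(theta_t_rad - theta_0_rad)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    def stitch_frame(local_map: list, frame_points: np.ndarray,
                     theta_t_rad: float, theta_0_rad: float) -> None:
        """Apply formula (4) to an (N, 3) radar-frame point array and append
        the transformed points to the local point cloud map."""
        T_t = rotation_T_t(theta_t_rad, theta_0_rad)
        local_map.append(frame_points @ T_t.T)   # (T_t * p)^T for every row p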
S3, generating a depth map that supports scale measurement from the VR panorama and the local point cloud map.
The generation of the depth map comprises three parts: mapping the local point cloud map coordinates to the spherical VR panorama coordinates, mapping the spherical VR panorama coordinates to the depth image coordinates, and attaching the depths to the depth image.
(1) Mapping between the local point cloud map coordinates and the spherical VR panorama coordinates. The VR panoramic camera captures a 360-degree panorama, displayed in the form shown on the left of fig. 4: a complete spherical image whose pixels are divided by longitude and latitude. The point cloud can therefore be mapped to the spherical image pixels through the angle of each point relative to the sphere in space. The mapping is calculated as follows:
A. The point cloud is transformed from the fixed coordinate system of step S2 into the camera coordinate system as follows:
(p_cx, p_cy, p_cz)^T = cT_f * (p_fx, p_fy, p_fz)^T  (5)
where (p_cx, p_cy, p_cz) is the point in the camera coordinate system and cT_f is the transformation from the fixed coordinate system to the camera coordinate system; this matrix can be obtained by directly measuring the positional relationship between the fixed coordinate system and the camera coordinate system with a ruler.
B. The longitude and latitude of each point of the point cloud in the longitude-latitude coordinate system are calculated as follows:
θ_longitude = atan2(p_y, p_x)  (6)
θ_latitude = atan2(p_z, sqrt(p_x² + p_y²))  (7)
where θ_longitude and θ_latitude are, respectively, the longitude and latitude of the point in the spherical longitude-latitude coordinate system. Through these two formulas, the local point cloud map can be mapped to every angle of the longitude-latitude coordinate system, realizing the mapping between the local point cloud map and the VR panorama.
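A Python sketch of formulas (5)-(7) follows. The extrinsic matrix cT_f is measured on the physical rig per the patent, so the identity used below is only a placeholder; and since the original image of formula (7) is not reproduced, the standard spherical latitude is assumed.

    import numpy as np

    c_T_f = np.eye(3)   # placeholder extrinsic; measure the fixed-to-camera relation on the rig

    def to_lat_long(p_fixed: np.ndarray) -> tuple[float, float]:
        """Map a fixed-frame point to spherical longitude/latitude (radians)."""
        p_cx, p_cy, p_cz = c_T_f @ p_fixed                    # formula (5)
        longitude = np.arctan2(p_cy, p_cx)                    # formula (6)
        latitude = np.arctan2(p_cz, np.hypot(p_cx, p_cy))     # assumed formula (7)
        return float(longitude), float(latitude)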
(2) Mapping of spherical VR panorama coordinates and depth map coordinates
The spherical VR panorama is divided into pixels at equal longitude and latitude intervals, so the spherical panorama can be unfolded into the two-dimensional planar image shown on the right of fig. 4, divided at equal longitude intervals in the horizontal direction and at equal latitude intervals in the vertical direction. The mapping is as follows:
(c_x, c_y)^T = (θ_longitude ÷ d_long, θ_latitude ÷ d_lati)^T  (8)
where (c_x, c_y) are the pixel coordinates in the two-dimensional planar image, and d_long and d_lati are, respectively, the angular resolutions per pixel in the longitude and latitude directions. Both can be set manually: the smaller the values, the higher the resolution of the depth image and the better the precision of later measurements. This mapping effectively takes longitude-latitude coordinates in the longitude-latitude coordinate system to the pixel coordinates of a two-dimensional planar image, and that planar image is the required depth image.
(3) Attaching the depths to the depth image
At this point, the mapping between the local point cloud map coordinates and the longitude-latitude coordinates of the spherical panorama, and the mapping between those longitude-latitude coordinates and the depth image coordinates, have both been established, so the mapping between the local point cloud map coordinates and the depth image coordinates is obtained indirectly. Using this relation, the depth of each point of the point cloud can be attached to the depth image. The depth of a point is calculated as follows:
r = sqrt(p_cx² + p_cy² + p_cz²)  (9)
where r is the depth; each point is processed by this formula and attached to the corresponding pixel of the depth image, thereby completing the attachment of the point cloud to the depth image. The depth image is thus finished.
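The attachment loop can be sketched as follows; since the original image of formula (9) is not reproduced in the text, the depth r is assumed here to be the Euclidean norm of the camera-frame point, consistent with the inverse mapping of formula (11).

    import numpy as np

    def attach_depths(points_cam: np.ndarray, width: int, height: int,
                      d_long: float, d_lati: float) -> np.ndarray:
        """Build a depth image from an (N, 3) array of camera-frame points."""
        depth = np.zeros((height, width), dtype=np.float32)
        for p_cx, p_cy, p_cz in points_cam:
            r = float(np.sqrt(p_cx**2 + p_cy**2 + p_cz**2))           # assumed formula (9)
            lon = np.degrees(np.arctan2(p_cy, p_cx))                  # formula (6)
            lat = np.degrees(np.arctan2(p_cz, np.hypot(p_cx, p_cy)))  # assumed formula (7)
            c_x = int(lon / d_long) % width                           # formula (8), wrapped
            c_y = int(lat / d_lati) % height
            depth[c_y, c_x] = r
        return depth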
Scale measurement
Once the depth image is finished, scale measurement can be performed, i.e. the distance between any two points can be measured in the VR panorama. The specific method is as follows:
the process comprises three parts: acquisition of pixel coordinates and depths, restoration of the point cloud, and distance measurement.
(1) Acquisition of pixel coordinates and depths
The user clicks two points in the panorama; these are mapped by formula (8) to two pixel coordinates in the depth image, and the depth values attached to those two pixels are read from the depth image according to the pixel coordinates.
(2) Restoration of the point cloud
First, the pixel coordinates are mapped back to longitude and latitude as follows:
(θ′_longitude, θ′_latitude)^T = (c′_x * d_long, c′_y * d_lati)^T  (10)
where (θ′_longitude, θ′_latitude) are the mapped longitude and latitude and (c′_x, c′_y) are the pixel coordinates acquired in step (1) above.
Then the spatial point is restored from the longitude, latitude, and depth value:
(p′_x, p′_y, p′_z)^T = r′ * (cos θ′_latitude * cos θ′_longitude, cos θ′_latitude * sin θ′_longitude, sin θ′_latitude)^T  (11)
where (p′_x, p′_y, p′_z) is the restored spatial point and r′ is the depth value obtained in step (1) above.
(3) Distance measurement:
By formulas (10) and (11), the points clicked by the user are converted into three-dimensional points in real space; then, by the following formula:
d = sqrt((p′_x1 − p′_x2)² + (p′_y1 − p′_y2)² + (p′_z1 − p′_z2)²)  (12)
the Euclidean distance between the two points in space is obtained, realizing the scale measurement; where (p′_x1, p′_y1, p′_z1) and (p′_x2, p′_y2, p′_z2) are the two points clicked in the VR panorama.
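The whole measurement step can be sketched in Python as follows; because the original images of formulas (11) and (12) are not reproduced in the text, the standard spherical-to-Cartesian inverse and the Euclidean distance are assumed.

    import numpy as np

    def restore_point(c_x: int, c_y: int, r: float,
                      d_long: float, d_lati: float) -> np.ndarray:
        """Formula (10) and assumed formula (11): pixel + depth -> 3D point."""
        lon = np.radians(c_x * d_long)                       # formula (10)
        lat = np.radians(c_y * d_lati)
        return r * np.array([np.cos(lat) * np.cos(lon),      # assumed formula (11)
                             np.cos(lat) * np.sin(lon),
                             np.sin(lat)])

    def measure(p1, p2, depth_img: np.ndarray, d_long: float, d_lati: float) -> float:
        """Distance between two clicked pixels, each given as (c_x, c_y)."""
        r1 = float(depth_img[p1[1], p1[0]])                  # read attached depths
        r2 = float(depth_img[p2[1], p2[0]])
        q1 = restore_point(p1[0], p1[1], r1, d_long, d_lati)
        q2 = restore_point(p2[0], p2[1], r2, d_long, d_lati)
        return float(np.linalg.norm(q1 - q2))                # assumed formula (12)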
The foregoing examples illustrate only a few embodiments of the invention in detail, but they should not be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the invention, and all of these fall within the scope of protection of the invention. Accordingly, the scope of protection of the invention shall be determined by the appended claims.

Claims (8)

1. A VR map generation method with scale measurement, characterized by comprising the following steps:
s1, acquiring a VR panorama and a three-dimensional laser spot set by using a data acquisition device, so that pixels at each angle of the VR panorama can find corresponding three-dimensional laser spots; wherein, the data acquisition device includes: the panoramic VR camera is arranged at the top of the tripod, the three-dimensional laser radar carried on the turntable is arranged on the tripod, and the turntable is driven by the stepping motor;
s2, generating a local point cloud map;
s3, generating a depth map capable of performing scale measurement according to the VR panorama and the local point cloud map;
wherein in step S1, the method for ensuring that the pixel at every angle of the VR panorama can find its corresponding three-dimensional laser point is as follows:
selecting a speed of the stepper motor and determining a stepper motor rotation time, wherein:
the stepper motor speed is calculated as follows:
v_M = 360 ÷ W ÷ f_l  (1)
where W is the horizontal pixel count of the VR panorama and f_l is the update frequency of the three-dimensional lidar; this formula effectively ensures that the panorama pixel corresponding to each angle swept by the three-dimensional lidar can find a corresponding three-dimensional laser point;
the rotation time of the stepper motor is calculated as follows:
t_M > 360 / v_M  (2)
where t_M is the duration of motor rotation; this duration effectively ensures that the three-dimensional lidar rotates through more than 360 degrees, so that the pixel at every angle of the VR panorama can find a corresponding three-dimensional laser point.
2. The VR map generation method with scale measurement according to claim 1, wherein the method for generating the local point cloud map in step S2 is as follows:
S21, taking the first frame rotation angle θ_0 as the reference angle and the pose of the turntable at that moment as the fixed coordinate system, i.e. the origin of the coordinate system of the local point cloud map, all subsequent point clouds being stitched into the local point cloud map with respect to this reference coordinate system;
S22, aligning the timestamps of the current point cloud frame and the angle frame to ensure that the rotation angle and the current point cloud frame were acquired simultaneously;
s23, determining a conversion relation between the current radar frame and a fixed coordinate system according to the relation between the current rotation angle and the reference rotation angle, wherein the conversion relation is shown in the following formula:
in theta t For the current corner frame corner, T t A rotation matrix between the current radar frame and a fixed coordinate system;
after the rotation matrix is acquired, converting the current point cloud frame into a fixed coordinate system, wherein the conversion method comprises the following steps:
(p fx p fy p fz ) T =T t *(p x p y p z ) T (4)
in (p) fx ,p fy ,p fz ) For the coordinates of the point under the fixed coordinate system after conversion, (p) x ,p y ,p z ) The coordinates of the point in the current radar coordinate system;
s24, converting all the current radar frame point clouds into a fixed coordinate system according to the conversion relation of the formula (4) in the step S23, then splicing the converted current frame point clouds into the existing local point cloud map directly according to the position relation, and finally recycling the process until the motor stops rotating, so that the complete local point cloud map is completed.
3. The VR map generation method with scale measurement according to claim 2, wherein in step S22, the specific method for aligning the timestamps of the current point cloud frame and the angle frame is as follows:
first, the timestamps of the angle frames in the angle container are compared with the timestamp of the current radar frame to find the first angle frame later than the current radar frame; then the timestamps of that angle frame and of the frame preceding it are compared with the timestamp of the current radar frame, and the angle frame closer in time to the current radar frame is selected as the angle frame actually used.
4. The VR map generation method with scale measurement according to claim 3, wherein in step S3, the method for generating the depth image that supports scale measurement from the VR panorama and the local point cloud map is as follows:
S31, obtaining the mapping between the local point cloud map coordinates and the spherical VR panorama coordinates;
S32, obtaining the mapping between the spherical VR panorama coordinates and the depth map coordinates;
S33, indirectly obtaining the mapping between the local point cloud map coordinates and the depth image coordinates from S31 and S32, and attaching the depths to the depth image.
5. The VR map generation method with scale measurement according to claim 4, wherein the method for obtaining the mapping between the local point cloud map coordinates and the spherical VR panorama coordinates in step S31 is as follows:
S311, transforming the point cloud from the fixed coordinate system of step S22 into the camera coordinate system as follows:
(p_cx, p_cy, p_cz)^T = cT_f * (p_fx, p_fy, p_fz)^T  (5)
where (p_cx, p_cy, p_cz) is the point in the camera coordinate system and cT_f is the transformation from the fixed coordinate system to the camera coordinate system, obtainable by directly measuring the positional relationship between the fixed coordinate system and the camera coordinate system with a ruler;
S312, calculating the longitude and latitude of each point of the point cloud in the longitude-latitude coordinate system as follows:
θ_longitude = atan2(p_y, p_x)  (6)
θ_latitude = atan2(p_z, sqrt(p_x² + p_y²))  (7)
where θ_longitude and θ_latitude are, respectively, the longitude and latitude of the point in the spherical longitude-latitude coordinate system.
6. The VR map generation method with scale measurement according to claim 5, wherein in step S32, the method for obtaining the mapping between the spherical VR panorama coordinates and the depth map coordinates is as follows:
S321, the spherical VR panorama is divided into pixels at equal longitude and latitude intervals, so the spherical panorama can be unfolded into a two-dimensional planar image, divided at equal longitude intervals in the horizontal direction and at equal latitude intervals in the vertical direction, with the following mapping:
(c_x, c_y)^T = (θ_longitude ÷ d_long, θ_latitude ÷ d_lati)^T  (8)
where (c_x, c_y) are the pixel coordinates in the two-dimensional planar image, and d_long and d_lati are, respectively, the angular resolutions per pixel in the longitude and latitude directions.
7. The VR map generation method with scale measurement according to claim 6, wherein in step S33, the method for attaching the depths to the depth image is as follows:
the depth of each point in the local point cloud map is calculated as follows:
r = sqrt(p_cx² + p_cy² + p_cz²)  (9)
where r is the depth; each point is processed by this formula and attached to the corresponding pixel of the depth image, thereby completing the attachment of the point cloud to the depth image.
8. The VR map generation method with scale measurement according to claim 7, wherein in step S3, after the depth map supporting scale measurement is generated, the method for scale measurement is as follows:
S341, acquiring pixel coordinates and depths:
two points are clicked in the VR panorama and mapped by formula (8) to two pixel coordinates in the depth image, and the depth values attached to those two pixels are read from the depth image according to the pixel coordinates;
S342, restoring the point cloud:
first, the pixel coordinates are mapped back to longitude and latitude as follows:
(θ′_longitude, θ′_latitude)^T = (c′_x * d_long, c′_y * d_lati)^T  (10)
where (θ′_longitude, θ′_latitude) are the mapped longitude and latitude and (c′_x, c′_y) are the pixel coordinates acquired in step S341;
then the spatial point is restored from the longitude, latitude, and depth value:
(p′_x, p′_y, p′_z)^T = r′ * (cos θ′_latitude * cos θ′_longitude, cos θ′_latitude * sin θ′_longitude, sin θ′_latitude)^T  (11)
where (p′_x, p′_y, p′_z) is the restored spatial point and r′ is the depth value obtained in step S341;
S343, distance measurement:
by formulas (10) and (11), the points clicked by the user are converted into three-dimensional points in real space; then, by the following formula:
d = sqrt((p′_x1 − p′_x2)² + (p′_y1 − p′_y2)² + (p′_z1 − p′_z2)²)  (12)
the Euclidean distance between the two points in space is obtained, realizing the scale measurement; where (p′_x1, p′_y1, p′_z1) and (p′_x2, p′_y2, p′_z2) are the two points clicked in the VR panorama.
CN201911362662.2A 2019-12-25 2019-12-25 VR (virtual reality) graph generation method with scale measurement and data acquisition device Active CN111145095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911362662.2A CN111145095B (en) 2019-12-25 2019-12-25 VR (virtual reality) graph generation method with scale measurement and data acquisition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911362662.2A CN111145095B (en) 2019-12-25 2019-12-25 VR (virtual reality) graph generation method with scale measurement and data acquisition device

Publications (2)

Publication Number Publication Date
CN111145095A CN111145095A (en) 2020-05-12
CN111145095B (en) 2023-10-10

Family

ID=70520228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911362662.2A Active CN111145095B (en) 2019-12-25 2019-12-25 VR (virtual reality) graph generation method with scale measurement and data acquisition device

Country Status (1)

Country Link
CN (1) CN111145095B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020962A (en) * 2012-11-27 2013-04-03 武汉海达数云技术有限公司 Rapid level detection method of mouse applied to three-dimensional panoramic picture
CN103729883A (en) * 2013-12-30 2014-04-16 浙江大学 Three-dimensional environmental information collection and reconstitution system and method
CN106959080A (en) * 2017-04-10 2017-07-18 上海交通大学 A kind of large complicated carved components three-dimensional pattern optical measuring system and method
CN108288292A (en) * 2017-12-26 2018-07-17 中国科学院深圳先进技术研究院 A kind of three-dimensional rebuilding method, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9583074B2 (en) * 2011-07-20 2017-02-28 Google Inc. Optimization of label placements in street level images


Also Published As

Publication number Publication date
CN111145095A (en) 2020-05-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230904

Address after: 518000 Room 601, building r2-b, Gaoxin industrial village, No. 020, Gaoxin South seventh Road, Gaoxin community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Applicant after: Shenzhen Nuoda Communication Technology Co.,Ltd.

Address before: 518000 a501, 5th floor, Shanshui building, Nanshan cloud Valley Innovation Industrial Park, 4093 Liuxian Avenue, Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN WUJING INTELLIGENT ROBOT Co.,Ltd.

GR01 Patent grant