VR (virtual reality) graph generation method with scale measurement and data acquisition device
Technical Field
The invention relates to the technical field of VR (virtual reality), in particular to a VR chart generation method with scale measurement and a data acquisition device.
Background
VR technology has developed rapidly in recent years: it is used for online house viewing in the real estate industry, for reconstructing crime scenes in criminal investigation, for increasing the realism of games, and so on. However, VR content is collected with a panoramic camera and lacks scale information about the environment, which greatly restricts the application of the technology in some industries, criminal investigation in particular. In criminal investigation, reconstruction of the crime scene is the key to solving a case, but although current VR technology can capture the scene information, it cannot provide scale information, so key clues such as the size of footprints, the distance between footprints, the size of blood traces, and how far the perpetrator was from a target object cannot be obtained effectively. Such measurements must instead be made manually, which is time-consuming and labor-intensive, produces large amounts of archived data, and is error-prone; if new investigators later take over a case, the search must be continued, and retrieving and organizing the data is no small task. Likewise, in the real estate industry, a customer viewing a house online can only see the scene information of the environment; the dimensions of each room, window, and so on cannot be obtained effectively, so the customer cannot truly understand the property.
The existing VR technology mainly acquires scene information of an environment and carries no scale information. It therefore cannot provide comprehensive environmental information in industries such as criminal investigation and real estate, which greatly limits the use of VR technology.
Disclosure of Invention
The invention aims at the above defects: a data acquisition device is designed to collect three-dimensional scale information of the environment while the VR panorama information of the environment is acquired; the VR panorama information is then combined with the three-dimensional scale information to generate a scaled VR panorama and to measure the scale of objects in the panorama.
In order to achieve the above purpose, the specific technical scheme of the invention is as follows:
First, the present invention provides a data acquisition device, which includes a panoramic VR camera, a three-dimensional laser radar, a tripod, a turntable, and a stepping motor. The panoramic VR camera is arranged at the top of the tripod; the three-dimensional laser radar is carried on the turntable on the tripod, and the turntable is driven by the stepping motor.
The invention further provides a VR chart generation method with scale measurement based on the data acquisition device, which comprises the following steps:
S1, acquiring a VR panorama and a three-dimensional laser point set by using the data acquisition device, so that the pixel at each angle of the VR panorama can find a corresponding three-dimensional laser point;
S2, generating a local point cloud map;
S3, generating a depth map capable of performing scale measurement according to the VR panorama and the local point cloud map.
Further, in step S1, the method for enabling the pixel at each angle of the VR panorama to find a corresponding three-dimensional laser point is as follows:
selecting the speed of the stepping motor and determining the rotation time of the stepping motor, wherein:
the speed of the stepping motor is calculated as follows:
v_M = (360 / W) · f_l    (1)

where W is the horizontal pixel count of the VR panorama and f_l is the update frequency of the three-dimensional radar; this formula effectively ensures that the panorama pixel corresponding to each angle swept by the three-dimensional laser radar can find a corresponding three-dimensional laser point;
the rotation duration of the stepping motor is determined as follows:
t_M > 360 / v_M    (2)
where t_M is the duration of motor rotation; this duration effectively ensures that the three-dimensional radar is rotated through more than 360 degrees, so that the pixel at each angle of the VR panorama can find a corresponding three-dimensional laser point.
Preferably, the method for generating the local point cloud map in the step S2 is as follows:
S21, taking the rotation angle θ_0 of the first frame as the reference angle, and setting the pose of the turntable at that moment as the fixed coordinate system, i.e. the origin of the coordinate system of the local point cloud map; all subsequent point clouds are spliced into the local point cloud map based on this reference coordinate system;
s22, aligning the current frame point cloud with the corner frame time stamp to ensure that the corner and the current frame point cloud are acquired simultaneously;
S23, determining the conversion relation between the current radar frame and the fixed coordinate system according to the relation between the current rotation angle and the reference rotation angle, as shown in the following formula (a rotation about the turntable axis, here taken as the z-axis of the fixed coordinate system):

T_t = [ cos(θ_t − θ_0)   −sin(θ_t − θ_0)   0
        sin(θ_t − θ_0)    cos(θ_t − θ_0)   0
        0                 0                1 ]    (3)

where θ_t is the corner of the current corner frame and T_t is the rotation matrix between the current radar frame and the fixed coordinate system;
after the rotation matrix is acquired, converting the current point cloud frame into a fixed coordinate system, wherein the conversion method comprises the following steps:
(p_fx  p_fy  p_fz)^T = T_t · (p_x  p_y  p_z)^T    (4)
where (p_fx, p_fy, p_fz) are the coordinates of the point in the fixed coordinate system after conversion and (p_x, p_y, p_z) are the coordinates of the point in the current radar coordinate system;
S24, converting all points of the current radar frame into the fixed coordinate system according to the conversion relation of formula (4) in step S23, then splicing the converted current frame point cloud directly into the existing local point cloud map according to the position relation, and finally repeating this process until the motor stops rotating, whereupon the complete local point cloud map is finished.
Preferably, in the step S22, the specific method for aligning the current frame point cloud with the corner frame timestamp is as follows:
firstly, the corner-frame time stamps in the corner container are compared with the time stamp of the current radar frame, and the first corner frame whose time stamp is larger than that of the current radar frame is taken as the current corner frame; then the time stamps of the current corner frame and of the frame preceding it are compared with the time stamp of the current radar frame, and the corner frame closer to the current radar frame is selected as the current corner frame finally used.
Preferably, in the step S3, the method for generating the depth image capable of performing scale measurement according to the VR panorama and the local point cloud map is as follows:
S31, obtaining the mapping between the local point cloud map coordinates and the spherical VR panorama coordinates;
S32, obtaining the mapping between the spherical VR panorama coordinates and the depth map coordinates;
S33, indirectly obtaining the mapping relation between the local point cloud map coordinates and the depth image coordinates according to S31 and S32, and attaching the depth in the depth image.
Further, the method for obtaining the mapping between the local point cloud map coordinates and the spherical VR panorama coordinates in step S31 is as follows:
S311, converting the point cloud from the fixed coordinate system of step S21 to the camera coordinate system, wherein the conversion formula is as follows:
(p_cx  p_cy  p_cz)^T = ᶜT_f · (p_fx  p_fy  p_fz)^T    (5)

where (p_cx, p_cy, p_cz) are the coordinates of the point in the camera coordinate system, and ᶜT_f is the matrix from the fixed coordinate system to the camera coordinate system, which can be obtained by directly measuring the positional relationship between the two coordinate systems with a ruler;
S312, calculating the longitude and latitude of the points in the point cloud under the longitude and latitude coordinate system, wherein the calculation formulas are as follows:
θ_longitude = atan2(p_cy, p_cx)    (6)
θ_latitude = atan2(p_cz, √(p_cx² + p_cy²))    (7)

where θ_longitude and θ_latitude are respectively the longitude and latitude of the point in the spherical longitude and latitude coordinate system.
Preferably, in the step S32, the method for obtaining the mapping between the spherical VR panorama coordinates and the depth map coordinates is as follows:
S321, since the spherical VR panorama is divided into pixels at equal longitude and latitude intervals, the spherical panorama can be unfolded into a two-dimensional planar image, divided at equal longitude intervals in the horizontal direction and at equal latitude intervals in the vertical direction, with the following mapping relation:
(c_x  c_y)^T = (θ_longitude ÷ d_long   θ_latitude ÷ d_lati)^T    (8)
where (c_x, c_y) are the pixel coordinates in the two-dimensional planar image, and d_long and d_lati are respectively the angular resolutions (degrees per pixel) in the longitude and latitude directions.
Preferably, in step S33, the method for attaching the depth in the depth image is as follows:
the depth of each point in the local point cloud map is calculated according to the following formula:

r = √(p_cx² + p_cy² + p_cz²)    (9)

where r is the depth; each point is calculated by this formula and attached to the corresponding pixel of the depth image, so that the attachment of the point cloud to the depth image is realized.
Preferably, in step S3, after generating the depth map capable of performing scale measurement, the method for performing scale measurement on the depth map is as follows:
S341, acquiring pixel coordinates and depth:
clicking two points in the VR panorama, mapping them to two pixel coordinates in the depth image through formula (8), and reading the depth values attached to the two pixels in the depth image according to the two pixel coordinates;
S342, restoring the point cloud:
first, mapping to longitude and latitude coordinates according to pixel coordinates as follows:
(θ′_longitude  θ′_latitude)^T = (c′_x · d_long   c′_y · d_lati)^T    (10)

where (θ′_longitude, θ′_latitude) are the mapped longitude and latitude, and (c′_x, c′_y) are the pixel coordinates acquired in step S341;
and then restoring the space point according to the longitude, latitude, and depth value:

p′_x = r′ · cos(θ′_latitude) · cos(θ′_longitude)
p′_y = r′ · cos(θ′_latitude) · sin(θ′_longitude)
p′_z = r′ · sin(θ′_latitude)    (11)

where (p′_x, p′_y, p′_z) are the coordinates of the restored space point and r′ is the depth value obtained in step S341;
S343, distance measurement:
according to formulas (10) and (11), the points clicked by the user can be converted into three-dimensional points in real space; then, according to the following formula:

d = √((p′_x1 − p′_x2)² + (p′_y1 − p′_y2)² + (p′_z1 − p′_z2)²)    (12)

the Euclidean distance between the two points in space can be obtained, realizing the measurement of the scale; wherein (p′_x1, p′_y1, p′_z1) and (p′_x2, p′_y2, p′_z2) are respectively the two points clicked in the VR panorama.
Compared with the prior art, the invention has the advantages that:
(1) A set of portable data acquisition device is designed, with strong operability;
(2) Fusion of the point cloud data and the VR panorama is realized;
(3) Measurement of scale in the VR chart is realized.
Drawings
FIG. 1 is a schematic view of a data acquisition device of the present invention;
FIG. 2 is a flow chart of the local point cloud map generation in the present invention;
FIG. 3 is a flow chart of the time alignment between radar frames and corner frames in the present invention;
FIG. 4 is a VR panorama and an expanded view of the present invention.
Detailed Description
In order that those of ordinary skill in the art will readily understand and practice the invention, embodiments of the invention will be further described with reference to the drawings.
Referring to FIG. 1, the present invention provides a data acquisition device, which includes a panoramic VR camera 1, a three-dimensional laser radar 2, a tripod 3, a turntable 4, and a stepping motor 5. The panoramic VR camera 1 is arranged at the top of the tripod 3; the three-dimensional laser radar 2 is carried on the turntable 4 on the tripod 3, and the turntable 4 is driven by the stepping motor 5.
Based on the data acquisition device, the invention provides a VR chart generation method with scale measurement, which comprises the following steps:
S1, acquiring a VR panorama and a three-dimensional laser point set by using the data acquisition device, so that the pixel at each angle of the VR panorama can find a corresponding three-dimensional laser point.
The data acquisition process includes selecting the speed of the stepping motor 5 and determining the rotation time of the stepping motor 5, so that each pixel in the panorama can find a corresponding laser point in the three-dimensional point cloud map; otherwise, some pixel positions cannot effectively acquire scale information. The speed of the stepping motor 5 is calculated as follows:
v_M = (360 / W) · f_l    (1)

where W is the horizontal pixel count of the VR panorama and f_l is the update frequency of the three-dimensional radar; this formula effectively ensures that the panorama pixel corresponding to each angle swept by the three-dimensional laser radar can find a corresponding three-dimensional laser point;
the rotation duration of the stepping motor is determined as follows:
t_M > 360 / v_M    (2)
where t_M is the duration of motor rotation; this duration effectively ensures that the three-dimensional radar is rotated through more than 360 degrees, so that the pixel at each angle of the VR panorama can find a corresponding three-dimensional laser point.
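The timing choice above can be sketched in a few lines of Python. The panorama width W = 8192 pixels and lidar update rate f_l = 10 Hz are example values assumed here, not taken from the source:

```python
def motor_speed_deg_per_s(panorama_width_px: int, lidar_update_hz: float) -> float:
    # Formula (1): the turntable advances one panorama pixel column per
    # lidar update, i.e. (360 / W) * f_l degrees per second.
    return 360.0 / panorama_width_px * lidar_update_hz

def min_rotation_time_s(speed_deg_per_s: float) -> float:
    # Formula (2): the motor must run longer than one full revolution.
    return 360.0 / speed_deg_per_s

v_m = motor_speed_deg_per_s(8192, 10.0)   # example values, not from the source
t_min = min_rotation_time_s(v_m)
```

With these assumed values the turntable turns at about 0.44 degrees per second, so one full scan takes a little under fourteen minutes.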
S2, generating a local point cloud map.
Specifically, the local point cloud map generation process is as shown in FIG. 2.
(1) First, the rotation angle θ_0 of the first frame is taken as the reference angle, and the pose of the turntable at that moment is set as the fixed coordinate system, i.e. the origin of the coordinate system of the local point cloud map; all subsequent point clouds are spliced into the local point cloud map based on this reference coordinate system;
(2) Then the current frame point cloud is aligned with the corner-frame time stamp to ensure that the corner and the current frame point cloud are acquired simultaneously; the specific alignment method is as follows (see FIG. 3):
firstly, the corner-frame time stamps in the corner container are compared with the time stamp of the current radar frame, and the first corner frame whose time stamp is larger than that of the current radar frame is taken as the current corner frame; then the time stamps of the current corner frame and of the frame preceding it are compared with the time stamp of the current radar frame, and the corner frame closer to the current radar frame is selected as the current corner frame finally used.
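The two-step selection rule just described can be sketched with the standard-library bisect module; the container layout (a sorted list of time stamps) is an assumption for illustration:

```python
import bisect

def align_corner_frame(corner_stamps: list[float], radar_stamp: float) -> int:
    """Pick the corner frame whose time stamp best matches the radar frame.

    corner_stamps must be sorted ascending. Returns the index of the
    chosen corner frame, following the rule described in the text.
    """
    # Step 1: find the first corner frame strictly later than the radar frame.
    i = bisect.bisect_right(corner_stamps, radar_stamp)
    if i == 0:
        return 0                       # radar frame precedes all corner frames
    if i == len(corner_stamps):
        return len(corner_stamps) - 1  # radar frame follows all corner frames
    # Step 2: compare that frame and its predecessor; keep the nearer one.
    before, after = corner_stamps[i - 1], corner_stamps[i]
    return i - 1 if radar_stamp - before <= after - radar_stamp else i
```

For example, with corner stamps [0.0, 0.1, 0.2, 0.3], a radar frame stamped 0.17 selects the frame at 0.2, while one stamped 0.12 selects the frame at 0.1.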
(3) After the radar frame and the corner frame are aligned, the conversion relation between the current radar frame and the fixed coordinate system is determined according to the relation between the current corner and the reference corner, as shown in the following formula (a rotation about the turntable axis, here taken as the z-axis of the fixed coordinate system):

T_t = [ cos(θ_t − θ_0)   −sin(θ_t − θ_0)   0
        sin(θ_t − θ_0)    cos(θ_t − θ_0)   0
        0                 0                1 ]    (3)

where θ_t is the corner of the current corner frame and T_t is the rotation matrix between the current radar frame and the fixed coordinate system;
after the rotation matrix is acquired, converting the current point cloud frame into a fixed coordinate system, wherein the conversion method comprises the following steps:
(p_fx  p_fy  p_fz)^T = T_t · (p_x  p_y  p_z)^T    (4)
where (p_fx, p_fy, p_fz) are the coordinates of the point in the fixed coordinate system after conversion and (p_x, p_y, p_z) are the coordinates of the point in the current radar coordinate system;
(4) All points of the current radar frame can be converted into the fixed coordinate system according to the conversion relation of formula (4); the converted current frame point cloud is then spliced directly into the existing local point cloud map according to the position relation; finally, this process is repeated until the motor stops rotating, whereupon the complete local point cloud map is finished.
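The transform-and-splice loop of formulas (3) and (4) can be sketched with NumPy. The z-axis rotation assumed for T_t follows the reconstruction above; the N x 3 array layout for a radar frame is an assumption for illustration:

```python
import numpy as np

def turntable_rotation(theta_t_deg: float, theta_0_deg: float) -> np.ndarray:
    """Rotation matrix T_t of formula (3), assuming the turntable spins
    about the z-axis of the fixed coordinate system."""
    d = np.deg2rad(theta_t_deg - theta_0_deg)
    c, s = np.cos(d), np.sin(d)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def splice_frame(local_map: np.ndarray, frame_points: np.ndarray,
                 theta_t_deg: float, theta_0_deg: float) -> np.ndarray:
    """Formula (4): transform an N x 3 radar frame into the fixed
    coordinate system, then append it to the local point cloud map."""
    t = turntable_rotation(theta_t_deg, theta_0_deg)
    fixed = frame_points @ t.T
    return fixed if local_map.size == 0 else np.vstack([local_map, fixed])
```

Calling splice_frame once per aligned radar frame until the motor stops accumulates the complete local point cloud map.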
And S3, generating a depth map capable of performing scale measurement according to the VR panorama and the local point cloud map.
The generation of the depth map comprises three parts: mapping between the local point cloud map coordinates and the spherical VR panorama coordinates, mapping between the spherical VR panorama coordinates and the depth image coordinates, and attachment of the depth in the depth image.
(1) Mapping between the local point cloud map coordinates and the spherical VR panorama coordinates. The VR panoramic camera captures a 360-degree panorama, displayed in the form shown in the left diagram of FIG. 4: a complete spherical image whose pixels are divided by longitude and latitude. The point cloud can therefore be mapped to the spherical image pixels through the angle information of each point relative to the sphere in space. The mapping is calculated as follows:
A. converting the point cloud from the fixed coordinate system in step S2 to the camera coordinate system, the conversion formula is as follows:
(p_cx  p_cy  p_cz)^T = ᶜT_f · (p_fx  p_fy  p_fz)^T    (5)

where (p_cx, p_cy, p_cz) are the coordinates of the point in the camera coordinate system, and ᶜT_f is the matrix from the fixed coordinate system to the camera coordinate system, which can be obtained by directly measuring the positional relationship between the two coordinate systems with a ruler;
B. the longitude and latitude of the point in the point cloud under the longitude and latitude coordinate system are calculated, and the calculation formula is as follows:
θ_longitude = atan2(p_cy, p_cx)    (6)
θ_latitude = atan2(p_cz, √(p_cx² + p_cy²))    (7)

where θ_longitude and θ_latitude are respectively the longitude and latitude of the point in the spherical longitude and latitude coordinate system. Through these formulas, the local point cloud map can be mapped to each angle of the longitude and latitude coordinate system, realizing the mapping between the local point cloud map and the VR panorama.
(2) Mapping of spherical VR panorama coordinates and depth map coordinates
The spherical VR panorama is divided into pixels at equal longitude and latitude intervals, so the spherical panorama can be unfolded into the two-dimensional planar image shown on the right side of FIG. 4, divided at equal longitude intervals in the horizontal direction and at equal latitude intervals in the vertical direction. The mapping relationship is as follows:
(c_x  c_y)^T = (θ_longitude ÷ d_long   θ_latitude ÷ d_lati)^T    (8)
where (c_x, c_y) are the pixel coordinates in the two-dimensional planar image, and d_long and d_lati are respectively the angular resolutions (degrees per pixel) in the longitude and latitude directions. These can be set manually: the smaller the values, the higher the resolution of the depth image and the higher the precision of later measurements. This mapping relation effectively maps longitude and latitude coordinates in the longitude and latitude coordinate system to the pixel coordinates of a two-dimensional planar image, and this planar image is the required depth image.
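Formula (8) itself is a one-line scaling; a minimal sketch, with angles in degrees and the (unrounded) pixel coordinates returned as floats:

```python
def latlong_to_pixel(longitude_deg: float, latitude_deg: float,
                     d_long: float, d_lati: float) -> tuple[float, float]:
    """Formula (8): divide each angle by its per-pixel angular
    resolution to obtain depth-image pixel coordinates."""
    return longitude_deg / d_long, latitude_deg / d_lati
```

With a resolution of one degree per pixel in both directions, a point at longitude 90 and latitude 45 degrees maps to pixel (90, 45); halving d_long and d_lati doubles the depth-image resolution, matching the precision remark above.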
(3) Attachment of depth in depth images
So far, the mapping relation between the local point cloud map coordinates and the longitude and latitude coordinates of the spherical panorama and the mapping relation between the longitude and latitude coordinates and the depth image coordinates are established, so that the mapping relation between the local point cloud map coordinates and the depth image coordinates is indirectly obtained. The depth of each point in the point cloud can then be attached to the depth image using this relationship. The depth of the point is calculated as follows:
r = √(p_cx² + p_cy² + p_cz²)    (9)

where r is the depth; each point is calculated by this formula and attached to the corresponding pixel of the depth image, so that the attachment of the point cloud to the depth image is realized. Thus, the depth image is completed.
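A sketch of the whole attachment step, chaining formulas (6) through (9). Shifting the angles so that pixel (0, 0) corresponds to longitude −180 and latitude −90 degrees is an assumed convention, not stated in the source:

```python
import numpy as np

def attach_depths(points_cam, d_long: float, d_lati: float,
                  width: int, height: int) -> np.ndarray:
    """Build a depth image from camera-frame points.

    Each point's depth r (formula (9)) is written at the pixel given by
    formulas (6)-(8); angles are shifted so pixel (0, 0) sits at
    longitude -180, latitude -90 degrees (an assumed convention).
    """
    depth = np.zeros((height, width))
    for x, y, z in points_cam:
        r = np.sqrt(x * x + y * y + z * z)               # formula (9)
        lon = np.degrees(np.arctan2(y, x)) + 180.0       # formula (6)
        lat = np.degrees(np.arctan2(z, np.hypot(x, y))) + 90.0  # formula (7)
        cx = min(int(lon / d_long), width - 1)           # formula (8)
        cy = min(int(lat / d_lati), height - 1)
        depth[cy, cx] = r
    return depth
```

A real implementation would also resolve collisions when several points land on the same pixel (e.g. keep the nearest), which the text leaves unspecified.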
Scale measurement
After the depth image is completed, the scale measurement can be performed, namely, the distance between any two points can be measured in the VR panorama, and the specific method is as follows:
the process is divided into three aspects: pixel coordinate and depth acquisition, point cloud restoration, and distance measurement
(1) Acquisition of pixel coordinates and depth
The user clicks two points in the panorama; these are mapped to two pixel coordinates in the depth image through formula (8), and the depth values attached to the two pixels in the depth image are read according to the two pixel coordinates.
(2) Restoration of the point cloud
First, mapping to longitude and latitude coordinates according to pixel coordinates as follows:
(θ′_longitude  θ′_latitude)^T = (c′_x · d_long   c′_y · d_lati)^T    (10)

where (θ′_longitude, θ′_latitude) are the mapped longitude and latitude, and (c′_x, c′_y) are the pixel coordinates acquired in step S341.
And then restoring the space point according to the longitude and latitude and the depth value:
in the formula (p' x ,p’ y ,p’ z ) R' is the depth value obtained in the process of step S341;
(3) Distance measurement:
According to formulas (10) and (11), the points clicked by the user can be converted into three-dimensional points in real space; then, according to the following formula:

d = √((p′_x1 − p′_x2)² + (p′_y1 − p′_y2)² + (p′_z1 − p′_z2)²)    (12)

the Euclidean distance between the two points in space can be obtained, realizing the measurement of the scale; wherein (p′_x1, p′_y1, p′_z1) and (p′_x2, p′_y2, p′_z2) are respectively the two points clicked in the VR panorama.
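The restoration and distance steps above can be sketched as follows. Formulas (11) and (12) follow the reconstructions given in the text, and pixel (0, 0) is taken as longitude/latitude zero for illustration:

```python
import math

def restore_point(c_x: float, c_y: float, r: float,
                  d_long: float, d_lati: float) -> tuple[float, float, float]:
    """Formulas (10) and (11): pixel coordinates plus depth back to a
    3D point, taking pixel (0, 0) as longitude/latitude zero."""
    lon = math.radians(c_x * d_long)             # formula (10)
    lat = math.radians(c_y * d_lati)
    return (r * math.cos(lat) * math.cos(lon),   # formula (11)
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def scale_between(p1, p2) -> float:
    """Formula (12): Euclidean distance between two restored points."""
    return math.dist(p1, p2)
```

For example, two clicked pixels restored to (2, 0, 0) and (0, 2, 0) are measured as √8 ≈ 2.83 units apart.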
The foregoing examples illustrate only a few embodiments of the invention in detail, but they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the invention, and these all fall within the protection scope of the invention. Accordingly, the scope of protection of the present invention shall be determined by the appended claims.