CN113886510A - Terminal interaction method, device, equipment and storage medium - Google Patents

Terminal interaction method, device, equipment and storage medium Download PDF

Info

Publication number
CN113886510A
Authority
CN
China
Prior art keywords
point cloud
map
maps
terminal
splicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111097732.3A
Other languages
Chinese (zh)
Inventor
齐越
张瑞韩
王君义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peng Cheng Laboratory
Original Assignee
Peng Cheng Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peng Cheng Laboratory filed Critical Peng Cheng Laboratory
Priority to CN202111097732.3A priority Critical patent/CN113886510A/en
Publication of CN113886510A publication Critical patent/CN113886510A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the field of terminal interaction, and in particular to a terminal interaction method, device, equipment, and storage medium. According to the invention, the point cloud maps corresponding to the individual terminals are stitched together to obtain a complete point cloud splicing map. The stitched point cloud maps still keep their own independent coordinate systems, but position conversion data information between the maps can be obtained from each map's position within the point cloud splicing map, so that the points of any two point cloud maps are placed in one-to-one correspondence. One terminal can then use the position conversion data information to operate in another terminal's point cloud map, and that terminal can likewise operate in the map where the first terminal is located, realizing interaction among multiple terminals; multi-terminal interaction can improve the efficiency of interaction among users.

Description

Terminal interaction method, device, equipment and storage medium
Technical Field
The invention relates to the field of terminal interaction, and in particular to a terminal interaction method, device, equipment, and storage medium.
Background
A point cloud map is a map formed from individual points that each carry real-environment information; a user operates a virtual object in the point cloud map through a terminal, realizing interaction between the user and the virtual object. During interaction, each operation the user performs on the virtual object is fed back, and the user acts further on the virtual object according to the feedback; through these operations the virtual object comes to feel real, vivid, imaginative, and controllably interactive. Because the point cloud map can be built from the user's real environment, the interaction presents the virtual object as part of that environment, augmenting the real environment, enriching the user's perception and experience, and creating a sense of reality, presence, and immersion for the user.
Such interaction, however, is limited to a single user on a single terminal; multiple users cannot operate on a virtual object together.
In summary, the prior art cannot realize multi-user interaction.
Thus, there is a need for improvements and enhancements in the art.
Disclosure of Invention
To solve the above technical problems, the invention provides a terminal interaction method, device, equipment, and storage medium, solving the problem that the prior art cannot realize multi-user interaction.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a terminal interaction method, including:
acquiring each point cloud map corresponding to each terminal;
splicing the point cloud maps corresponding to the terminals to obtain a spliced point cloud map;
obtaining position conversion data information among the point cloud maps according to the point cloud maps and the point cloud splicing maps;
and controlling each terminal to interact in the point cloud splicing map according to the position conversion data information.
In one implementation, the splicing of the point cloud maps corresponding to the terminals to obtain a point cloud splicing map includes:
obtaining characteristic information corresponding to each point cloud map according to each point cloud map;
and splicing the point cloud maps according to the characteristic information corresponding to the point cloud maps to obtain a point cloud splicing map.
In one implementation, the obtaining, according to each point cloud map, feature information corresponding to each point cloud map includes:
obtaining point cloud densities corresponding to the point cloud maps according to the point cloud maps;
and when the point cloud density is smaller than a set value, obtaining the characteristic information corresponding to each point cloud map according to the point clouds in each point cloud map.
In one implementation, the obtaining, according to each point cloud map, feature information corresponding to each point cloud map includes:
obtaining point cloud densities corresponding to the point cloud maps according to the point cloud maps;
when the point cloud density is larger than or equal to a set value, applying a feature extraction algorithm to each point cloud map to extract feature point clouds to obtain key point clouds corresponding to each point cloud map;
and obtaining the characteristic information corresponding to each point cloud map according to the key point cloud.
In one implementation manner, the splicing each point cloud map according to the feature information corresponding to each point cloud map to obtain a point cloud spliced map includes:
obtaining characteristic values of the characteristic information corresponding to the point cloud maps according to the characteristic information corresponding to the point cloud maps;
according to the characteristic values, carrying out position transformation on each point cloud map, and recording the point cloud map after the position transformation as a point cloud rough transformation map;
calculating the distance between points in each point cloud rough transformation map;
and according to the distance, carrying out position transformation on each point cloud rough transformation map to obtain the point cloud splicing map.
In one implementation, the performing location transformation on each point cloud map according to the feature values, and recording the point cloud map after location transformation as a point cloud rough transformation map includes:
and according to the characteristic values, applying a random sampling consistency algorithm to each point cloud map to carry out position transformation to obtain point cloud rough transformation maps corresponding to the point cloud maps.
In one implementation, the performing, according to the distance, position transformation on each point cloud rough transformation map to obtain the point cloud stitching map includes:
according to the distances, obtaining the sum of the distances between all the points of the point cloud rough transformation maps;
and carrying out position transformation on each point cloud rough transformation map according to the sum of the distances between all points in each point cloud rough transformation map until the sum of the distances is the minimum value, and obtaining the point cloud splicing map.
In one implementation, the obtaining of the position conversion data information between the point cloud maps according to the point cloud maps and the point cloud stitching map includes:
obtaining coordinate transformation information of each point cloud map relative to the point cloud splicing map according to each point cloud map and the point cloud splicing map;
and obtaining coordinate transformation information among the point cloud maps in the position transformation data information according to the coordinate transformation information of the point cloud maps relative to the point cloud splicing map.
In one implementation, the obtaining each point cloud map corresponding to each terminal includes:
acquiring a real image of the environment where each terminal is located through each terminal;
and constructing a point cloud map of the environment where each terminal is located according to the real image.
In one implementation, the controlling each terminal to interact in the point cloud stitching map according to the position conversion data information includes:
obtaining a point cloud environment splicing map in the point cloud splicing map according to the point cloud splicing map, wherein the point cloud environment splicing map is obtained by splicing the point cloud maps of the environments where the terminals are located;
obtaining a target model through each terminal;
placing the target model in the point cloud environment stitching map;
and controlling each terminal to carry out interactive operation on the target model in the point cloud environment splicing map according to the position conversion data information.
In a second aspect, an embodiment of the present invention further provides a device for a terminal interaction method, where the device includes the following components:
the point cloud map acquisition module is used for acquiring each point cloud map corresponding to each terminal;
the splicing module is used for splicing the point cloud maps corresponding to the terminals to obtain a point cloud splicing map;
the position conversion module is used for obtaining position conversion data information among the point cloud maps according to the point cloud maps and the point cloud splicing maps;
and the interaction module is used for controlling each terminal to interact in the point cloud splicing map according to the position conversion data information.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and a terminal interaction program that is stored in the memory and is executable on the processor, and when the processor executes the terminal interaction program, the steps of the terminal interaction method are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a terminal interaction program is stored on the computer-readable storage medium, and when the terminal interaction program is executed by a processor, the steps of the terminal interaction method are implemented.
Beneficial effects: the invention stitches the point cloud maps corresponding to the terminals together to obtain a complete point cloud splicing map. The stitched point cloud maps still keep their own independent coordinate systems, but position conversion data information between the maps can be obtained from each map's position within the point cloud splicing map, so that the points of any two point cloud maps are placed in one-to-one correspondence. One terminal can then use the position conversion data information to operate in another terminal's point cloud map, and that terminal can likewise operate in the map where the first terminal is located, realizing interaction among multiple terminals. Multi-terminal interaction can improve the efficiency of interaction among users.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a schematic diagram of a FREAK sampling model according to the present invention;
FIG. 3 is a sparse point cloud map of the present invention;
FIG. 4 is a schematic diagram of the FPFH characteristics corresponding to the extracted point cloud chart of the present invention;
FIG. 5 is a point cloud stitching map of the present invention;
FIG. 6 is a flow chart of two client interactions of the present invention;
fig. 7 is a schematic block diagram of an internal structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solution of the invention is described clearly and completely below with reference to the embodiments and the accompanying drawings. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the invention.
Research shows that a point cloud map is a map formed from individual points that each carry real-environment information; a user operates a virtual object in the point cloud map through a terminal, realizing interaction between the user and the virtual object. During interaction, each operation the user performs on the virtual object is fed back, and the user acts further on the virtual object according to the feedback; through these operations the virtual object comes to feel real, vivid, imaginative, and controllably interactive. Because the point cloud map can be built from the user's real environment, the interaction presents the virtual object as part of that environment, augmenting the real environment, enriching the user's perception and experience, and creating a sense of reality, presence, and immersion for the user.
Such interaction, however, is limited to a single user on a single terminal; multiple users cannot operate on a virtual object together.
To solve the above technical problems, the invention provides a terminal interaction method, device, equipment, and storage medium, solving the problem that the prior art cannot realize multi-user interaction. In specific implementation, the point cloud maps corresponding to the individual devices are obtained; the point cloud maps are then stitched to obtain a point cloud splicing map; from the splicing map, position conversion data information among the point cloud maps is derived; and finally the terminals are controlled, through the position conversion data information, to interact in the point cloud splicing map. The invention can realize interaction among multiple terminals, and multi-terminal interaction can improve the efficiency of interaction among users.
For example, suppose there are two users, each holding one terminal. Point cloud maps A and B are first obtained from the two terminals; the two maps are then stitched together by an image matching algorithm into a point cloud splicing map C, which is presented to both user terminals simultaneously. While obtaining C, the coordinate conversion relationship (the position conversion data information) between point cloud map A and point cloud map B is also obtained. For example, a point a in map A has coordinates (a1, a2, a3). The user terminal corresponding to map B does not initially know a's coordinates in B's coordinate system, but since this embodiment has already obtained the coordinate transformation relationship between map A and map B, a's coordinates in B's coordinate system can be computed from it, and user terminal B can then perform corresponding operations on point a.
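To make this coordinate hand-off concrete, the sketch below applies a 4x4 homogeneous transform, standing in for the position conversion data between map A and map B, to express a point known in A's coordinate system in B's coordinate system. All numeric values (the rotation, the offset, and the point a) are illustrative assumptions, not data from the embodiment.

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical position conversion data between map A and map B:
# a 90-degree yaw plus a 1.5 m offset along x (illustrative values only).
theta = np.pi / 2
R_ab = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]])
T_ab = make_transform(R_ab, np.array([1.5, 0.0, 0.0]))

# Point a with coordinates (a1, a2, a3) in map A's coordinate system.
a_in_A = np.array([0.2, 0.4, 1.0, 1.0])   # homogeneous coordinates

# The same physical point expressed in map B's coordinate system,
# which is what lets terminal B operate on it directly.
a_in_B = T_ab @ a_in_A
print(a_in_B[:3])
```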
Exemplary method
The terminal interaction method of this embodiment can be applied to products with an image display function, such as televisions and computers. As shown in fig. 1, the terminal interaction method of this embodiment specifically includes the following steps:
and S100, acquiring each point cloud map corresponding to each terminal.
The point cloud map may be pre-stored in the terminal, or formed by the terminal capturing images of the user's surroundings and extracting feature points from them. Whether pre-stored or built from images, the point cloud maps must share common parts; only through these common parts can the subsequent stitching be completed.
When the point cloud map is built from images of the surrounding environment, each terminal device constructs it through the images as follows:
A number of consecutive frames of the environment are first captured and stored, and points are then distinguished based on ORB (Oriented FAST and Rotated BRIEF) and FREAK (Fast Retina Keypoint) features, to determine whether a feature point in a later frame is a new, not-yet-recorded feature point or a fresh estimate of a historical one. Because consecutive frames are captured, some feature points have already appeared in earlier frames; building the point cloud map from the same feature points repeatedly would cause data redundancy and increase the amount of computation, so in this embodiment redundant feature points are removed while feature points are extracted with ORB-FREAK. The specific principle of the ORB-FREAK algorithm is as follows:
ORB-FREAK uses the same Oriented FAST method as ORB to detect features, but describes them with the FREAK method rather than the BRIEF descriptor of classical ORB.
Building on FAST, Oriented FAST achieves scale invariance through an image pyramid and rotation invariance through an intensity-centroid orientation method. For an image block, it defines the moments of the block as:
$$m_{pq}=\sum_{x,y}x^{p}y^{q}I(x,y) \tag{1}$$
where $m_{pq}$ is the $(p+q)$-order spatial moment of the image block and $I(x,y)$ is the pixel intensity at the two-dimensional coordinate $(x,y)$. With $p=0$, $q=0$ one obtains the zeroth-order moment $m_{00}$, whose physical meaning is the area of the region; with $p=1$, $q=0$ and $p=0$, $q=1$ one obtains the first-order spatial moments $m_{10}$ and $m_{01}$. The centroid $C$ of the image block can then be expressed as
$$C=\left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right) \tag{2}$$
A vector $\overrightarrow{OC}$ is constructed from the corner point $O$ to the centroid $C$; the direction of the image block can then be defined as
$$\theta=\operatorname{atan2}(m_{01},\,m_{10}) \tag{3}$$
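As a worked illustration of formulas (1)-(3), the sketch below computes the first-order moments of a grayscale patch and derives its orientation. It is a minimal numpy rendering of the intensity-centroid idea, not the embodiment's implementation.

```python
import numpy as np

def patch_orientation(patch):
    """Orientation of an image block from its spatial moments,
    theta = atan2(m01, m10), as in Oriented FAST (formulas (1)-(3))."""
    grid = np.mgrid[0:patch.shape[0], 0:patch.shape[1]].astype(np.float64)
    ys, xs = grid[0], grid[1]
    # Center the coordinates so the corner point O sits at the origin.
    xs -= (patch.shape[1] - 1) / 2.0
    ys -= (patch.shape[0] - 1) / 2.0
    m10 = np.sum(xs * patch)   # first-order moment in x
    m01 = np.sum(ys * patch)   # first-order moment in y
    return np.arctan2(m01, m10)
```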
The FREAK descriptor builds its sampling pattern by imitating the human retina, as shown in fig. 2: the center of each circle is a sampling point and the circle is that point's receptive field, smoothed with a Gaussian kernel. The farther a sampling point is from the image center, the larger the radius of its Gaussian kernel, and the receptive fields of neighboring sampling points overlap. To estimate a keypoint's rotation, FREAK computes a global orientation from the local gradients of sampling pairs whose receptive fields are symmetric about the center, which yields rotation invariance: even if the current frame is rotated relative to the previous frame, the feature points of the current frame can still be extracted and will not duplicate the feature points of the previous frame.
The FREAK descriptor is a binary string $F$:
$$F=\sum_{0\le a<N}2^{a}\,T(P_a) \tag{4}$$
where $P_a$ is a pair of receptive fields, $N$ is the length of the descriptor, and $T(P_a)$ is a binary test:
$$T(P_a)=\begin{cases}1, & I(P_a^{r_1})-I(P_a^{r_2})>0\\ 0, & \text{otherwise}\end{cases} \tag{5}$$
where $I(P_a^{r_1})$ and $I(P_a^{r_2})$ are the smoothed intensities of the two receptive fields of the pair $P_a$. Matching of FREAK descriptors can then be performed with a method such as the Hamming distance. For the point cloud information obtained from the underlying interface, ORB-FREAK features are used to obtain a point's descriptor $F$ and determine whether the point is a new feature point or a fresh estimate of a historical feature point. The points extracted from the surrounding images by ORB-FREAK form the sparse point cloud shown in fig. 3, and the point cloud map of this embodiment is obtained by further processing this sparse point cloud.
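A minimal sketch of the ORB-FREAK pipeline just described, using OpenCV. The FREAK descriptor lives in the contrib module `cv2.xfeatures2d`, so `opencv-contrib-python` is assumed, and the file names and parameters are placeholders. Keypoints are detected with Oriented FAST via ORB, described with FREAK, and matched across frames with the Hamming distance so that repeated feature points can be discarded.

```python
import cv2

# Two hypothetical consecutive frames; any overlapping grayscale images work.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)      # Oriented FAST keypoint detector
freak = cv2.xfeatures2d.FREAK_create()    # retina-sampling binary descriptor

kp_prev = orb.detect(prev, None)
kp_curr = orb.detect(curr, None)
kp_prev, des_prev = freak.compute(prev, kp_prev)
kp_curr, des_curr = freak.compute(curr, kp_curr)

# FREAK descriptors are binary strings, so Hamming distance is the natural metric.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_prev, des_curr)

# Matched keypoints are re-observations of historical feature points and can be
# deduplicated; unmatched current-frame keypoints are new landmarks.
seen = {m.trainIdx for m in matches}
new_points = [kp for i, kp in enumerate(kp_curr) if i not in seen]
```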
Constructing the point cloud map from real images of the real environment gives the user a sense of augmented reality (AR) when interacting on the map. Augmented reality technology aims to fuse a computer-generated virtual environment with the real environment, so that interaction in the augmented environment feels real, vivid, imaginative, and controllable to the user. An AR system has three basic features: virtual-real fusion, real-time interaction, and three-dimensional registration. Virtual-real fusion presents the virtual environment as part of the real environment, augmenting the real environment and thereby enhancing the user's experience. Real-time interaction means the system gives natural, immediate feedback to the user's actions in the augmented environment; if interaction suffers large delays, the user's sense of participation drops sharply and their perception is impaired. Three-dimensional registration aims to match the virtual environment information accurately into the real environment, creating reality, presence, and immersion.
In this embodiment, when the point cloud map is constructed from real images, localization in the real environment and the mapping of its features are considered jointly through SLAM (Simultaneous Localization and Mapping). SLAM is the technique by which a device builds or updates a map of an unknown environment while simultaneously tracking its own pose within that map, and it is a research hotspot in the field of computer vision. Visual SLAM has attracted particular attention for its low cost and wide applicability, but it depends heavily on feature information from the surroundings and cannot handle texture-poor or dynamic scenes. An inertial measurement unit (IMU) measures the sensor's angular velocity and acceleration, which are clearly complementary to a visual sensor, and IMUs are now commonly fitted on mobile terminal devices, so a visual-inertial SLAM scheme is well suited to the three-dimensional registration module of an augmented reality application.
And S200, splicing the point cloud maps corresponding to the terminals to obtain a spliced point cloud map.
In this embodiment, the stitching path depends on the point cloud density of the maps: either the preprocessed point clouds are used directly for point cloud map stitching, or the preprocessed point clouds are processed further before stitching. The following describes in detail the case where the point cloud density is greater than or equal to the set value, in which the preprocessed point clouds are processed further before stitching; it comprises the following steps S201, S202, S203, S204, S205, and S206:
s201, applying a feature extraction algorithm to each point cloud map to extract feature point clouds to obtain key point clouds corresponding to the point cloud maps.
When the image captured by a terminal covers a large scene, it contains a large number of pixel points. ISS (Intrinsic Shape Signatures) keypoints can then be extracted; these keypoints further abstract the features of the point cloud. The process of extracting the keypoint cloud with ISS is as follows:
ISS is a feature-point extraction method based on eigenvalue analysis. Each keypoint $S_i$ is written as
$$S_i=\{p_i,\,f_i\} \tag{6}$$
where $f_i$ represents the intrinsic shape feature, which is independent of the viewpoint. "Intrinsic" here refers to the feature of a point within an ellipsoid formed around the point cloud; the ellipsoid is derived from the covariance matrix of the point and its neighbors, the magnitudes of the eigenvalues being in effect the lengths of the ellipsoid's axes, so the shape of the ellipsoid is an abstract summary of the distribution of the neighboring points. The feature $f_i$ in turn is written as
$$f_i=\{p_i,\,d_i\} \tag{7}$$
where $p_i$ represents the position of the point itself and $d_i$ is the shape descriptor of the point.
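A minimal sketch of ISS keypoint extraction using Open3D's built-in implementation, assuming Open3D is available; the file name and radii are illustrative and scene-dependent, not values from the embodiment.

```python
import open3d as o3d

# Hypothetical preprocessed point cloud from one terminal.
pcd = o3d.io.read_point_cloud("map_a.ply")

# ISS keypoints via eigenvalue analysis of each neighborhood's covariance
# ellipsoid; gamma_21 and gamma_32 bound the eigenvalue ratios.
keypoints = o3d.geometry.keypoint.compute_iss_keypoints(
    pcd,
    salient_radius=0.05,   # neighborhood radius for the covariance ellipsoid
    non_max_radius=0.04,   # non-maximum suppression radius
    gamma_21=0.975,        # threshold on lambda2 / lambda1
    gamma_32=0.975)        # threshold on lambda3 / lambda2
print(len(keypoints.points), "keypoints from", len(pcd.points), "points")
```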
S202, obtaining the characteristic information corresponding to each point cloud map according to the key point cloud.
In this embodiment, the region of the keypoint cloud within the point cloud map is obtained from the keypoint cloud, and the fast point feature histogram (FPFH) of that region is then computed; the information corresponding to this feature histogram is the feature information of the point cloud map in this embodiment. The principle of the FPFH is described below:
PFH (Point Feature Histogram) captures the geometry around a point by analyzing the differences between the normal directions in its neighborhood; the algorithm pairs all points near the target point (not only the selected keypoint with its neighbors, but also the neighbors with each other). For each pair, a fixed coordinate system is computed from their normals, which allows the difference between the normals to be encoded in three angular variables. These variables are stored together with the Euclidean distance between the points, and once all pairs have been computed they are binned into a histogram.
FPFH is an accelerated version of PFH: it considers only the direct connections between the current keypoint and its neighbors and removes the extra links between the neighbors themselves, yielding the SPFH (Simplified Point Feature Histogram); the k-neighborhood of each point is then determined again, and the neighboring SPFH values are combined by weighting to obtain the final FPFH histogram:
$$\mathrm{FPFH}(p)=\mathrm{SPFH}(p)+\frac{1}{k}\sum_{i=1}^{k}\frac{1}{\omega_{i}}\,\mathrm{SPFH}(p_{i}) \tag{8}$$
Compared with PFH, whose time complexity is $O(nk^{2})$, this embodiment's use of FPFH reduces the complexity to $O(nk)$. Fig. 4 shows the feature information of a point cloud map obtained with FPFH in this embodiment.
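A minimal sketch of computing FPFH features with Open3D; the voxel size and search radii are illustrative assumptions. Normals must be estimated first because FPFH is built from normal-direction differences.

```python
import open3d as o3d

def fpfh_features(pcd, voxel=0.05):
    """Downsample, estimate normals, then compute FPFH.
    Radii are illustrative multiples of the voxel size."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh   # fpfh.data is a 33 x N matrix of histograms
```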
And S203, obtaining the characteristic value of the characteristic information corresponding to each point cloud map according to the characteristic information corresponding to each point cloud map.
The fast point feature histogram (FPFH) obtained in step S202 contains many feature values of the point cloud; representative feature values, such as the mean value of the point cloud's points, are selected from among them.
S204, according to the characteristic values, carrying out position transformation on each point cloud map, and recording the point cloud map after the position transformation as a point cloud rough transformation map, wherein the step comprises the following steps: and according to the characteristic values, applying a random sampling consistency algorithm to each point cloud map to carry out position transformation to obtain point cloud rough transformation maps corresponding to the point cloud maps.
In this embodiment, RANSAC (Random Sample Consensus) is applied to the point cloud maps to complete, according to the feature values, the matching and stitching of two or more maps. The RANSAC principle is as follows:
RANSAC realizes coarse matching and splicing among point cloud maps by repeating the following steps:
(1) Two or more points whose feature values match are randomly selected from the point cloud maps and taken as a random subset of the original data; this subset is treated as the hypothetical inliers.
(2) A hypothetical model is fitted such that all points in (1) are regarded as inliers.
(3) All other data were tested against the model in (2). Points that fit well to the model are considered part of the consensus set according to some model-specific loss function.
(4) If enough points have been classified as part of the consensus set, the estimation model is appropriate.
(5) The model is refined using all members of the consensus set to re-estimate the model.
This process is repeated a given number of times, each time producing a new hypothetical model; if the new model's consensus set is larger than that of the previously saved model, the refined model is retained. Repeating the above steps completes the coarse matching and stitching of the point cloud maps.
In essence, RANSAC finds the parts of two point cloud maps that have identical or similar feature values; stitching those parts together yields the coarsely matched point cloud splicing map.
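A minimal sketch of this coarse matching step using Open3D's feature-matching registration (Open3D >= 0.12 assumed for this signature; the distance threshold and iteration counts are illustrative). It samples correspondences between the two FPFH feature sets, hypothesizes a transform, and keeps the model with the largest consensus set, as in steps (1)-(5) above.

```python
import open3d as o3d

def coarse_align(src, tgt, src_fpfh, tgt_fpfh, dist=0.15):
    """RANSAC coarse registration on FPFH correspondences (a sketch)."""
    reg = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh,
        mutual_filter=True,
        max_correspondence_distance=dist,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,   # three sampled correspondences per hypothesized model
        checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return reg.transformation   # 4x4 coarse transform for the rough transformation map
```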
And S205, calculating the distance between the points in each point cloud rough transformation map.
S206, performing position transformation on each point cloud rough transformation map according to the distances to obtain the point cloud splicing map, which includes the following steps: according to the distances, obtaining the sum of the distances between all the points of the point cloud rough transformation maps; and performing position transformation on each point cloud rough transformation map according to that sum until the sum of the distances reaches its minimum, obtaining the point cloud splicing map.
The essence of steps S205 and S206 is to use the ICP algorithm to position-transform the point cloud rough transformation maps and so complete the fine matching and stitching of the point cloud maps, finally obtaining the point cloud splicing map required by this embodiment. The specific process is as follows:
the ICP algorithm finds the nearest point (P) in the target point cloud P and the source point cloud Q to be matched (namely, in two point cloud maps, the point in one point cloud map is used as the target point cloud, and the point in the other point cloud map is used as the source point cloud) according to certain constraint conditionsi,qi) The optimal matching parameters R and t are then calculated such that the error function E (R, t) is minimized.
Figure BDA0003269668980000121
Where n is the number of nearest neighbor point pairs, piFor a point in the target point cloud P, qiIs the source point in cloud Q and piAnd R is a rotation matrix and t is a translation vector.
The ICP algorithm steps are as follows:
(1) Take a point set $p_{i}\in P$ from the target point cloud $P$;
(2) find the corresponding point set $q_{i}\in Q$ in the source point cloud $Q$ such that $\|q_{i}-p_{i}\|$ is minimal;
(3) compute the rotation matrix $R$ and translation vector $t$ that minimize the error function (in this embodiment, each point cloud rough transformation map is rotated and translated, the error function is evaluated after each rotation and translation, and the positions of the maps corresponding to the minimum among those values are found; these are the positions to which the rough transformation maps must be aligned for fine matching and stitching);
(4) rotate and translate $p_{i}$ using the $R$ and $t$ obtained in the previous step to obtain the new corresponding point set:
$$p_{i}'=R\,p_{i}+t \tag{10}$$
(5) compute the average distance between $p_{i}'$ and the corresponding point set $q_{i}$:
$$d=\frac{1}{n}\sum_{i=1}^{n}\left\|p_{i}'-q_{i}\right\|^{2} \tag{11}$$
(6) If $d$ is smaller than a given threshold, or the number of iterations exceeds the preset maximum, stop the iteration; otherwise return to step (2) until the convergence condition is met, obtaining the required point cloud splicing map (that is, the map in which all point cloud maps have been finely matched and stitched). The point cloud splicing map obtained through S201 to S206 is shown in fig. 5.
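The sketch below implements steps (1)-(6) directly in numpy/scipy for two clouds P (target) and Q (source); step (3) uses the closed-form SVD (Kabsch) solution for the optimal R and t. It is a minimal illustration under those assumptions, not the embodiment's code; an equivalent library call is Open3D's registration_icp.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(P, Q, max_iter=50, tol=1e-6):
    """Point-to-point ICP following steps (1)-(6).
    P: (n,3) target cloud whose points p_i are moved; Q: (m,3) source cloud."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(Q)                            # accelerates step (2)
    for _ in range(max_iter):                    # iteration cap of step (6)
        P_cur = P @ R.T + t                      # step (4): apply current R, t
        d, idx = tree.query(P_cur)               # steps (1)-(2): nearest q_i per p_i
        Qc = Q[idx]
        # Step (3): closed-form R, t minimizing E(R, t) via SVD (Kabsch).
        mu_p, mu_q = P_cur.mean(axis=0), Qc.mean(axis=0)
        H = (P_cur - mu_p).T @ (Qc - mu_q)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ S @ U.T
        t_step = mu_q - R_step @ mu_p
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the motion
        if np.mean(d ** 2) < tol:                # steps (5)-(6): average distance test
            break
    return R, t                                  # q_i is approximated by R p_i + t
```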
When the surrounding-environment images captured by the terminals correspond to a small scene and contain few pixel points, the preprocessed point cloud takes the place of the key point cloud of step S201, and keypoints need not be extracted from the point cloud maps. The small-scene point cloud maps can likewise be stitched by performing steps S202 through S206.
In this embodiment, different methods are used to obtain the point cloud splicing map depending on the scene size, because for a larger scene, using the ISS keypoint cloud as the subsequent input greatly increases the matching speed at only a small loss of matching precision (Table 1 compares the time required with and without the keypoint cloud), whereas for smaller scenes mismatching easily occurs if keypoint clouds are used.
TABLE 1

              Time (ms)    MSE
Without ISS        6093    0.1322
With ISS            464    0.2107
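A sketch of the branching just described: large scenes feed ISS keypoints into the FPFH/RANSAC/ICP chain for speed, while small scenes use the preprocessed cloud to avoid mismatching. The point-count threshold stands in for the embodiment's point cloud density test and is a hypothetical value.

```python
import open3d as o3d

LARGE_SCENE_POINTS = 50000   # hypothetical stand-in for the density set value

def stitching_input(pcd):
    """Choose the cloud fed to feature extraction and matching by scene size,
    mirroring step S201 and Table 1."""
    if len(pcd.points) >= LARGE_SCENE_POINTS:
        # Large scene: abstract the cloud to ISS keypoints for speed.
        return o3d.geometry.keypoint.compute_iss_keypoints(pcd)
    # Small scene: keypoints would risk mismatching, use the cloud directly.
    return pcd
```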
For example, suppose a first terminal and a second terminal obtain point cloud map A and point cloud map B respectively. The feature information of map A and of map B is computed first, and the positions where the feature information of the two maps matches are stitched together to obtain a coarsely matched spliced point cloud map. Then the distances between points of map A and points of map B within the coarsely matched map are computed; the two maps are rotated and translated and the distances recomputed, and the relative position of the two maps corresponding to the minimum among all these distances is found. The map formed by the two point cloud maps finely matched at that position is the point cloud splicing map.
And S300, obtaining position conversion data information among the point cloud maps according to the point cloud maps and the point cloud splicing maps.
The position conversion data information of the present embodiment is relative coordinate information between the point cloud maps.
And S400, controlling each terminal to interact in the point cloud splicing map according to the position conversion data information. This includes: obtaining a point cloud environment splicing map within the point cloud splicing map, the environment splicing map being obtained by stitching the point cloud maps of the environments where the terminals are located; obtaining a target model through each terminal; placing the target model in the point cloud environment splicing map; and controlling each terminal, according to the position conversion data information, to perform interactive operations on the target model in the point cloud environment splicing map.
For example, a point cloud splicing map is formed by stitching point cloud map A and point cloud map B, and a target model is placed in map A, where one terminal user is located. The coordinate systems of map A and map B are not unified; but once the two maps have been stitched, the coordinate position of the target model in map A is known, and from the position conversion data information between the two maps, its coordinate position in map B's coordinate system can also be determined, so the other terminal user can control the target model directly.
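A sketch of how the position conversion data can be derived and applied once stitching has placed both maps in a common frame: if T_a and T_b take map A and map B into the stitched frame, then inv(T_b) @ T_a converts coordinates from A's system to B's. The transforms and the model position here are hypothetical values.

```python
import numpy as np

def relative_transform(T_a, T_b):
    """Position conversion data between two maps, given each map's 4x4
    transform into the stitched frame: x_B = inv(T_b) @ T_a @ x_A."""
    return np.linalg.inv(T_b) @ T_a

# Hypothetical example: map A coincides with the stitched frame, map B is
# offset 2 m along x, and the target model sits at m_A in map A.
T_a = np.eye(4)
T_b = np.eye(4)
T_b[:3, 3] = [2.0, 0.0, 0.0]
m_A = np.array([0.5, 0.0, 0.0, 1.0])          # homogeneous coordinates
m_B = relative_transform(T_a, T_b) @ m_A      # coordinates terminal B can use
```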
The following describes the interaction process of multiple user terminals according to the present invention by taking two clients (user terminals) as an example:
as shown in fig. 6, a client a and a client b are both constructed with sparse point cloud maps first, then point cloud preprocessing is performed on the sparse point cloud maps of the client a and the client b, ISS key points are then extracted, FPFH features of key point areas are calculated, and finally RANSAC coarse matching and ICP fine matching are performed in sequence according to the FPFH features, so that a point cloud stitching map is obtained, coordinate system association information between the two sparse point cloud maps is obtained according to the point cloud stitching map, and the coordinate system association information is fed back to the client a and the client b.
During operation, the device poses of client a and client b are bound to the rendering camera: when a client rotates or moves, the virtual image shown by the rendering camera rotates or moves correspondingly, giving the user the impression that the virtual content exists in the real world. Meanwhile, the user can capture a real image with the camera and place a virtual object in it; because the virtual image is determined by the relative position of the virtual object and the rendering camera, no sudden jumps in the image occur. The user can dynamically load an external model packaged with AssetBundle and place it in the real image scene with a tap. The pose of the virtual object is tied to the information of the three-dimensional registration points around the placed target point rather than to an absolute position in virtual coordinates, and it is estimated and updated in step with the device pose, achieving a better anchoring effect. The user can also interact with the virtual object through gestures such as translation, rotation, and zoom.
After client a and client b have both placed the virtual object at the designated position, when client a performs an operation on the object, client b can see the resulting state change; the two clients can also operate on the object simultaneously, achieving multi-user information synchronization. Synchronization between the clients uses a hybrid mode. For user operations, frame synchronization is used: the server broadcasts the sender's instruction, and all clients derive the same output from the same input. During testing, however, the local user's own operations were sometimes not immediate; since the application's demand for time consistency is not very high, the local user is allowed to execute an operation instruction directly upon sending it and to ignore the subsequently broadcast copy, which keeps local operation fluent. For the subsequent state of the virtual object, state synchronization is used: the server broadcasts the object's state at fixed intervals, avoiding operation defects when some clients lose packets. Meanwhile, to keep the motion of a controlled object from appearing too abrupt when the state-synchronization interval is large, interpolation is performed locally at the client: after receiving a target value, the client approaches it gradually over the following frames until the difference is smaller than a criterion, optimizing the client's perceived performance.
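A minimal sketch of the client-side interpolation just mentioned: each rendered frame moves the local copy of a synced quantity a fraction of the way toward the last broadcast state until the difference falls below the criterion. The rate and criterion values are illustrative assumptions.

```python
def approach_target(current, target, rate=0.2, eps=1e-3):
    """One interpolation step toward the last state-sync broadcast.
    Called once per rendered frame per synced scalar (e.g. a position axis)."""
    delta = target - current
    if abs(delta) < eps:            # difference below the criterion: snap to target
        return target
    return current + rate * delta   # approach gradually to avoid abrupt motion
```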
In summary, the invention stitches the point cloud maps corresponding to the terminals together to obtain a complete point cloud splicing map. The stitched point cloud maps still keep their own independent coordinate systems, but position conversion data information between the maps can be obtained from each map's position within the splicing map, so that the points of any two point cloud maps are placed in one-to-one correspondence. One terminal can then use the position conversion data information to operate in another terminal's point cloud map, and that terminal can likewise operate in the map where the first terminal is located, realizing interaction among multiple terminals. Multi-terminal interaction can improve the efficiency of interaction among users.
Exemplary devices
The embodiment also provides a device of the terminal interaction method, and the device comprises the following components:
the point cloud map acquisition module is used for acquiring each point cloud map corresponding to each terminal;
the splicing module is used for splicing the point cloud maps corresponding to the terminals to obtain a point cloud splicing map;
the position conversion module is used for obtaining position conversion data information among the point cloud maps according to the positions of the point cloud maps in the point cloud splicing maps;
the interaction module is used for controlling each terminal to interact in the point cloud splicing map according to the position conversion data information.
Based on the above embodiments, the present invention further provides a terminal device, whose functional block diagram can be shown in fig. 7, where the terminal device includes a processor, a memory, a network interface, a display screen, and a temperature sensor, which are connected by a system bus. Wherein the processor of the terminal device is configured to provide computing and control capabilities. The memory of the terminal equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the terminal device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a terminal interaction method. The display screen of the terminal equipment can be a liquid crystal display screen or an electronic ink display screen, and the temperature sensor of the terminal equipment is arranged in the terminal equipment in advance and used for detecting the operating temperature of the internal equipment.
It will be understood by those skilled in the art that the block diagram of fig. 7 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the terminal device to which the solution of the present invention is applied, and a specific terminal device may include more or less components than those shown in the figure, or may combine some components, or have different arrangements of components.
In one embodiment, a terminal device is provided, where the terminal device includes a memory, a processor, and a terminal interaction program stored in the memory and executable on the processor, and when the processor executes the terminal interaction program, the processor implements the following operation instructions:
acquiring each point cloud map corresponding to each terminal;
splicing the point cloud maps corresponding to the terminals to obtain a spliced point cloud map;
obtaining position conversion data information among the point cloud maps according to the point cloud maps and the point cloud splicing maps;
and controlling each terminal to interact in the point cloud splicing map according to the position conversion data information.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
In summary, the present invention discloses a terminal interaction method, apparatus, device and storage medium, wherein the method comprises: acquiring each point cloud map corresponding to each terminal; splicing the point cloud maps corresponding to the terminals to obtain a spliced point cloud map; obtaining position conversion data information among the point cloud maps according to the point cloud maps and the point cloud splicing maps; and controlling each terminal to interact in the point cloud splicing map according to the position conversion data information. The invention can realize the interaction of a plurality of terminals, and the interaction efficiency among users can be improved by multi-terminal interaction.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (13)

1. A terminal interaction method is characterized by comprising the following steps:
acquiring each point cloud map corresponding to each terminal;
splicing the point cloud maps corresponding to the terminals to obtain a spliced point cloud map;
obtaining position conversion data information among the point cloud maps according to the point cloud maps and the point cloud splicing maps;
and controlling each terminal to interact in the point cloud splicing map according to the position conversion data information.
2. The terminal interaction method of claim 1, wherein the splicing of the point cloud maps corresponding to the terminals to obtain a point cloud splicing map comprises:
obtaining characteristic information corresponding to each point cloud map according to each point cloud map;
and splicing the point cloud maps according to the characteristic information corresponding to the point cloud maps to obtain a point cloud splicing map.
3. The terminal interaction method of claim 2, wherein the obtaining of the feature information corresponding to each point cloud map according to each point cloud map comprises:
obtaining point cloud densities corresponding to the point cloud maps according to the point cloud maps;
and when the point cloud density is smaller than a set value, obtaining the characteristic information corresponding to each point cloud map according to the point clouds in each point cloud map.
4. The terminal interaction method of claim 2, wherein the obtaining of the feature information corresponding to each point cloud map according to each point cloud map comprises:
obtaining point cloud densities corresponding to the point cloud maps according to the point cloud maps;
when the point cloud density is larger than or equal to a set value, applying a feature extraction algorithm to each point cloud map to extract feature point clouds to obtain key point clouds corresponding to each point cloud map;
and obtaining the characteristic information corresponding to each point cloud map according to the key point cloud.
5. The terminal interaction method according to any one of claims 2 to 4, wherein the splicing of the point cloud maps according to the feature information corresponding to the point cloud maps to obtain the point cloud splicing map comprises:
obtaining characteristic values of the characteristic information corresponding to the point cloud maps according to the characteristic information corresponding to the point cloud maps;
according to the characteristic values, carrying out position transformation on each point cloud map, and recording the point cloud map after the position transformation as a point cloud rough transformation map;
calculating the distance between points in each point cloud rough transformation map;
and according to the distance, carrying out position transformation on each point cloud rough transformation map to obtain the point cloud splicing map.
6. The terminal interaction method according to claim 5, wherein the performing location transformation on each point cloud map according to the feature value and recording the point cloud map after location transformation as a point cloud rough transformation map comprises:
and according to the characteristic values, applying a random sampling consistency algorithm to each point cloud map to carry out position transformation to obtain point cloud rough transformation maps corresponding to the point cloud maps.
7. The terminal interaction method of claim 5, wherein the performing the position transformation on each point cloud rough transformation map according to the distance to obtain the point cloud stitching map comprises:
according to the distances, obtaining the sum of the distances between all the points of the point cloud rough transformation maps;
and carrying out position transformation on each point cloud rough transformation map according to the sum of the distances between all points in each point cloud rough transformation map until the sum of the distances is the minimum value, and obtaining the point cloud splicing map.
8. The terminal interaction method of claim 1, wherein the obtaining of the position conversion data information between the point cloud maps according to the point cloud maps and the point cloud stitching map comprises:
obtaining coordinate transformation information of each point cloud map relative to the point cloud splicing map according to each point cloud map and the point cloud splicing map;
and obtaining coordinate transformation information among the point cloud maps in the position transformation data information according to the coordinate transformation information of the point cloud maps relative to the point cloud splicing map.
9. The terminal interaction method of claim 1, wherein the obtaining each point cloud map corresponding to each terminal comprises:
acquiring a real image of the environment where each terminal is located through each terminal;
and constructing a point cloud map of the environment where each terminal is located according to the real image.
10. The terminal interaction method of claim 9, wherein the controlling each terminal to interact in the point cloud stitching map according to the position conversion data information comprises:
obtaining a point cloud environment splicing map in the point cloud splicing map according to the point cloud splicing map, wherein the point cloud environment splicing map is obtained by splicing the point cloud maps of the environments where the terminals are located;
obtaining a target model through each terminal;
placing the target model in the point cloud environment stitching map;
and controlling each terminal to carry out interactive operation on the target model in the point cloud environment splicing map according to the position conversion data information.
11. A device of a terminal interaction method is characterized by comprising the following components:
the point cloud map acquisition module is used for acquiring each point cloud map corresponding to each terminal;
the splicing module is used for splicing the point cloud maps corresponding to the terminals to obtain a point cloud splicing map;
the position conversion module is used for obtaining position conversion data information among the point cloud maps according to the point cloud maps and the point cloud splicing maps;
and the interaction module is used for controlling each terminal to interact in the point cloud splicing map according to the position conversion data information.
12. A terminal device, characterized in that the terminal device comprises a memory, a processor and a terminal interaction program stored in the memory and operable on the processor, and the processor implements the steps of the terminal interaction method according to any one of claims 1 to 4 when executing the terminal interaction program.
13. A computer-readable storage medium, having a terminal interaction program stored thereon, which, when executed by a processor, implements the steps of the terminal interaction method according to any one of claims 1 to 4.
CN202111097732.3A 2021-09-18 2021-09-18 Terminal interaction method, device, equipment and storage medium Pending CN113886510A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111097732.3A CN113886510A (en) 2021-09-18 2021-09-18 Terminal interaction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111097732.3A CN113886510A (en) 2021-09-18 2021-09-18 Terminal interaction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113886510A true CN113886510A (en) 2022-01-04

Family

ID=79009911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111097732.3A Pending CN113886510A (en) 2021-09-18 2021-09-18 Terminal interaction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113886510A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511673A (en) * 2022-01-26 2022-05-17 哈尔滨工程大学 Improved ICP-based seabed local environment preliminary construction method
CN116030134A (en) * 2023-02-14 2023-04-28 长沙智能驾驶研究院有限公司 Positioning method, apparatus, device, readable storage medium and program product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635705A (en) * 2008-07-23 2010-01-27 上海赛我网络技术有限公司 Interaction method based on three-dimensional virtual map and figure and system for realizing same
CN108133458A (en) * 2018-01-17 2018-06-08 视缘(上海)智能科技有限公司 A kind of method for automatically split-jointing based on target object spatial point cloud feature
CN111201797A (en) * 2017-10-12 2020-05-26 微软技术许可有限责任公司 Point-to-point remote location for devices

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635705A (en) * 2008-07-23 2010-01-27 上海赛我网络技术有限公司 Interaction method based on three-dimensional virtual map and figure and system for realizing same
CN111201797A (en) * 2017-10-12 2020-05-26 微软技术许可有限责任公司 Point-to-point remote location for devices
CN108133458A (en) * 2018-01-17 2018-06-08 视缘(上海)智能科技有限公司 A kind of method for automatically split-jointing based on target object spatial point cloud feature

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511673A (en) * 2022-01-26 2022-05-17 哈尔滨工程大学 Improved ICP-based seabed local environment preliminary construction method
CN114511673B (en) * 2022-01-26 2022-12-09 哈尔滨工程大学 Improved ICP-based seabed local environment preliminary construction method
CN116030134A (en) * 2023-02-14 2023-04-28 长沙智能驾驶研究院有限公司 Positioning method, apparatus, device, readable storage medium and program product

Similar Documents

Publication Publication Date Title
Boukhayma et al. 3d hand shape and pose from images in the wild
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN109657583B (en) Face key point detection method and device, computer equipment and storage medium
US11222471B2 (en) Implementing three-dimensional augmented reality in smart glasses based on two-dimensional data
CN111598993B (en) Three-dimensional data reconstruction method and device based on multi-view imaging technology
CN113012282B (en) Three-dimensional human body reconstruction method, device, equipment and storage medium
CN109683699B (en) Method and device for realizing augmented reality based on deep learning and mobile terminal
CN112614213A (en) Facial expression determination method, expression parameter determination model, medium and device
US20240046557A1 (en) Method, device, and non-transitory computer-readable storage medium for reconstructing a three-dimensional model
CN113034652A (en) Virtual image driving method, device, equipment and storage medium
CN113689503B (en) Target object posture detection method, device, equipment and storage medium
CN113886510A (en) Terminal interaction method, device, equipment and storage medium
WO2022052782A1 (en) Image processing method and related device
CN113643366B (en) Multi-view three-dimensional object attitude estimation method and device
CN116977522A (en) Rendering method and device of three-dimensional model, computer equipment and storage medium
US20230401799A1 (en) Augmented reality method and related device
CN115578515B (en) Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device
CN113593001A (en) Target object three-dimensional reconstruction method and device, computer equipment and storage medium
JP2023131117A (en) Joint perception model training, joint perception method, device, and medium
CN114972634A (en) Multi-view three-dimensional deformable human face reconstruction method based on feature voxel fusion
CN114494395A (en) Depth map generation method, device and equipment based on plane prior and storage medium
CN117237431A (en) Training method and device of depth estimation model, electronic equipment and storage medium
CN114882106A (en) Pose determination method and device, equipment and medium
CN110009717B (en) Animation figure binding recording system based on monocular depth map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination