CN113160421A - Space type real object interaction virtual experiment method based on projection - Google Patents

Space type real object interaction virtual experiment method based on projection

Info

Publication number
CN113160421A
Authority
CN
China
Prior art keywords
data
point cloud
cloud data
pose
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110088261.3A
Other languages
Chinese (zh)
Inventor
袁庆曙
王若楠
潘志庚
柳嘉鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Normal University
Original Assignee
Hangzhou Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Normal University
Priority to CN202110088261.3A
Publication of CN113160421A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The invention discloses a projection-based spatial real-object interactive virtual experiment method. The system implementing the method comprises a real object, a depth camera, a data processor and a projector, and the method comprises the following steps: the data processor constructs an actual experiment scene and a virtual experiment scene containing the real object; the depth camera acquires depth data of the actual experiment scene and converts it into point cloud data in real time according to the camera's internal parameters; the data processor segments the real object's point cloud from the scene point cloud, determines the real object's pose data from that point cloud, and corrects the pose of the corresponding digital model in the virtual experiment scene accordingly; and the projector projects the pose-corrected virtual experiment scene into the actual experiment scene. The method imposes no interaction burden on the user, makes the interaction intent easy to infer, keeps the input and output spaces unified rather than isolated, and supports cooperative operation by multiple users.

Description

Space type real object interaction virtual experiment method based on projection
Technical Field
The invention relates to the technical field of augmented reality, and in particular to a projection-based spatial real-object interactive virtual experiment method.
Background
Spatial augmented reality is one of the three forms of augmented reality (head-mounted, handheld and spatial): it presents images of virtual objects directly in physical space by means of various devices to achieve a visual fusion of the virtual and the real. In a spatial augmented reality system, virtual content does not depend on a display medium in digital space, such as a screen or a head-mounted display, but is presented directly onto real-world objects, so that the user can interact with the virtual content in physical space. Spatial augmented reality has been widely applied in industry, transportation, medicine and other fields, and its application to teaching is one of the hot spots of current research.
Human-computer interaction is the process by which people communicate with computers, and interaction techniques are the means of carrying on this dialogue through computer input and output devices. Traditional interaction modes impose great limitations on presentation effect and user experience. Research has shown that introducing real objects into an interactive system to form a tangible user interface (TUI), allowing the user to manipulate physical objects in the real world directly in order to interact with the digital space, provides a natural interaction experience.
At present, online teaching has become a hot spot in the education industry; however, experimental teaching is difficult to conduct online, and research on virtual experiments and their simulation technologies has attracted the attention of educators.
The patent application with publication number CN111897422A discloses a real-object interaction method for real-time fusion of virtual and real objects, comprising: calculating the mapping relation between the virtual-reality-device coordinate systems of the server side and the client side, including a rotation matrix and a displacement vector, so that object positions share a unified coordinate representation; obtaining a template of the target object and modelling its features; tracking the position of the object on the desktop, extracting the object's contour, obtaining its principal direction and two fixed points along that direction, computing the rotation matrix of the object relative to its initial state by singular value decomposition, and deriving the rotation angle from that matrix; and obtaining from the server side the displacement of the real object relative to its initial position and the rotation relative to its initial orientation, adjusting the position, computing the current position and orientation of the virtual object, and displaying the virtual object on the client's virtual reality device accordingly.
This real-object interaction method reduces object pose estimation to tracking the object's position on a horizontal desktop and computing its rotation about the axis perpendicular to the desktop, and it requires no labels attached to the object, no templates and no data sets; however, such an interactive desktop system is restricted to objects manipulated on the desktop, which limits its application.
The patent application with publication number CN110288657A discloses a Kinect-based augmented reality three-dimensional registration method, which includes: step 1, calibrating the Kinect color camera and depth camera; step 2, obtaining a color image and a depth image from the Kinect, generating a three-dimensional point cloud, converting the point cloud into a depth map, and repairing the depth map by fusing the point cloud information with the Fast Marching Method (FMM); step 3, aligning the depth image with the color image; step 4, automatically judging whether the scene is in close-range mode based on the depth histogram; step 5, in non-close-range mode, computing the camera pose with a three-dimensional registration method based on Fast ICP (iterative closest point) to complete registration; step 6, in close-range mode, computing the camera pose with a three-dimensional registration method fusing Fast ICP and ORB features to complete registration; and step 7, superimposing the virtual object on the color image of the real scene and displaying the virtual-real overlay.
This method overcomes the limitations of the Kinect hardware, improves the precision of three-dimensional registration and enlarges the application range of the augmented reality system; it infers camera motion by tracking the object's position and thereby estimates the target pose. However, tracking objects with a color camera is limited in a projection interaction environment, and neither the comfort of the human-computer interaction nor the flexibility of the system is high.
Disclosure of Invention
The invention provides a projection-based spatial real-object interactive virtual experiment method which, by means of a pose algorithm based on local features of the real object and a continuity assumption, realizes parallel interaction by multiple users with multiple objects, i.e. a label-free, spatial, real-object interactive virtual experiment.
A projection-based spatial real-object interactive virtual experiment method, wherein the system implementing the method comprises a real object, a depth camera, a data processor and a projector, and the method comprises the following steps:
(1) the data processor constructs an actual experiment scene and a virtual experiment scene containing the real object;
(2) the depth camera acquires depth data of the actual experiment scene and converts it into point cloud data in real time according to the camera's internal parameters;
(3) the data processor segments the real object's point cloud data from the scene point cloud, determines the real object's pose data from it, and corrects the pose of the corresponding digital model in the virtual experiment scene accordingly;
(4) the projector projects the virtual experiment scene, with the digital model's pose corrected, into the actual experiment scene.
In step (2), the depth data is the perpendicular distance from a spatial point to the camera plane. In step (3), the pose data of the real object is determined by the following steps:
In the first step, the point cloud data captured directly from the scene contains a large amount of unnecessary data, so it is processed with a pass-through filtering algorithm: a dimension and a value range along that dimension are designated, the points are traversed in turn, each point's value along the designated dimension is checked against the range, points whose values fall outside the range are deleted, and the points remaining after the traversal form the filtered point cloud data. This algorithm is well suited to removing the invalid operating background.
In the second step, the filtered point cloud data is down-sampled with a voxel filtering algorithm and segmented to obtain the multi-object point cloud data. Voxel filtering reduces the number of points without changing the original geometric structure of the cloud and thereby speeds up detection of the physical objects (a code sketch of these two filtering steps is given after the fifth step below).
In the third step, the multi-object point cloud data is processed: taking the point with the smallest Z coordinate of each object's point cloud as a reference, the local point cloud data around it is taken as the real-time segmented target data for iterative registration.
In the fourth step, the target data and the point cloud data in the virtual scene are registered using the iterative closest point (ICP) algorithm to obtain the pose data of the real object in the depth camera coordinate system.
In the fifth step, the pose data between the depth camera and the projector coordinate system is obtained through camera calibration and combined with the pose data of the real object in the depth camera coordinate system to obtain the pose data of the real object in the projector coordinate system.
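As a minimal sketch of the first two steps (pass-through filtering and voxel down-sampling), the following Python fragment uses NumPy and the Open3D library; the workspace bounds, voxel size and function names are illustrative assumptions rather than values from the patent.

import numpy as np
import open3d as o3d

def pass_through(points: np.ndarray, dim: int, lo: float, hi: float) -> np.ndarray:
    # Keep only points whose coordinate along `dim` (0 = X, 1 = Y, 2 = Z) lies in [lo, hi].
    mask = (points[:, dim] >= lo) & (points[:, dim] <= hi)
    return points[mask]

def filter_and_downsample(points: np.ndarray,
                          z_range=(0.3, 1.2),   # assumed valid depth band in metres
                          voxel_size=0.005) -> o3d.geometry.PointCloud:
    # Step one: pass-through filter along Z removes the invalid operating background.
    points = pass_through(points, dim=2, lo=z_range[0], hi=z_range[1])
    # Step two: voxel-grid down-sampling keeps the geometry while reducing point count.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    return pcd.voxel_down_sample(voxel_size=voxel_size)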
Preferably, in the second step, the filtered point cloud data is down-sampled with a voxel filtering algorithm and then segmented, the segmentation combining Euclidean clustering with a region growing algorithm.
Point cloud data is highly redundant, non-uniform and lacks topological information, so a target object is difficult to segment in an interactive scene. Euclidean clustering uses the distance between neighbouring points as its grouping criterion, while region growing uses information such as normals and curvature to decide whether points should be clustered together.
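As an illustration of the distance-based half of that segmentation, the Python sketch below implements Euclidean clustering with a k-d tree (SciPy assumed); the distance tolerance and minimum cluster size are assumptions, and the normal/curvature-based region growing criterion is omitted.

import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def euclidean_clusters(points: np.ndarray, tol: float = 0.02, min_size: int = 50):
    # Group points whose nearest-neighbour chains stay within `tol` metres of each other.
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        queue, members = deque([seed]), []
        unvisited[seed] = False
        while queue:
            idx = queue.popleft()
            members.append(idx)
            for nb in tree.query_ball_point(points[idx], tol):
                if unvisited[nb]:
                    unvisited[nb] = False
                    queue.append(nb)
        if len(members) >= min_size:
            clusters.append(np.array(members))   # each cluster indexes one candidate object
    return clusters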
Preferably, in the third step, the target data is further filtered with a Kalman filtering algorithm. Because of noise in the point cloud, the position tracked by the depth camera fluctuates randomly even when the physical object's digital model is static, and the noise is amplified during motion prediction; estimating the quasi-static pose of the object's digital model with a Kalman filter reduces this fluctuation during movement.
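A minimal sketch of this smoothing, assuming a constant-position state model with illustrative process and measurement noise covariances (not values from the patent):

import numpy as np

class PositionKalman:
    # Smooths the tracked 3-D translation of a quasi-static object.
    def __init__(self, q: float = 1e-4, r: float = 1e-2):
        self.x = None                  # state: position estimate
        self.P = np.eye(3)             # state covariance
        self.Q = q * np.eye(3)         # process noise (small: object is quasi-static)
        self.R = r * np.eye(3)         # measurement noise of the depth-camera tracking

    def update(self, z: np.ndarray) -> np.ndarray:
        if self.x is None:
            self.x = z.copy()
            return self.x
        # Predict with an identity motion model, then correct with the new measurement z.
        self.P = self.P + self.Q
        K = self.P @ np.linalg.inv(self.P + self.R)    # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(3) - K) @ self.P
        return self.x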
The internal parameters of the depth camera are parameters determined by the characteristics of the camera itself and comprise:
the focal length f, the distance from the camera's focal point to the imaging plane;
the pixel sizes dx and dy, which give the physical size occupied by one pixel in the horizontal and vertical directions of the image and are the key to converting between the image's physical coordinates and the pixel coordinate system;
the coordinates (u0, v0) of the origin of the image coordinate system in the pixel coordinate system, i.e. the offset in pixels, horizontally and vertically, between the image centre and the image origin.
Preferably, in the fourth step, the target data acquired in real time and the point cloud data in the virtual scene are first coarsely registered using the sample consensus initial alignment algorithm based on fast point feature histograms (FPFH).
The accuracy of the iterative closest point algorithm depends strongly on the initial pose of the physical object and its convergence is slow, while the pose change of the object between consecutive frames is very small; therefore, the coarse registration algorithm is applied once when the system starts and ICP registration is then performed on that basis, which effectively improves both registration accuracy and convergence rate.
Wherein the coarse registration algorithm comprises:
(1) first, the direction information of the source point cloud (the point cloud set converted from the digital model) and of the target point cloud (the target point cloud data after processing by the computer) is computed, i.e. their normal vectors are extracted;
(2) fast point feature histogram (FPFH) features of the source and target point clouds are extracted from the normal information;
(3) several sample points P are randomly selected from the target point cloud, points Q whose FPFH features are similar to those of the sample points are found in the source point cloud, and one-to-one correspondences are formed between these points and the sample points;
(4) a rigid transformation matrix M is computed from the corresponding point pairs and applied to the target point cloud so that P' = P × M, and the registration error between P' and Q is computed;
(5) the above steps are repeated and the result is compared with a preset error threshold to decide whether the matrix is the optimal transformation; if so, that matrix defines the initial relative pose between the target data and the point cloud data converted from the three-dimensional model in the virtual scene.
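A sketch of this coarse registration stage against the Open3D pipelines API (Open3D 0.12 or later assumed; the voxel size, search radii and RANSAC parameters are illustrative):

import open3d as o3d

def coarse_register(source: o3d.geometry.PointCloud,
                    target: o3d.geometry.PointCloud,
                    voxel: float = 0.005):
    reg = o3d.pipelines.registration
    # (1) Estimate normals, i.e. the direction information of both clouds.
    for pcd in (source, target):
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    # (2) Extract FPFH descriptors from the normals and local neighbourhoods.
    fpfh = lambda pcd: reg.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    src_fpfh, tgt_fpfh = fpfh(source), fpfh(target)
    # (3)-(5) Sample correspondences, estimate rigid transforms, keep the best one.
    result = reg.registration_ransac_based_on_feature_matching(
        source, target, src_fpfh, tgt_fpfh, True,          # mutual_filter
        3 * voxel,                                          # max correspondence distance
        reg.TransformationEstimationPointToPoint(False),
        4,                                                  # ransac_n
        [reg.CorrespondenceCheckerBasedOnDistance(3 * voxel)],
        reg.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation   # initial relative pose handed to the fine (ICP) stage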
The fine registration algorithm includes:
(1) for each point m_i in the source point cloud M (the point cloud set converted from the digital model), the nearest point n_i in the target point cloud N (the target point cloud data after processing by the computer) is found and taken as its corresponding point in the target cloud, forming the initial corresponding point pairs;
(2) a rotation matrix R and a translation vector T are computed that minimize the mean square error between the corresponding point sets;
(3) a point-pair distance threshold ε and a maximum iteration count max are set; the transformation obtained in the previous step is applied to the source point cloud M to obtain a new point cloud M', and the distance error between M' and N is computed; if the error change between two iterations is less than ε or the current iteration count exceeds max, the iteration ends, otherwise the point sets for registration are updated to M' and N and the steps are repeated until the convergence condition is met, at which point the rotation matrix R and translation vector T are output.
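Correspondingly, a sketch of the fine registration stage with point-to-point ICP in Open3D, seeded by the coarse transform; the distance threshold and iteration cap stand in for ε and max above and are assumptions:

import open3d as o3d

def fine_register(source, target, init_transform, threshold: float = 0.01, max_iter: int = 50):
    reg = o3d.pipelines.registration
    result = reg.registration_icp(
        source, target, threshold, init_transform,
        reg.TransformationEstimationPointToPoint(),
        reg.ICPConvergenceCriteria(max_iteration=max_iter))
    return result.transformation   # 4x4 matrix containing the rotation R and translation T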
In the fifth step, the pose data between the depth camera and projector coordinate systems is obtained through camera calibration. The calibration method is based on structured light and includes: using a planar chessboard as the calibration tool, capturing images with the depth camera's color camera and depth camera respectively, and detecting the chessboard corner positions for each board orientation; projecting structured light onto the chessboard using Gray code plus phase-shift encoding and decoding it; calibrating the internal parameters of the projector and the depth camera with OpenCV's calibrateCamera; and calibrating the external parameters of the projector and the depth camera with OpenCV's stereoCalibrate.
The selected depth camera is an Azure Kinect DK, which comprises a 1-megapixel time-of-flight (ToF) depth camera and a 12-megapixel RGB color camera.
The external parameters of the depth camera are the relative pose between the camera and the world coordinate system, comprising a rotation vector R (a 1×3 vector, or equivalently a 3×3 rotation matrix) and a translation vector T = (Tx, Ty, Tz).
The optical structure of the projector is the same as that of the depth camera but with the light path reversed: the depth camera maps a three-dimensional scene to a two-dimensional image, whereas the projector maps a two-dimensional image into three-dimensional space. In principle, therefore, the projector can be treated as an inverse depth camera to obtain its internal and external parameters.
The internal parameters of the projector are parameters determined by the characteristics of the projector and comprise:
the focal length f, the distance from the projector's focal point to the mapping plane; and the pixel sizes dx and dy, which give the size occupied by one pixel in the horizontal and vertical directions of the projected image;
the coordinates (u0, v0) of the origin of the image coordinate system in the pixel coordinate system, i.e. the offset in pixels, horizontally and vertically, between the image centre and the image origin.
The external parameters of the projector are the relative pose between the projector and the world coordinate system, comprising a rotation vector R (a 1×3 vector, or equivalently a 3×3 rotation matrix) and a translation vector T = (Tx, Ty, Tz).
Because the depth camera cannot capture the projector's structured-light content, the color camera serves as the intermediary for system calibration: the corner positions under the projector's coordinates are computed from the corner positions seen by the color camera, and the relative pose between the depth camera coordinate system and the projector coordinate system is then computed from the corner positions in the depth camera coordinate system and the corresponding corner positions in the projector coordinate system.
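A sketch of this calibration flow with OpenCV is given below. Decoding the Gray-code plus phase-shift pattern into projector pixel coordinates is assumed to happen elsewhere; decode_projector_corners, captures, the board geometry and the projector resolution are hypothetical placeholders.

import cv2
import numpy as np

pattern_size = (9, 6)          # inner chessboard corners (assumed board)
square = 0.025                 # chessboard square size in metres (assumed)
proj_resolution = (1280, 800)  # assumed projector resolution (width, height)

objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square

obj_pts, cam_pts, proj_pts = [], [], []
for color_img, decoded in captures:                    # one entry per board pose (assumed list)
    found, corners = cv2.findChessboardCorners(color_img, pattern_size)
    if not found:
        continue
    obj_pts.append(objp)
    cam_pts.append(corners)
    # Map each detected corner to projector pixels via the decoded structured light.
    proj_pts.append(decode_projector_corners(decoded, corners))   # hypothetical helper

h, w = color_img.shape[:2]
_, K_cam, d_cam, _, _ = cv2.calibrateCamera(obj_pts, cam_pts, (w, h), None, None)
_, K_prj, d_prj, _, _ = cv2.calibrateCamera(obj_pts, proj_pts, proj_resolution, None, None)
# The extrinsics R, T give the relative pose between camera and projector coordinate systems.
ret = cv2.stereoCalibrate(obj_pts, cam_pts, proj_pts, K_cam, d_cam, K_prj, d_prj, (w, h),
                          flags=cv2.CALIB_FIX_INTRINSIC)
R, T = ret[5], ret[6]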
In step (3), correcting the pose of the digital model corresponding to the real object in the virtual experiment scene based on the real object's pose data means that the data processor adjusts the parameters and state of the object in the virtual experiment scene according to the real object's pose in the actual experiment scene, registering the pose data of the actual and virtual experiment scenes.
The adjusted virtual experiment scene is then projected into the actual experiment scene by the projector, realizing the virtual-real fused presentation.
Compared with the prior art, the invention has the beneficial effects that:
1. A pose algorithm based on local features of the real object and a continuity assumption is proposed; local feature matching is applied to rapid detection and six-degree-of-freedom pose computation of real objects, enabling experimental scenes with multi-person cooperation and parallel interaction with multiple objects.
2. The method breaks the constraints of the conventional graphical user interface and fully exploits the virtual-real fusion of augmented reality: building on human perception of the physical environment, it establishes an implicit interaction process grounded in spatial metaphor, physical space and real objects, so that interaction and presentation are completely consistent.
3. The projection-based spatial real-object interactive virtual experiment method imposes no interaction burden, makes interaction intent easy to infer, keeps the input and output spaces unified rather than isolated, and supports cooperative operation by multiple users.
4. The projection-based spatial real-object interactive virtual experiment system is easy to operate; the experimental phenomena are intuitive and the scenes vivid, which fully mobilizes students' interest and enthusiasm during experimental operation.
Drawings
FIG. 1 is a schematic diagram showing positions of an experiment table, a physical object, a computer and a projector in an actual experiment scene according to an embodiment;
FIG. 2 is a flowchart illustrating an embodiment;
FIG. 3 is a photograph of the 3D-printed physical objects in the actual scene of the embodiment;
FIG. 4 is a diagram showing the effect of water flow when a kettle is watered in an actual scene of the embodiment;
FIG. 5 is a diagram showing the effect of water flow in changing the rotation position of the kettle in an actual scene of the embodiment.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention and are not intended to limit the invention in any way.
As shown in FIG. 1, a projection-based spatial real-object interactive virtual experiment system comprises
An experiment table 1;
the physical object 2, which is obtained by three-dimensional modelling and 3D printing according to the object required for the experiment and is placed on the experiment table 1;
the depth camera 3 captures and processes the depth data of the scene where the physical object is located in real time, and transmits the processed depth data to the computer 4;
and the computer 4, which constructs a virtual experiment scene according to the experiment content, the virtual scene containing the digital model corresponding to the real object 2; it adjusts the pose data of the digital model in the virtual scene in real time according to the pose data of the real object 2 and transmits the rendered image of the virtual scene to the projector in real time.
The projector 5 projects the adjusted virtual experiment scene onto the corresponding real object 2 and the experiment table 1 in real time, realizing the virtual-real fused presentation.
The projection area of the projector 5 covers the movable range of the real object, and the field of view of the depth camera 3 covers a larger range than the projection area of the projector.
Examples
In this embodiment, the projection-based spatial real-object interaction virtual experiment method is tested on the contour-line experiment from middle-school geography, following the flowchart of FIG. 2.
Contour lines are abstract: students must imagine complex three-dimensional terrain from a flat two-dimensional map, and for students with a weak sense of space and little practical experience the content is abstract and difficult to master. For this content, the curriculum standard requires that students be able to "identify peaks, ridges and valleys, judge the steepness of slopes, and estimate altitude and relative height" on a contour map.
1. According to the experimental goal of the middle-school contour-line experiment, 3ds Max software on the computer is used to build three-dimensional models of a watering kettle and a mountain containing knowledge points such as peaks, ridges, valleys, saddles and cliffs, and the physical objects are produced with a 3D printer; the Unity3D development engine on the computer is used to build the contour-line virtual experiment scene, which includes the kettle and mountain models, the water-flow simulation effect, and the drawing of the models' contour lines and textures.
2. The depth camera acquires the depth data of the actual experiment scene, i.e. the perpendicular distance from each spatial point to the camera plane, and converts it into point cloud data in real time according to the camera's internal parameters using the following formula:
Zc = d(u, v),   Xc = (u - u0) · dx · Zc / f,   Yc = (v - v0) · dy · Zc / f
wherein f, dx, dy, u0 and v0 are the internal parameters of the depth camera and d(u, v) is the depth value at pixel (u, v); the point cloud coordinate P(Xc, Yc, Zc) in three-dimensional space is thus calculated from the pixel coordinate Pi(u, v) in the depth image.
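A NumPy sketch of this back-projection (the intrinsic values in the usage line are placeholders, not the Azure Kinect's actual parameters):

import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float, u0: float, v0: float):
    # depth: H x W array of Z distances in metres; fx = f/dx and fy = f/dy are the focal
    # lengths in pixels, (u0, v0) the principal point. Returns an N x 3 point cloud.
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - u0) * z / fx
    y = (v - v0) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]              # drop invalid (zero-depth) pixels

# e.g. points = depth_to_point_cloud(depth_m, fx=504.0, fy=504.0, u0=320.0, v0=288.0)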
3. The point cloud captured directly from the scene contains a large amount of unnecessary data, so it is processed with the pass-through filtering algorithm: a dimension and a value range along it are specified, the points are traversed in turn, points whose value along that dimension lies outside the range are deleted, and the remaining points form the filtered point cloud; this removes the experiment table's surroundings and the invalid operating background. The point cloud of the valid operating area is then down-sampled with the voxel filtering algorithm and segmented, the segmentation combining Euclidean clustering with region growing, to obtain the physical point cloud data of the mountain and the kettle. Voxel filtering reduces the number of points without changing the original geometric structure of the cloud and speeds up detection of the physical objects.
4. When the user holds the kettle, the hand interferes with pose detection of the real object; therefore, taking the point with the smallest Z coordinate of the kettle and mountain point clouds as a reference, the local point cloud is taken as the real-time segmented target data for iterative registration.
5. Because of noise in the point cloud, the position tracked by the depth camera fluctuates randomly even when the physical object's digital model is static, and the noise is amplified during motion prediction; the Kalman filtering algorithm is applied to the segmented target data to reduce the noise of the physical object during motion and improve the stability of the pose estimation.
6. Because the accuracy of the ICP algorithm depends strongly on the initial pose of the real object and its convergence is slow, while the pose change of the object between consecutive frames is small, the coarse registration algorithm is used when the system starts to estimate the initial relative pose between the kettle and mountain target data and the point clouds converted from the kettle and mountain three-dimensional models in the virtual scene; ICP fine registration is then performed on that basis to obtain the pose data of the kettle and the mountain in the depth camera coordinate system. In addition, large jitter of a real object is unavoidable at some moments during interaction, so the algorithm must detect whether large jitter has occurred; if so, coarse registration is run again to find a better initial pose estimate before ICP fine registration, which keeps the registration accurate throughout the system's operation (a sketch of this re-initialisation check is given after the registration steps below).
Wherein the coarse registration algorithm comprises:
(1) first, the direction information of the source point cloud (the point cloud set converted from the digital model) and of the target point cloud (the target point cloud data after processing by the computer) is computed, i.e. their normal vectors are extracted;
(2) fast point feature histogram (FPFH) features of the source and target point clouds are extracted from the normal information;
(3) several sample points P are randomly selected from the target point cloud, points Q whose FPFH features are similar to those of the sample points are found in the source point cloud, and one-to-one correspondences are formed between these points and the sample points;
(4) a rigid transformation matrix M is computed from the corresponding point pairs and applied to the target point cloud so that P' = P × M, and the registration error between P' and Q is computed;
(5) the above steps are repeated and the result is compared with a preset error threshold to decide whether the matrix is the optimal transformation; if so, it defines the initial relative pose between the kettle and mountain target data and the point cloud data converted from the kettle and mountain three-dimensional models in the virtual scene.
The fine registration algorithm includes:
(1) for each point m_i in the source point cloud M (the point cloud set converted from the digital model), the nearest point n_i in the target point cloud N (the target point cloud data after processing by the computer) is found and taken as its corresponding point, forming the initial corresponding point pairs;
(2) a rotation matrix R and a translation vector T are computed that minimize the mean square error between the corresponding point sets;
(3) a point-pair distance threshold ε and a maximum iteration count max are set; the transformation obtained in the previous step is applied to the source point cloud M to obtain a new point cloud M', and the distance error between M' and N is computed; if the error change between two iterations is less than ε or the current iteration count exceeds max, the iteration ends, otherwise the point sets for registration are updated to M' and N and the steps are repeated until the convergence condition is met, at which point the rotation matrix R and translation vector T are output.
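As a sketch of the re-initialisation check mentioned in step 6, the following fragment compares consecutive poses and flags large jumps; the translation and rotation thresholds are assumptions:

import numpy as np

def needs_reinit(T_prev: np.ndarray, T_curr: np.ndarray,
                 max_shift: float = 0.05, max_angle_deg: float = 15.0) -> bool:
    # Compare two consecutive 4x4 poses; a large translation or rotation between frames
    # indicates jitter, so the coarse (FPFH) registration should be run again before ICP.
    shift = np.linalg.norm(T_curr[:3, 3] - T_prev[:3, 3])
    R_rel = T_prev[:3, :3].T @ T_curr[:3, :3]
    angle = np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)))
    return shift > max_shift or angle > max_angle_deg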
7. The pose between the depth camera and the projector coordinate system is obtained with the structured-light calibration method, and combined with the real object's pose in the depth camera coordinate system obtained in steps 1-6 to yield the real object's pose in the projector coordinate system.
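Step 7 amounts to composing two rigid transforms; a minimal sketch, with illustrative variable names, is:

import numpy as np

def to_homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    # Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t.ravel()
    return T

# T_cam_obj  : object pose in the depth-camera frame (from coarse + fine registration)
# T_proj_cam : depth-camera pose in the projector frame (from the structured-light calibration)
def object_in_projector(T_proj_cam: np.ndarray, T_cam_obj: np.ndarray) -> np.ndarray:
    return T_proj_cam @ T_cam_obj          # object pose in the projector coordinate system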
8. The computer establishes a digital-twin description oriented to the virtual experiment: a virtual camera whose internal and external parameters are consistent with those of the projector renders the virtual scene, keeping physical and digital space consistent, and the pose information of the kettle and mountain models in digital space is transformed into the projector coordinate system according to the interactive object's pose solved in real time.
9. The computer simulates the corresponding virtual experiment effect in real time according to the kettle's pose and the scene interaction semantics solved in real time, i.e. the water flow and splashes corresponding to the kettle's different poses are projected by the projector onto the experiment table and the corresponding real objects, realizing the virtual-real fused presentation of spatial real-object interaction.
Fig. 3 is a photograph of the 3D-printed physical objects of the embodiment. As seen in Fig. 3, the 3D-printed kettle and mountain are the physical interactive objects: the user can take the kettle from the experiment table, water the physical mountain, and judge the terrain from the way the water flows over its surface.
Fig. 4 shows the water-flow effect when the kettle waters the terrain in the actual scene of the embodiment. As seen in Fig. 4, the physical kettle is used to pour water downwards: where water accumulates is a basin, where the water gathers into a flow channel is a valley, the point from which water flows downward in all directions is a peak, where the water splits towards both sides is a ridge, and where the water drops in a nearly vertical direction is a cliff. The terrain can thus be judged from the state of the water flowing over the mountain surface.
Fig. 5 shows the water-flow effect when the kettle's position and orientation are changed in the actual scene of the embodiment. As seen in Fig. 5, when the physical kettle waters a ridge, the water splits to both sides and splashes are produced, showing that as the kettle's position and attitude change, the water flow rate and state change accordingly.

Claims (8)

1. A projection-based spatial real object interactive virtual experiment method is disclosed, wherein a system for realizing the method comprises a real object, a depth camera, a data processor and a projector, and the method comprises the following steps:
(1) the data processor constructs an actual experiment scene and a virtual experiment scene containing a real object;
(2) after the depth camera acquires the depth data of an actual experimental scene, converting the depth data into point cloud data in real time according to internal parameters of the depth camera;
(3) the data processor divides the point cloud data of the physical object from the point cloud data, determines the pose data of the physical object based on the point cloud data of the physical object, and corrects the pose of the digital model corresponding to the physical object in the virtual experimental scene based on the pose data of the physical object;
(4) and projecting the virtual experiment scene subjected to the pose correction of the digital model into an actual experiment scene by using the projector.
2. The method according to claim 1, wherein in step (2), the depth data is a vertical distance from a spatial point to a camera plane.
3. The projection-based spatial physical interaction virtual experiment method according to claim 1, wherein in step (3), the method for determining the pose data of the physical object comprises the following steps:
firstly, processing the point cloud data with a pass-through filtering algorithm: designating a dimension and a value range along that dimension, traversing the point cloud data in turn, judging whether each point's value along the designated dimension lies within the range, deleting the points whose values do not, and forming the filtered point cloud data from the points remaining after the traversal;
secondly, down-sampling and segmenting the filtered point cloud data by adopting a voxel filtering algorithm to obtain multi-object point cloud data;
thirdly, processing the multi-object point cloud data and, taking the point with the smallest Z coordinate of each object's point cloud as a reference, taking the local multi-object point cloud data as the real-time segmented target data for iterative registration;
fourthly, registering parameters of the target data and point cloud data of the virtual experimental scene by adopting an iterative closest point algorithm to obtain pose data of the physical object in a depth camera coordinate system;
and fifthly, obtaining pose data between the depth camera and the projector coordinate system through camera calibration, and obtaining the pose data of the physical object by combining the pose data of the physical object in the depth camera coordinate system.
4. The projection-based spatial physical interaction virtual experiment method of claim 2, wherein in the second step, the filtered point cloud data is down-sampled with a voxel filtering algorithm and segmented, the segmentation combining Euclidean clustering and region growing.
5. The projection-based spatial physical interaction virtual experiment method of claim 2, wherein in the third step, the target data is obtained by further filtering with a Kalman filtering algorithm.
6. The projection-based spatial physical interaction virtual experiment method of claim 2, wherein in the fourth step, the target data acquired in real time and the point cloud data of the physical object's digital model are first coarsely registered using a sample consensus registration algorithm based on fast point feature histograms.
7. The projection-based spatial physical interaction virtual experiment method of claim 6, wherein in the fourth step, at the start of system operation, the target data acquired in real time and the point cloud data in the virtual scene are registered by performing ICP registration on the basis of the coarse registration algorithm.
8. The projection-based spatial real object interaction virtual experiment method of claim 2, wherein in the fifth step, in obtaining the pose between the depth camera and the projector coordinate system through camera calibration, the camera calibration method is a structured light-based camera calibration method.
CN202110088261.3A 2021-01-22 2021-01-22 Space type real object interaction virtual experiment method based on projection Pending CN113160421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110088261.3A CN113160421A (en) 2021-01-22 2021-01-22 Space type real object interaction virtual experiment method based on projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110088261.3A CN113160421A (en) 2021-01-22 2021-01-22 Space type real object interaction virtual experiment method based on projection

Publications (1)

Publication Number Publication Date
CN113160421A true CN113160421A (en) 2021-07-23

Family

ID=76879115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110088261.3A Pending CN113160421A (en) 2021-01-22 2021-01-22 Space type real object interaction virtual experiment method based on projection

Country Status (1)

Country Link
CN (1) CN113160421A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114396944A (en) * 2022-01-18 2022-04-26 西安塔力科技有限公司 Autonomous positioning error correction method based on digital twinning
CN114396944B (en) * 2022-01-18 2024-03-22 西安塔力科技有限公司 Autonomous positioning error correction method based on digital twinning
WO2023246530A1 (en) * 2022-06-20 2023-12-28 中兴通讯股份有限公司 Ar navigation method, and terminal and storage medium
CN115439634A (en) * 2022-09-30 2022-12-06 如你所视(北京)科技有限公司 Interactive presentation method of point cloud data and storage medium
CN115439634B (en) * 2022-09-30 2024-02-23 如你所视(北京)科技有限公司 Interactive presentation method of point cloud data and storage medium

Similar Documents

Publication Publication Date Title
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
US11721067B2 (en) System and method for virtual modeling of indoor scenes from imagery
CN110799991B (en) Method and system for performing simultaneous localization and mapping using convolution image transformations
CN107292965B (en) Virtual and real shielding processing method based on depth image data stream
CN108986136B (en) Binocular scene flow determination method and system based on semantic segmentation
US11210804B2 (en) Methods, devices and computer program products for global bundle adjustment of 3D images
CN110288657B (en) Augmented reality three-dimensional registration method based on Kinect
CN113160421A (en) Space type real object interaction virtual experiment method based on projection
Asayama et al. Fabricating diminishable visual markers for geometric registration in projection mapping
CN112053447B (en) Augmented reality three-dimensional registration method and device
CN111401266B (en) Method, equipment, computer equipment and readable storage medium for positioning picture corner points
CN110941996A (en) Target and track augmented reality method and system based on generation of countermeasure network
WO2021048985A1 (en) Image processing device, image processing method, and program
CN112489193A (en) Three-dimensional reconstruction method based on structured light
CN111914790B (en) Real-time human body rotation angle identification method based on double cameras under different scenes
CN114935316B (en) Standard depth image generation method based on optical tracking and monocular vision
CN111899293B (en) Virtual and real shielding processing method in AR application
CN114863021A (en) Simulation data set analysis method and system based on three-dimensional reconstruction scene
Xiao et al. 3d object transfer between non-overlapping videos
Brunken et al. Incorporating Plane-Sweep in Convolutional Neural Network Stereo Imaging for Road Surface Reconstruction.
CN117593618B (en) Point cloud generation method based on nerve radiation field and depth map
CN117011493B (en) Three-dimensional face reconstruction method, device and equipment based on symbol distance function representation
CN112633300B (en) Multi-dimensional interactive image feature parameter extraction and matching method
Li et al. A real-time collision detection between virtual and real objects based on three-dimensional tracking of hand
Madaras et al. Position estimation and calibration of inertial motion capture systems using single camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination