CN117649505A - Augmented reality experiment teaching system guided by entity model - Google Patents

Augmented reality experiment teaching system guided by entity model

Info

Publication number
CN117649505A
Authority
CN
China
Prior art keywords
pose
virtual
experiment
model
experimental
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311592193.XA
Other languages
Chinese (zh)
Inventor
刘威
肖文哲
徐晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN202311592193.XA
Publication of CN117649505A
Legal status: Pending

Abstract

The invention discloses a physical-model-guided augmented reality experiment teaching system, which belongs to the field of digital experiment teaching. By applying augmented reality technology to experiment teaching, the system achieves low teaching cost, rich experimental content, high safety, strong operability, strong interactivity, and a strong immersive experience. Students perform augmented reality experiments by manipulating physical objects, gaining better hands-on experience, improving the precision of their experimental operations, and reinforcing correct use of experimental instruments. In addition, depth cameras arranged around the laboratory track and monitor student actions and the poses of the virtual models from all directions during the experiment, so that camera field-of-view limits or occlusion cannot interrupt the course of the experiment, enhancing its stability and reliability.

Description

Augmented reality experiment teaching system guided by entity model
Technical Field
The invention belongs to the field of digital experiment teaching, and particularly relates to a physical-model-guided augmented reality experiment teaching system.
Background
In modern education, much attention is paid to how students can efficiently put textbook knowledge into practice through experiments. Traditional experiment teaching requires a school laboratory to be equipped with a sufficient number of various experimental instruments to meet student demand. Students are limited to specific class periods for hands-on experience, more expensive instruments are difficult to provide widely, and purchase and maintenance costs are high. Moreover, dangerous experimental materials and instruments pose safety hazards in experiment teaching.
One existing solution is virtual reality (VR). Mature virtual reality experiment systems have already been put into use: students enter a fully virtual experiment scene by wearing virtual reality equipment, operate virtual experimental devices, and observe virtual experimental phenomena. Compared with traditional experiment teaching, students can take part in complex experiments as if on site, without expensive or dangerous instruments, reducing both safety risk and resource cost. Virtual experiments can simulate different scenarios and conditions at low cost, bringing a richer, safer, and more interactive experimental experience.
However, virtual experiment teaching systems still have drawbacks. Students must wear a virtual reality headset throughout the experiment and operate with handheld controllers; the equipment is heavy and inconvenient. For instruments with strict and complex operating requirements, manipulating the virtual model through controllers is not genuine hands-on operation of a real object, so students can hardly feel how the real experimental object is used and find it difficult to learn instrument operation effectively. The hands-on aspect of the experiment is therefore weakened, and the experience of working with physical instruments lacks fineness.
Augmented reality (Augmented Reality, AR for short) superimposes virtual elements on the real world to enhance the user's perception and interactive experience. In augmented reality, the user continues to perceive the real world while the virtual elements interact with it. As the computing power of portable electronic products improves, augmented reality is expected to see increasingly wide use.
An experiment teaching system based on augmented reality can retain the advantages of virtual reality while adding the process of students operating real objects, giving a stronger hands-on experience. Augmented reality experiment teaching systems have already been proposed, but their target detection and the registration tracking of virtual three-dimensional models during the experiment are restricted to images shot by a fixed camera with a small field of view; when a physical object is occluded, the augmented reality information is lost and the experiment cannot proceed normally. Moreover, the augmented reality experiment information is displayed on an external screen, so immersive hands-on operation is weak. Current augmented reality experimental methods and systems therefore have clear limitations.
Disclosure of Invention
In view of the above defects or improvement demands of the prior art, the invention provides a physical-model-guided augmented reality experiment teaching system, which retains the advantages of virtual experiment systems, such as rich content, safety, and strong interactivity, while using augmented reality technology to enhance students' hands-on experience and fine operation of experimental instruments and other objects during experiments.
To achieve the above object, according to a first aspect of the present invention, there is provided a physical-model-guided augmented reality experiment teaching system, comprising:
the physical models, each having the same size and shape as a real experimental instrument and bearing an identifier on its surface;
the three-dimensional registration module, which is used for identifying the identifiers on the physical models acquired by the AR glasses camera so as to determine the virtual models of the real experimental instruments corresponding to the physical models;
the processing module, which is used for superposing the virtual model corresponding to each identifier onto that identifier according to the pose of the identifier in the AR glasses camera, so as to display the virtual models on the screen of the AR glasses;
the tracking module, which is used for calculating the real-time pose of each physical model according to the three-dimensional point cloud scene information of the laboratory so as to synchronously update the pose of the corresponding virtual model, the processing module updating and rendering the virtual model according to its pose information; the three-dimensional point cloud scene information is acquired by a plurality of depth cameras;
the action judging module, which is used for determining, when the AR glasses camera captures a preset hand action, the physical model to which the preset hand action is applied according to the three-dimensional point cloud scene information, and applying the preset hand action to the corresponding virtual model so as to update its display state;
the state discrimination module comprises:
the experimental state judging unit, which is used for updating, when the pose relation among the virtual models meets a preset pose condition, the display state of the virtual model whose display state changes under that pose condition;
and the experiment error judging unit, which is used for issuing a reminder when an erroneous experimental operation is determined according to the pose of a virtual model or the poses between the virtual models.
According to a second aspect of the present invention, there is provided an operation method of the physical-model-guided augmented reality experiment teaching system of the first aspect, comprising:
S1, when a student looks at the physical models on the experiment operation table, the three-dimensional registration module identifies the identifiers on the physical models, and the processing module superimposes the corresponding virtual model onto each identifier according to the pose of the identifier in the AR glasses camera, so that the virtual models are displayed on the screen of the AR glasses;
S2, when students operate the physical models to simulate real experimental operations, the tracking module calculates the real-time pose of each physical model according to the three-dimensional point cloud scene information of the laboratory so as to synchronously update the pose of the corresponding virtual model, and the processing module updates and renders the virtual models according to their pose information; when the AR glasses camera captures a preset hand action, the action judging module determines the physical model to which the action is applied according to the three-dimensional point cloud scene information and applies the action to the corresponding virtual model so as to update its display state; when the pose relation among the virtual models meets a preset pose condition, the experimental state judging unit updates the display state of the virtual model whose display state changes under that condition; and the experiment error judging unit issues a reminder when an erroneous experimental operation is determined according to the pose of a virtual model or the poses between the virtual models;
S3, after the students complete the experiment, the experimental process recorded by the AR glasses camera is saved.
In general, the above technical solutions conceived by the present invention, compared with the prior art, enable the following beneficial effects to be obtained:
the system provided by the invention applies the augmented reality technology to the field of experimental teaching, has the advantages of low experimental teaching cost, rich experimental content, high experimental safety, strong experimental performability, strong interactivity and strong immersive experience, and students perform the augmented reality experiment by operating physical objects, so that better practical experience is realized, fine experimental operation of the students is improved, and the use standard of the students to experimental instruments is enhanced. In addition, through the camera of arranging around the laboratory, student's action and virtual model's state in the experimental process are all-round detected, can not influence the process of experiment because the camera field of vision or shelter from the problem.
Drawings
Fig. 1 is a schematic structural diagram of the physical-model-guided augmented reality experiment teaching system according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of augmented reality glasses according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an experimental environment provided in an embodiment of the present invention.
Fig. 4 is a diagram showing the composition of an object state discrimination module of an augmented reality experiment teaching system guided by a solid model according to an embodiment of the present invention.
Fig. 5 is a data transmission flow chart of the physical-model-guided augmented reality experiment teaching system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
The embodiment of the invention provides a physical-model-guided augmented reality experiment teaching system, as shown in fig. 1, comprising:
the physical models, each having the same size and shape as a real experimental instrument and bearing an identifier on its surface.
Specifically, the physical models carrying identifiers are placed in the real experimental environment. They simulate real experimental equipment: the appearance is similar, but the manufacturing material differs, using inexpensive, environmentally friendly materials. Students control the progress of the whole experiment by manipulating the physical models.
Before the experiment, a number of physical models simulating real experimental instruments are prepared according to the experimental requirements. Each physical model is similar in size and shape to the corresponding real instrument, imitates it, and can be assembled and connected in the same way; for example, the wire model can be connected to the power supply model, and the test tube clamp model can clamp the test tube model. The physical models are light and safe, which facilitates the experiment.
Different kinds of manual identifiers (i.e., markers) are pasted onto different kinds of physical models. The manual identifiers adopt a binary code design, with different binary codes presenting different black-and-white patterns, and each coded identifier corresponds to one class of experimental instrument. During the experiment, students operate the physical models carrying identifiers just as they would operate real experimental instruments.
Because the physical models closely imitate the size and shape of the corresponding real instruments, students operate them more realistically, experience the experimental process more fully, and better learn the usage specifications of the instruments.
The three-dimensional registration module is used for identifying the identifiers on all physical models acquired by the AR glasses camera, determining the virtual three-dimensional model of the real experimental instrument corresponding to each physical model, and superposing the virtual three-dimensional model onto the corresponding identifier according to the identifier's pose in the AR glasses camera.
Specifically, the three-dimensional registration module identifies the identifiers on the physical models, finds the three-dimensional models of the virtual experimental devices corresponding to them, and calculates the pose of each physical model's identifier relative to the scene camera of the augmented reality glasses, whereby the virtual three-dimensional model is superimposed onto the identifier at that pose.
Preferably, the three-dimensional registration module uses a three-dimensional registration method based on manual markers from computer vision. According to the size of the physical object and the type of the corresponding virtual experimental instrument, manual markers of different sizes and identification numbers are pasted on the physical models for three-dimensional registration of the virtual objects. Marker-based tracking registration has small error, short time consumption, low algorithmic complexity, and high stability.
For a relatively thin physical model, such as one simulating a wire, the manual marker is printed on a cable label, and the label is looped around the model to perform three-dimensional registration.
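As a non-authoritative illustration of how such marker-based registration might look, the following Python sketch uses OpenCV's ArUco module (assuming ArUco-style binary square markers, OpenCV 4.7 or later, a calibrated glasses camera, and an illustrative marker-to-instrument table; the patent itself does not prescribe a specific library):
    import cv2
    import numpy as np

    MARKER_LEN = 0.04  # marker side length in metres (assumed)
    # Illustrative mapping from marker ID to virtual instrument model.
    MARKER_TO_MODEL = {0: "alcohol_lamp", 1: "test_tube", 2: "multimeter"}

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    # 3D corners of a square marker centred at its own origin (Z = 0 plane),
    # in the top-left, top-right, bottom-right, bottom-left order ArUco uses.
    OBJ_PTS = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                       dtype=np.float32) * MARKER_LEN / 2

    def register_markers(frame, camera_matrix, dist_coeffs):
        """Detect markers in one AR-glasses frame; return (model, rvec, tvec)."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = detector.detectMarkers(gray)
        poses = []
        if ids is not None:
            for quad, marker_id in zip(corners, ids.flatten()):
                ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, quad.reshape(4, 2),
                                              camera_matrix, dist_coeffs)
                if ok and marker_id in MARKER_TO_MODEL:
                    # rvec/tvec give the marker pose in the camera frame,
                    # i.e. where to superimpose the virtual model.
                    poses.append((MARKER_TO_MODEL[marker_id], rvec, tvec))
        return poses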
The processing module is used for superposing the corresponding virtual three-dimensional model onto each identifier according to the pose of the identifier in the AR glasses camera, so that the virtual three-dimensional models are displayed on the screen of the AR glasses; the screen of the AR glasses displays the augmented reality picture with the virtual three-dimensional models superimposed.
Specifically, students wear the augmented reality glasses during the experiment, and the screen of the AR glasses renders and displays the augmented reality picture overlaid with virtual information.
Preferably, the screen of the AR glasses is also used to display the experimental guidelines and the experimental notes before the start of the experiment.
Preferably, the diopter of the AR glasses screen is adjustable: a nearsighted student who normally wears glasses can take off his or her own glasses and wear only the augmented reality glasses, avoiding the inconvenience of wearing two pairs during the experiment.
A student wearing the augmented reality glasses in the laboratory can adjust the knob on the glasses frame to set parameters such as diopter, brightness, and contrast of the AR glasses screen as needed; the screen then displays the virtual objects (i.e., the virtual three-dimensional models) superimposed on the real picture.
The tracking module calculates the real-time pose of each physical model according to the three-dimensional point cloud scene information of the laboratory and synchronously updates, according to that real-time pose, the pose of the virtual three-dimensional model of the real experimental instrument corresponding to each physical model; the processing module updates and renders the virtual models according to their pose information so that they are synchronously displayed on the screen of the AR glasses. The three-dimensional point cloud scene information is acquired by a plurality of depth cameras arranged around the laboratory.
preferably, the tracking module performs point cloud reconstruction according to three-dimensional point cloud scene information of a laboratory, and updates the pose of the registered virtual three-dimensional model in real time through point cloud fusion and point cloud registration.
Specifically, one depth camera is arranged at each of the four corners of the experimental scene (i.e., the laboratory), on the ceiling; the cameras are calibrated before the experiment and their intrinsic and extrinsic parameters are computed, after which the depth cameras perform an omnidirectional point cloud reconstruction of the whole experimental environment, yielding a three-dimensional reconstruction of the entire classroom.
The model tracking module (i.e., the tracking module) reconstructs the point clouds of all physical models in the experimental environment in real time through the cameras deployed around the laboratory, calculates the poses of the registered virtual models (i.e., the virtual three-dimensional models) in real time through point cloud fusion and point cloud registration algorithms, and re-renders and displays them.
Preferably, the origin of the laboratory's global three-dimensional coordinate system is chosen at one corner of the laboratory, ensuring that all reconstructed point cloud coordinates are positive, which facilitates subsequent computation and point cloud tracking.
The depth cameras deployed at the four corners of the laboratory reconstruct the point cloud of the whole scene in real time. The depth image captured by a single depth camera is mapped into three-dimensional space to generate that camera's point cloud; to obtain the point cloud of the whole scene, the point clouds acquired by the four cameras are stitched together. The model tracking module transforms the four point clouds from their camera coordinate systems into the laboratory's global coordinate system; the pose of each depth camera in the global coordinate system and its intrinsic matrix are calibrated before the experiment.
Point cloud filtering and preprocessing are performed on the four point clouds to remove noise, and features such as normal vectors, curvature, and color are extracted. Using these features for matching, a point cloud registration algorithm (e.g., nearest-neighbor matching or the iterative closest point (ICP) algorithm) aligns the point clouds from the four cameras to the laboratory's global coordinate system and fuses them into a point cloud of the entire laboratory environment, including all physical models with identifiers as well as the students' three-dimensional point cloud data.
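A minimal sketch of this stitching step, assuming Open3D, calibrated intrinsics (o3d.camera.PinholeCameraIntrinsic), and camera-to-global extrinsic matrices obtained during pre-experiment calibration (the function names here are illustrative, not the patent's own):
    import numpy as np
    import open3d as o3d

    def depth_to_global_cloud(depth_m, intrinsic, cam_to_global):
        """Back-project one depth image and move it into the lab's global frame."""
        img = o3d.geometry.Image(depth_m.astype(np.float32))
        cloud = o3d.geometry.PointCloud.create_from_depth_image(
            img, intrinsic, depth_scale=1.0)
        return cloud.transform(cam_to_global)  # camera frame -> global frame

    def fuse_clouds(clouds, voxel=0.005):
        """Merge the four camera clouds, ICP-refining on top of calibration."""
        fused = clouds[0]
        for c in clouds[1:]:
            reg = o3d.pipelines.registration.registration_icp(
                c, fused, max_correspondence_distance=0.02,
                estimation_method=o3d.pipelines.registration
                    .TransformationEstimationPointToPoint())
            fused += c.transform(reg.transformation)
        fused = fused.voxel_down_sample(voxel)          # thin out duplicates
        fused, _ = fused.remove_statistical_outlier(    # drop noise points
            nb_neighbors=20, std_ratio=2.0)
        return fused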
The four depth cameras stream their captured frames back to a server for processing in real time. The point clouds of the physical models with identifiers are reconstructed in real time, the poses of all physical models in the laboratory space are obtained using a simultaneous localization and mapping (SLAM) algorithm, the positions and orientations of the corresponding virtual models are updated, and the virtual models are re-rendered on the screen of the AR glasses; that is, the registered virtual models are tracked. During the experiment, pose changes of the physical models caused by student operations are reconstructed and tracked by the depth cameras, and the corresponding virtual models are re-rendered. The server connected to the cameras has strong computing power and guarantees a high frame rate, so the virtual experimental models change smoothly as students operate the physical models and the experiment proceeds fluently.
The action judging module is used for determining, when the AR glasses camera captures a preset hand action, the physical model to which the preset hand action is applied according to the three-dimensional point cloud scene information, and applying the preset hand action to the corresponding virtual model so as to update its display state.
Specifically, the student action discriminating module (i.e., the action judging module) recognizes common actions and gestures of students in experiments and responds accordingly.
The student action discriminating module recognizes common actions and preset gestures of students during the experiment. Student actions can be designed for corresponding physical models, such as pouring a liquid, pressing a button, turning a dial, striking a match, or squeezing a dropper. A student's action must fall within the field of view of the augmented reality glasses camera; the module processes the images shot by that camera and recognizes the action with an object detection and action recognition algorithm combined with Kalman filtering. Meanwhile, the hand point cloud is reconstructed using the depth cameras to accurately estimate the hand position and determine which experimental instrument the student's action operates on. The module then responds to the gesture's operation on that instrument to advance the experiment.
For example, a student twists the dial on the physical model of a multimeter to change its usage mode. The student action discriminating module recognizes the twisting action and the multimeter model, calculates the rotation angle of the dial pointer from the amplitude and duration of the twist, and switches the mode of the multimeter's virtual three-dimensional model, as sketched below. The action discriminating module can also recognize simple student gestures. During an experiment, students may view teaching videos related to the experiment; these resources are displayed on the screen of the AR glasses together with the experiment guide, and students can use gestures to pause or resume video playback, scroll the guide up and down, open or close it, and so on. The recognizable action gestures are designed and annotated before the experiment, and a recognition model is obtained by training with a deep learning framework.
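A hypothetical sketch of how a recognized twist gesture might be mapped onto the dial of the multimeter's virtual model (the dial layout, spacing, and state dictionary are all assumptions for illustration):
    DIAL_MODES = ["OFF", "DCV", "ACV", "OHM", "DCA"]  # assumed dial positions
    DEG_PER_STOP = 30.0                               # assumed dial spacing

    def apply_twist(multimeter_state, amplitude_deg, duration_s):
        """Advance the virtual dial by whole stops from twist amplitude/duration."""
        stops = int(round(amplitude_deg * min(duration_s, 1.0) / DEG_PER_STOP))
        idx = max(0, min(len(DIAL_MODES) - 1,
                         multimeter_state["dial_index"] + stops))
        multimeter_state["dial_index"] = idx
        multimeter_state["mode"] = DIAL_MODES[idx]       # drives re-rendering
        multimeter_state["dial_angle_deg"] = idx * DEG_PER_STOP
        return multimeter_state

    # e.g. apply_twist({"dial_index": 0}, amplitude_deg=65, duration_s=0.8)
    #      advances the dial two stops to "ACV"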
The state discrimination module comprises:
The experimental state judging unit is used for updating, when the pose relation among the virtual models meets a preset pose condition, the display state of the virtual model whose display state changes under that condition. That is, the experimental state judging unit monitors and judges the pose relations between the virtual experimental instruments, such as placement relations (e.g., the arrangement order of chemical apparatus) and connection relations (e.g., a wire connected to a power supply), and updates a virtual three-dimensional model when its display state is judged to have changed according to the pose relations among the virtual three-dimensional models.
Specifically, as the student operates the physical models to push the experiment forward, the pose relations between the physical models change. The experimental state judging unit monitors the pose relations between the virtual models in real time; when a pose relation meets a preset pose condition, it determines which virtual model's display state changes under that condition and updates its display state. Examples of display state changes: the alcohol lamp is ignited, the small bulb lights up, the switch is closed, and so on. If the unit determines that the pose relation meets the preset pose condition, it updates the display state of the affected virtual model, triggering the behavior or animation of the corresponding virtual experimental instrument model, or rendering a new virtual model in the changed state.
Taking the alcohol lamp as an example: as a student picks up the match model and moves it over the alcohol lamp model to light the lamp, the model tracking module tracks and updates the pose of the match in space, and the match's virtual three-dimensional model gradually moves onto the virtual alcohol lamp model. The experimental state judging unit monitors the coordinate relation between the two virtual three-dimensional models and their point cloud data. When the distance between the match model and the alcohol lamp model is smaller than a set threshold and their point clouds intersect to a large extent, it judges that the match has ignited the lamp; that is, according to the preset pose condition it determines that the virtual model whose state changes is the alcohol lamp model. The unit's script then controls the ignition behavior of the virtual alcohol lamp model and displays a burning effect on the AR glasses screen; in other words, the display of the alcohol lamp's virtual model is updated by adding a three-dimensional flame effect, which can be understood as removing the flameless alcohol lamp model and re-rendering a burning one at the same position.
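A sketch of that pose condition, assuming NumPy/SciPy and per-model point clouds already expressed in the global frame (the thresholds are illustrative placeholders, not values from the patent):
    import numpy as np
    from scipy.spatial import cKDTree

    DIST_THRESHOLD = 0.05   # max centre-to-centre distance in metres (assumed)
    OVERLAP_RATIO = 0.3     # required fraction of intersecting points (assumed)

    def match_ignites_lamp(match_pts, lamp_pts):
        """Centres close enough AND point clouds intersect to a large extent."""
        if np.linalg.norm(match_pts.mean(0) - lamp_pts.mean(0)) > DIST_THRESHOLD:
            return False
        # Share of match points lying within 1 cm of some lamp point.
        dists, _ = cKDTree(lamp_pts).query(match_pts, distance_upper_bound=0.01)
        return np.mean(np.isfinite(dists)) >= OVERLAP_RATIO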
As a further preference, the experimental state judging unit monitors both the pose relations and the association relations between the virtual models in real time; when a pose relation meets the preset pose condition and the association relation meets a preset association condition, it determines the virtual model whose display state changes under those conditions and updates its display state.
Taking a circuit experiment as an example: after the pose relations determine that each component's virtual model is connected to the wire virtual models, the components must additionally be connected correctly, i.e., the connection relations or connection order among the virtual models (the association relation) must be correct, before it can be concluded that the circuit is closed and works properly, so that a bulb in the circuit lights up. In other words, when the pose relations between the virtual models meet the preset pose condition and the association relation meets the preset association condition, the virtual model whose display state changes under those conditions is determined and its display state is updated. Other types of experiments work in the same way.
Specifically, when a student wires a simple bulb-and-power-supply circuit, the experimental state judging unit checks the coordinate relations and point cloud data between the circuit devices, such as the power supply, the bulb, the switch, and the positive and negative terminals. When the distance between a device's virtual model and a wire's virtual model is smaller than a set threshold and their point clouds largely overlap, the device is judged to be connected to the wire. The unit then checks whether the connection order between the component virtual models and the wire virtual models meets the preset association condition; on that basis it can traverse all connected electronic component models and evaluate the state of the whole circuit. When the unit determines that the pose relations between the component models and the wire models meet the preset pose condition and the association relation meets the preset association condition, it concludes that the whole circuit is correctly connected and judges that the bulb lights up; that is, according to the preset pose and association conditions it determines that the virtual model whose state changes is the bulb model, which is then displayed with a lighting effect on the AR glasses screen. A minimal connectivity check of this kind is sketched after this paragraph.
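A minimal sketch of that traversal, using simple reachability over the accepted connections as a stand-in for a full loop check (the component names are illustrative):
    from collections import defaultdict, deque

    def bulb_lights_up(connections, source="power_supply", sink="bulb"):
        """Breadth-first search over connections the unit has judged valid."""
        graph = defaultdict(set)
        for a, b in connections:
            graph[a].add(b)
            graph[b].add(a)
        seen, queue = {source}, deque([source])
        while queue:
            for nxt in graph[queue.popleft()] - seen:
                seen.add(nxt)
                queue.append(nxt)
        return sink in seen

    # e.g. bulb_lights_up([("power_supply", "switch"), ("switch", "bulb"),
    #                      ("bulb", "power_supply")])  -> True: render lit bulb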
Every modeled three-dimensional model has its own discrimination conditions, response methods, and matching physical models and behaviors, all developed before the experiment begins. The preset pose conditions that the pose relations between the various virtual models must meet, and the virtual models whose display states change when those conditions are met, can be preset according to the actual situation; likewise, the preset pose conditions, preset association conditions, and the virtual models whose display states change when both are met can be preset according to the actual situation.
During the experiment, as students operate the physical objects, the pose states of the virtual three-dimensional models change. The experimental state judging unit judges the display states of the virtual models from their real-time pose relations: the model tracking module computes the pose relations between the virtual objects in real time, and the experimental state judging unit compares coordinates and point cloud data to decide whether a pose relation meets the preset pose condition.
For example, a student places the physical model of a chemical instrument onto the physical model of a pallet balance or an iron stand. The model tracking module updates and computes each object's pose in the laboratory coordinate system in real time. The experimental state judging unit then checks whether the Y-axis coordinate y1 of the center of virtual three-dimensional model A and the Y-axis coordinate y2 of the center of virtual three-dimensional model B satisfy |y1 - y2| ≤ (l1 + l2)/2 + θ, where l1 and l2 are the heights of the two three-dimensional models and θ is a set threshold; if so, it determines that model A and model B are placed one above the other. The threshold θ prevents errors in pose calculation from making the state discrimination condition impossible to satisfy exactly, which would needlessly stall the experimental process and harm the experience. The experimental state judging unit then directs the virtual reality engine to perform a model transformation so that the two virtual models merge into one, which is rendered and displayed on the AR glasses screen; afterwards, moving either physical model moves and renders the merged virtual model as a whole. A sketch of this placement check follows.
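The stacking condition above can be written directly as a small predicate (a sketch; the coordinates are model centres in the laboratory frame, with Y as the vertical axis):
    def placed_above_below(center_a, center_b, height_a, height_b, theta=0.01):
        """|y1 - y2| <= (l1 + l2)/2 + theta, per the description; theta absorbs
        pose-estimation error so the condition is not missed by a hair."""
        return abs(center_a[1] - center_b[1]) <= (height_a + height_b) / 2 + theta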
Alternatively, when two experimental objects have a connection relationship, the experimental state judging unit judges whether the two objects are connected and responds accordingly by changing the display state of the virtual experimental instruments.
For example, when a student connects a circuit, the student moves the physical wire model so that one end approaches the positive or negative terminal of the corresponding electronic component model (for example, a power supply); as the physical model moves, the virtual wire model continuously approaches the virtual power supply model. When the student inserts one end of the wire model into one pole of the power supply model, the experimental state judging unit judges whether the two are connected by image recognition on the augmented reality glasses camera combined with the point cloud data. If they are judged connected, the experimental state judging unit similarly directs the virtual reality engine to perform a model transformation, merges the two virtual models into one, and switches the power supply to the on state. Other devices and other interrelationships operate and are discriminated in similar ways.
The experimental error judging unit is used for issuing a reminder when an erroneous experimental operation is determined according to the poses of the virtual three-dimensional models.
Specifically, through the point cloud reconstruction of the physical models by the surrounding depth cameras and the detection of student actions, the experimental error judging unit judges whether errors occur while students operate the physical objects, based on the pose of each virtual three-dimensional model and the pose relations among the models. If an error is found, a corresponding prompt is issued through the voice prompt module.
During the experiment, the experimental error judging unit monitors the poses and states of all virtual three-dimensional models; obvious errors can be detected and the students reminded through the voice prompt module. For example, an incorrectly placed physical model causes the rendered virtual experimental device to be presented in an incorrect pose on the display; or the placement or connection relations between physical objects are incorrect, such as an object upside down or positive and negative terminals reversed, producing an erroneous interrelation of the virtual experimental devices. When such situations occur, the experimental error judging unit accurately detects them and outputs the corresponding audio to prompt the students through the voice prompt module.
For example, if a student places the measuring cylinder's physical model over the lit alcohol lamp's physical model, the experimental error judging unit examines the coordinates of the two virtual models; when their x and z coordinates are similar and only their y coordinates differ substantially, it detects the error and has the voice prompt module announce the error message, as sketched below. The errors to be detected and the corresponding response audio can be set and edited before the experiment as needed.
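That error condition might be checked roughly as follows (a sketch; the tolerance and the voice-prompt hook are hypothetical):
    XZ_TOL = 0.03  # metres; assumed alignment tolerance

    def cylinder_over_flame(cyl_center, lamp_center):
        """x and z nearly equal while the cylinder sits higher on the y axis."""
        return (abs(cyl_center[0] - lamp_center[0]) <= XZ_TOL
                and abs(cyl_center[2] - lamp_center[2]) <= XZ_TOL
                and cyl_center[1] > lamp_center[1])

    # On detection, the unit would hand a pre-edited clip to the voice prompt
    # module, e.g. voice_prompt.play("do_not_heat_measuring_cylinder.wav")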
Preferably, the system further comprises:
and the voice prompt module is used for carrying out voice prompt when error experiment operation occurs.
Preferably, the system further comprises:
and the voice interaction module is used for collecting and recognizing the voice of the AR glasses wearer, recognizing the keywords therein and responding.
Specifically, the voice interaction module plays voice through the speaker of the augmented reality glasses to guide students through the experiment; it can also recognize keywords spoken by students and respond accordingly.
Preferably, in addition to high-quality directional speakers, the augmented reality glasses are equipped with a directional audio system, so that the voice interaction module collects and recognizes only the sounds made by the wearer of the glasses, unaffected by environmental noise or other students speaking, meeting the requirement of multiple students experimenting simultaneously.
The teacher may set, before the experiment, the keywords students can speak and the responses to the corresponding keywords. When the voice interaction module detects that a student wearing the glasses has spoken a preset keyword, it recognizes the keyword and feeds back according to the settings. Preferably, the keywords include experimental instrument names, experiment names, and screen control instructions of the AR glasses;
when the keyword identified by the voice interaction module is the name of the experimental instrument, displaying the introduction video of the experimental instrument through the screen of the AR glasses;
when the keywords identified by the voice interaction module are experiment names, the teaching video of the experiment is displayed on the screen of the AR glasses.
The student voice interaction module (i.e., the voice interaction module) recognizes the voice of the augmented reality glasses wearer through the directional audio system and filters out external voice interference. A student can speak a preset keyword; after the recognition module identifies it, the voice interaction module retrieves the corresponding resource from the database and responds accordingly.
For example, a student speaks the name of an experimental instrument; after recognition, the introduction video of that instrument is displayed on the screen of the AR glasses for the student to watch and learn. Similarly, if the student speaks the name of an experiment, such as "voltammetric resistance measurement", the teaching video of that experiment is displayed on the screen of the AR glasses. Students can thus quickly obtain relevant knowledge through voice recognition to advance the experiment. In addition, the local host can be controlled by voice, much like a mobile phone voice assistant: saying "turn down" lowers the brightness of the AR glasses screen, and "turn up the sound" raises the volume of the voice prompt module. A hypothetical keyword table is sketched below.
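A hypothetical keyword table of this kind, with dispatch to the local host (all resource paths, method names, and phrases are illustrative assumptions):
    KEYWORD_RESPONSES = {
        "multimeter":              ("play_video", "instruments/multimeter.mp4"),
        "voltammetric resistance": ("play_video", "experiments/voltammetry.mp4"),
        "turn down":               ("adjust_brightness", -1),
        "turn up the sound":       ("adjust_volume", +1),
    }

    def handle_keyword(recognised_text, host):
        """Match recognised speech against the table and dispatch the response."""
        for keyword, (action, arg) in KEYWORD_RESPONSES.items():
            if keyword in recognised_text:
                getattr(host, action)(arg)  # host is the phone acting as local host
                return True
        return False                        # no preset keyword matched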
Preferably, the action judging module and the state discrimination module are integrated in a server connected to the depth cameras;
the three-dimensional registration module is integrated in the server, or in a smartphone with DisplayPort (DP) output connected to the AR glasses;
the voice prompt module and the voice interaction module are integrated in the AR glasses, or in the smartphone with DP output connected to the AR glasses.
Specifically, students wear the augmented reality glasses during the experiment, wired to a smartphone with DP output. Wearing the glasses for experimental operation is light and convenient, and the phone, serving as the local host, is easy for students to carry during the experiment.
Preferably, a dedicated experiment application is developed and installed on the smartphone connected to the glasses; the application connects wirelessly to the server database and performs experiment-related operations, including three-dimensional registration and voice interaction.
Preferably, the system further comprises:
the database is used for storing virtual three-dimensional models corresponding to various identifiers, introduction videos of various experimental instruments and teaching videos of various experiments.
Specifically, the database module stores the video, audio, and virtual three-dimensional objects required for the experiments.
Preferably, the depth cameras deployed around the experimental environment are connected to a laboratory server, and the server hosts the object state discrimination module and the student action discriminating module, which facilitates relatively complex computation and interaction with the database so that its resources can be used.
Preferably, the local hosts are connected to the laboratory server over WiFi, meeting the demand for many students to experiment simultaneously; at least a full class of students can be accommodated.
Correspondingly, when the three-dimensional registration module processes the picture shot by the augmented reality glasses camera and identifies the manual markers on all physical models in view, it looks up the virtual three-dimensional model corresponding to each marker on the local host; if the local host does not have it, the model is downloaded from the server database into host memory, as sketched below. The three-dimensional registration module computes the pose of each manual marker in the camera coordinate system, then renders the corresponding three-dimensional model onto the marker of the corresponding physical model according to that pose; that is, the virtual experimental instrument is superimposed into real space, and the fused virtual-and-real picture is displayed on the screen of the AR glasses.
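A sketch of that local lookup with server fallback (the database interface fetch_model is an assumption, not an API named by the patent):
    local_models = {}  # marker ID -> loaded virtual three-dimensional model

    def get_virtual_model(marker_id, server_db):
        """Return the cached model, downloading it into host memory on a miss."""
        model = local_models.get(marker_id)
        if model is None:
            model = server_db.fetch_model(marker_id)  # pull from server database
            local_models[marker_id] = model           # keep for later frames
        return model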
In summary, the physical-model-guided augmented reality experimental system provided by the invention comprises: scene reconstruction via depth cameras deployed around the laboratory, which reconstruct the whole experimental scene omnidirectionally as point clouds; physical models with identifiers (including models of experimental devices and apparatus that can be connected and assembled like real devices); the three-dimensional registration module, which registers virtual three-dimensional models into real space by identifying the markers on the physical objects; the processing module, which superimposes the virtual model corresponding to each identifier onto that identifier according to its pose in the AR glasses camera for display on the AR glasses screen; the model tracking module, which computes the poses of all virtual models in real time through point cloud tracking and fusion and updates and renders them; the voice interaction module, which guides students through the experiment by voice, corrects errors, and recognizes student speech to respond accordingly; the object state discrimination module, which judges the state of a physical object from the coordinate relations between the spatial poses of the virtual objects and feeds the result back; and the student action discriminating module, which recognizes student gestures and actions. The system retains the immersive, rich, safe, and highly interactive advantages of virtual reality experiment systems while using augmented reality technology to strengthen students' hands-on experience and fine operation of experimental instruments.
The embodiment of the invention provides an operation method of the above physical-model-guided augmented reality experiment teaching system, comprising the following steps:
S1, when a student looks at the physical models on the experiment operation table, the three-dimensional registration module identifies the identifiers on the physical models, and the processing module superimposes the corresponding virtual model onto each identifier according to the pose of the identifier in the AR glasses camera, so that the virtual models are displayed on the screen of the AR glasses.
Specifically, the physical models used in the experiment are first placed in the experimental environment with manual markers pasted on, and the experiment-related video and audio resources are stored in the database on the server side. The poses of the depth cameras around the laboratory are measured and their intrinsics calibrated. The student then wears the augmented reality glasses, which are connected to a smartphone serving as the local host, which in turn connects to the laboratory server through the wireless network. The experiment guide and related notes are displayed on the screen of the AR glasses, and the student reads them under the voice prompts of the voice interaction module. When the student looks at the operation table, the three-dimensional registration module recognizes the manual markers on the physical models, and the processing module superimposes the virtual experimental instruments onto the reality displayed on the AR glasses screen.
S2, when students operate the physical models to simulate real experimental operations, the tracking module calculates the real-time pose of each physical model according to the three-dimensional point cloud scene information of the laboratory so as to synchronously update the pose of the corresponding virtual model, and the processing module updates and renders the virtual models according to their pose information; when the AR glasses camera captures a preset hand action, the action judging module determines the physical model to which the action is applied according to the three-dimensional point cloud scene information and applies the action to the corresponding virtual model so as to update its display state; when the pose relation among the virtual models meets a preset pose condition, the experimental state judging unit updates the display state of the virtual model whose display state changes under that condition; and the experiment error judging unit issues a reminder when an erroneous experimental operation is determined according to the pose of a virtual model or the poses between the virtual models.
Specifically, students operate the physical models to simulate real experimental operations and perform the augmented reality experiment, while the model tracking module reconstructs the three-dimensional scene in real time and updates the rendered virtual experimental instruments. Throughout the experiment the student operates the physical models as in a real experiment, while the object state discrimination module and the student action discriminating module monitor the states of the experimental objects and the student's actions in real time, respond accordingly through the virtual experimental instruments, and superimpose those responses in real time to push the experimental process forward.
Students frequently interact with the system during the experiment, obtaining corresponding resources and prompts through the voice interaction module. The experiment error judging unit monitors errors in the experiment in real time and reminds students through the voice prompt module.
S3, after the students complete the experiment, the experimental process recorded by the AR glasses camera is saved.
Specifically, when students complete the experiment, the whole experimental process recorded by the augmented reality glasses camera is stored on the local host and uploaded to the server database for teachers and students to review.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A physical-model-guided augmented reality experiment teaching system, comprising:
the physical models, each having the same size and shape as a real experimental instrument and bearing an identifier on its surface;
the three-dimensional registration module, used for identifying the identifiers on the physical models acquired by the AR glasses camera so as to determine the virtual models of the real experimental instruments corresponding to the physical models;
the processing module, used for superposing the virtual model corresponding to each identifier onto that identifier according to the pose of the identifier in the AR glasses camera, so as to display the virtual models on the screen of the AR glasses;
the tracking module, used for calculating the real-time pose of each physical model according to the three-dimensional point cloud scene information of the laboratory so as to synchronously update the pose of the corresponding virtual model, the processing module updating and rendering the virtual model according to its pose information, wherein the three-dimensional point cloud scene information is acquired by a plurality of depth cameras;
the action judging module, used for determining, when the AR glasses camera captures a preset hand action, the physical model to which the preset hand action is applied according to the three-dimensional point cloud scene information, and applying the preset hand action to the corresponding virtual model so as to update its display state;
the state discrimination module comprises:
the experimental state judging unit, used for updating, when the pose relation among the virtual models meets a preset pose condition, the display state of the virtual model whose display state changes under that pose condition;
and the experiment error judging unit, used for issuing a reminder when an erroneous experimental operation is determined according to the pose of a virtual model or the poses between the virtual models.
2. The system according to claim 1, wherein the experimental state judging unit monitors the pose relations and association relations between the virtual models in real time, and when a pose relation meets the preset pose condition and the association relation meets the preset association condition, determines the virtual model whose display state changes under those conditions and updates its display state.
3. The system of claim 1 or 2, further comprising:
and the voice prompt module is used for carrying out voice prompt when error experiment operation occurs.
4. The system as recited in claim 3, further comprising:
and the voice interaction module is used for collecting and recognizing the voice of the AR glasses wearer, recognizing the keywords therein and responding.
5. The system of claim 3, wherein the keywords comprise experimental instrument names, experiment names, and screen control instructions of the AR glasses;
when the keyword identified by the voice interaction module is the name of the experimental instrument, displaying the introduction video of the experimental instrument on a screen of the AR glasses;
and when the keyword identified by the voice interaction module is an experiment name, displaying the teaching video of the experiment on a screen of the AR glasses.
6. The system of claim 4 or 5, wherein the tracking module, the action judging module, and the state discrimination module are all integrated in a server connected to the depth cameras;
the three-dimensional registration module, the voice prompt module, and the voice interaction module are integrated in the AR glasses, or in a smartphone with DisplayPort (DP) output connected to the AR glasses.
7. The system as recited in claim 6, further comprising:
the database is used for storing virtual three-dimensional models corresponding to various identifiers, introduction videos of various experimental instruments and teaching videos of various experiments.
8. The system of claim 1, wherein the tracking module performs point cloud reconstruction based on laboratory scene information, and updates the pose of the registered virtual model in real time through point cloud fusion and point cloud registration.
9. The system of claim 1, wherein the AR glasses are further configured to display an experimental guideline and experimental-related notes prior to the start of an experiment.
10. An operation method of the physical-model-guided augmented reality experiment teaching system of any one of claims 1-9, comprising:
S1, when a student looks at the physical models on the experiment operation table, the three-dimensional registration module identifies the identifiers on the physical models, and the processing module superimposes the corresponding virtual model onto each identifier according to the pose of the identifier in the AR glasses camera, so that the virtual models are displayed on the screen of the AR glasses;
S2, when students operate the physical models to simulate real experimental operations, the tracking module calculates the real-time pose of each physical model according to the three-dimensional point cloud scene information of the laboratory so as to synchronously update the pose of the corresponding virtual model, and the processing module updates and renders the virtual models according to their pose information; when the AR glasses camera captures a preset hand action, the action judging module determines the physical model to which the action is applied according to the three-dimensional point cloud scene information and applies the action to the corresponding virtual model so as to update its display state; when the pose relation among the virtual models meets a preset pose condition, the experimental state judging unit updates the display state of the virtual model whose display state changes under that condition; and the experiment error judging unit issues a reminder when an erroneous experimental operation is determined according to the pose of a virtual model or the poses between the virtual models;
S3, after the students complete the experiment, the experimental process recorded by the AR glasses camera is saved.
CN202311592193.XA 2023-11-27 2023-11-27 Augmented reality experiment teaching system guided by entity model Pending CN117649505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311592193.XA CN117649505A (en) 2023-11-27 2023-11-27 Augmented reality experiment teaching system guided by entity model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311592193.XA CN117649505A (en) 2023-11-27 2023-11-27 Augmented reality experiment teaching system guided by entity model

Publications (1)

Publication Number Publication Date
CN117649505A true CN117649505A (en) 2024-03-05

Family

ID=90044422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311592193.XA Pending CN117649505A (en) 2023-11-27 2023-11-27 Augmented reality experiment teaching system guided by entity model

Country Status (1)

Country Link
CN (1) CN117649505A (en)

Similar Documents

Publication Publication Date Title
CN106254848B (en) A kind of learning method and terminal based on augmented reality
Oufqir et al. ARKit and ARCore in serve to augmented reality
CN107066082B (en) Display methods and device
CN106363637B (en) A kind of quick teaching method of robot and device
CN106371593B (en) A kind of Projection interactive drills to improve one's handwriting system and its implementation
CN106409063A (en) Robot painting teaching method and device and robot
CN106355153A (en) Virtual object display method, device and system based on augmented reality
CN109191940B (en) Interaction method based on intelligent equipment and intelligent equipment
CN109215413A (en) A kind of mold design teaching method, system and mobile terminal based on mobile augmented reality
CN104575132A (en) Interaction education method and system based on natural image recognition and reality augmentation technology
CN104102412A (en) Augmented reality technology-based handheld reading equipment and reading method thereof
US9076345B2 (en) Apparatus and method for tutoring in convergence space of real and virtual environment
CN103488292B (en) The control method of a kind of three-dimensional application icon and device
CN106652590A (en) Teaching method, teaching recognizer and teaching system
CN113011723A (en) Remote equipment maintenance system based on augmented reality
CN109739353A (en) A kind of virtual reality interactive system identified based on gesture, voice, Eye-controlling focus
CN106409033A (en) Remote teaching assisting system and remote teaching method and device for system
CN104933278B (en) A kind of multi-modal interaction method and system for disfluency rehabilitation training
CN105929938A (en) Information processing method and electronic device
CN106446857A (en) Information processing method and device of panorama area
CN113470190A (en) Scene display method and device, equipment, vehicle and computer readable storage medium
US20210174690A1 (en) Ar-based supplementary teaching system for guzheng and method thereof
CN117649505A (en) Augmented reality experiment teaching system guided by entity model
CN116682293A (en) Experiment teaching system based on augmented reality and machine vision
CN106708266A (en) AR action correction projection method and system based on binocular gesture recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination