CN109545003B - Display method, display device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN109545003B
Authority
CN
China
Prior art keywords
teaching
user
information
teaching model
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811580594.2A
Other languages
Chinese (zh)
Other versions
CN109545003A (en)
Inventor
乔伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Calorie Information Technology Co ltd
Original Assignee
Beijing Calorie Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Calorie Information Technology Co ltd filed Critical Beijing Calorie Information Technology Co ltd
Priority to CN201811580594.2A
Publication of CN109545003A
Application granted
Publication of CN109545003B

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The invention discloses a display method, a display device, a terminal device and a storage medium. The method comprises the following steps: after a teaching instruction is monitored, displaying a pre-constructed teaching model at a teaching position in an augmented reality scene; acquiring position information of a user; determining an adjusted teaching model according to the position information and the teaching model; and displaying the adjusted teaching model at the teaching position. With this method, the teaching model can be adjusted according to the user's position information, so that the user can observe the teaching model from multiple viewing angles, which effectively improves the demonstration effect of the teaching model. Combining augmented reality technology with fitness courses also makes exercise more engaging and improves the user's learning experience.

Description

Display method, display device, terminal equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of augmented reality, in particular to a display method, a display device, terminal equipment and a storage medium.
Background
With the improvement of living standards, people pay increasing attention to their own health, and many choose to exercise in their spare time. As a result, online personal-training services have become widely popular. An online personal trainer can guide the user's workout through video teaching, so that the user can achieve a fitness effect without leaving home.
However, current online fitness courses consist mainly of two-dimensional video, which cannot provide good motion demonstrations for the user and hinders the user's learning and imitation of the movements. As a result, the user's training effect is poor, the user may even be injured during exercise, and the learning experience is greatly reduced.
Disclosure of Invention
The embodiment of the invention provides a display method, a display device, terminal equipment and a storage medium, which can effectively improve the demonstration effect of a teaching model.
In a first aspect, an embodiment of the present invention provides a display method, including:
after a teaching instruction is monitored, displaying a pre-constructed teaching model at a teaching position of an augmented reality scene;
acquiring position information of a user;
determining an adjusted teaching model according to the position information and the teaching model;
and displaying the adjusted teaching model at the teaching position.
In a second aspect, an embodiment of the present invention further provides a display device, including:
the system comprises a primary display module, a display module and a display module, wherein the primary display module is used for displaying a pre-constructed teaching model at a teaching position of an augmented reality scene after monitoring a teaching instruction;
the acquisition module is used for acquiring the position information of a user;
the determining module is used for determining the adjusted teaching model according to the position information and the teaching model;
and the display module is used for displaying the adjusted teaching model at the teaching position.
In a third aspect, an embodiment of the present invention further provides a terminal device, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs are executed by the one or more processors, so that the one or more processors implement the display method provided by the embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the display method provided by the embodiment of the present invention.
The embodiments of the invention provide a display method, a display device, a terminal device and a storage medium. First, after a teaching instruction is monitored, a pre-constructed teaching model is displayed at a teaching position in an augmented reality scene; second, the position information of the user is acquired; then an adjusted teaching model is determined according to the position information and the teaching model; and finally, the adjusted teaching model is displayed at the teaching position. With this technical scheme, the teaching position of the teaching model always remains unchanged, while the terminal device adjusts the teaching model according to the user's position information, for example by adjusting its angle and/or size. This effectively improves the demonstration effect of the teaching model and enables the user to observe it from multiple viewing angles. In addition, combining augmented reality technology with fitness courses makes exercise more engaging and improves the user's learning experience.
Drawings
Fig. 1 is a schematic flowchart of a display method according to an embodiment of the present invention;
fig. 2a is a schematic flowchart of a display method according to a second embodiment of the present invention;
FIG. 2b is a schematic diagram of a teaching model provided in the second embodiment;
FIG. 2c is a schematic diagram of an adjusted teaching model according to the second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a display device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a schematic flowchart of a display method according to an embodiment of the present invention. The method is applicable to displaying teaching content during instruction and can be executed by a display device, which may be implemented in software and/or hardware and is generally integrated in a terminal device, including but not limited to: mobile phones, computers, personal digital assistants, and the like.
The application scenario of the embodiment can be used for the user to learn the fitness course by observing the teaching model. Specifically, as shown in fig. 1, a display method according to a first embodiment of the present invention includes the following steps:
s101, after a teaching instruction is monitored, displaying a pre-constructed teaching model at a teaching position of an augmented reality scene.
In this embodiment, the tutorial instructions may be understood as commands for triggering tutoring. A teaching model may be understood as a model for giving lessons. An augmented reality scene may be understood as an environment in which an augmented reality display is performed. The teaching position can be understood as the position of the teaching model display.
The embodiment can display the teaching model at the teaching position of the augmented reality scene so as to enable the user to learn the teaching content. The augmented reality scene may be constructed in advance through the terminal device, or may be presented by combining the terminal device with the augmented reality product, which is not limited herein.
The means of triggering the teaching instruction is not limited; it may be triggered by the user clicking a corresponding key or by a preset learning time being reached. The teaching model may be a three-dimensional human body model, such as a three-dimensional deformable human body model. The model may be pre-constructed by the terminal device; how the teaching model is constructed is not limited here, as long as it can teach according to preset instructions.
After the teaching instruction is monitored, the pre-constructed teaching model is displayed in the augmented reality scene so that the user can conveniently observe it, which effectively solves the problem of poor imitation results when the user can only watch a two-dimensional video.
How the teaching position is determined in this step is not limited; it may be selected by the user or by the terminal device. For example, after the teaching instruction is monitored, the terminal device may take a position selected by the user as the teaching position, or it may detect the augmented reality scene to determine the teaching position.
S102, obtaining the position information of the user.
In the present embodiment, the location information may be understood as spatial location information of the user. This step may track the location of the user in the augmented reality scene.
After the teaching model is displayed at the teaching position of the augmented reality scene, the position information of the user can be acquired in the step, so that the teaching model can be adjusted based on the position information of the user, the user can observe the teaching model more comprehensively, and the learning effect of the user is improved.
The specific means for acquiring the location information of the user is not limited herein, and those skilled in the art may select a suitable means to determine the location information of the user according to actual situations. In this step, the position information of the user can be directly obtained through the position acquisition device, or the position information of the user can be determined through analyzing the image acquired in real time and shot by the user. The position acquisition means may be regarded as a device capable of acquiring the position of the user. Such as a position sensor, a GPS locator or an inertial measurement unit.
S103, determining the adjusted teaching model according to the position information and the teaching model.
After the position information is acquired, the teaching model can be adjusted in this step based on the user's position information. The adjustment may include changing the angle and/or size of the teaching model; that is, the adjusted teaching model may differ from the original only in its angle and/or size.
Specifically, after the position information is acquired, the relative position of the user with respect to the teaching model can be determined according to the position information and the teaching model, and the teaching model can be adjusted based on the relative position to present the corresponding angle to each user, thereby realizing the display of different angles of the teaching model.
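As an illustration of this relative-position step, the following is a minimal sketch (the function name and coordinate convention are assumptions; the patent specifies no API) that derives a horizontal viewing angle from the user's and model's positions:

```python
import math

def viewing_angle(user_pos, model_pos):
    """Horizontal angle (degrees) from the teaching model to the user.

    Hypothetical helper: 0 means the user stands directly in front of
    the model (+y axis here); positive angles run clockwise, matching
    the convention assumed later in the second embodiment.
    """
    dx = user_pos[0] - model_pos[0]
    dy = user_pos[1] - model_pos[1]
    # atan2(dx, dy) is 0 when the user is on the +y axis, 90 when the
    # user stands to the model's side at +x.
    return math.degrees(math.atan2(dx, dy))
```

The returned angle could then select which face of the model to present to the user.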
And S104, displaying the adjusted teaching model at the teaching position.
After the adjusted teaching model is determined, it can continue to be displayed at the teaching position in this step for the user to observe. It will be appreciated that displaying the adjusted teaching model at the teaching position can be understood as fixing the world coordinates of the teaching model, i.e., the teaching model is placed at a fixed location in the augmented reality environment.
It can be understood that, if the position information of the user is changed, the angle and/or the size of the adjusted teaching model are/is changed correspondingly, and accordingly, the augmented reality environment presented in front of the eyes of the user is reconstructed correspondingly.
According to the display method provided by this embodiment of the invention, first, after a teaching instruction is monitored, a pre-constructed teaching model is displayed at a teaching position in an augmented reality scene; second, the position information of the user is acquired; then an adjusted teaching model is determined according to the position information and the teaching model; and finally the adjusted teaching model is displayed at the teaching position. With this method, the teaching position of the teaching model always remains unchanged, while the terminal device adjusts the teaching model according to the user's position information, such as its angle and/or size. This effectively improves the demonstration effect of the teaching model and allows the user to observe it from multiple viewing angles. In addition, combining augmented reality technology with fitness courses makes exercise more engaging and improves the user's learning experience.
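The four steps S101-S104 of this embodiment can be sketched as a simple display loop. Everything below is a hypothetical stand-in (class and method names are invented for illustration), not the patent's implementation:

```python
class TeachingScene:
    """Minimal stand-in for an augmented reality scene."""
    def __init__(self, teaching_position):
        self.teaching_position = teaching_position  # fixed world coordinates
        self.shown = None

    def display(self, model):
        # Render `model` at the fixed teaching position (stubbed out).
        self.shown = model

def teaching_loop(scene, model, user_positions):
    """Sketch of S101-S104: show the model, then keep re-adjusting it
    to each user position sample while the teaching position stays fixed."""
    scene.display(model)                       # S101: initial display
    for pos in user_positions:                 # S102: acquire user position
        adjusted = dict(model, viewer=pos)     # S103: adjust (stub only)
        scene.display(adjusted)                # S104: redisplay, same spot
    return scene.shown
```

Note that only the model changes between iterations; `teaching_position` never moves, mirroring the fixed-world-coordinates point above.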
Example two
Fig. 2a is a schematic flowchart of a display method according to a second embodiment of the present invention; this embodiment is optimized on the basis of the embodiment above. In this embodiment, determining the adjusted teaching model according to the position information and the teaching model is further embodied as: determining gazing information based on the position information and the teaching model; and determining the adjusted teaching model based on the gazing information.
Further, this embodiment additionally includes: acquiring motion control parameters; and adjusting the three-dimensional motion of the teaching model displayed at the teaching position based on the motion control parameters.
On the basis of the above, this embodiment further includes: after a start instruction is monitored, acquiring initial image information of the external environment; extracting feature data from the initial image information; and constructing an augmented reality scene corresponding to the external environment according to the feature data.
Further, displaying the pre-constructed teaching model at the teaching position of the augmented reality scene is refined as: identifying the initial image information and determining a position to be placed; determining a teaching position in the augmented reality scene based on the position to be placed; and displaying the pre-constructed teaching model at the teaching position.
Further, acquiring the position information of the user is refined as: acquiring measured image information of the external environment; and determining the position information of the user according to the measured image information and the initial image information.
Further, this embodiment also includes: acquiring posture information of the user monitored by an inertial measurement unit; and correcting the position information of the user according to the posture information. For details not described in this embodiment, please refer to the first embodiment.
As shown in fig. 2a, a display method according to a second embodiment of the present invention includes the following steps:
s201, after a starting instruction is monitored, initial image information of the external environment is obtained.
In the present embodiment, the start instruction may be understood as a command for starting learning. The external environment may be understood as the environment in which the user is located. The initial image information may be understood as an image of the external environment acquired by the terminal device when the augmented reality scene is constructed. The initial image information may include at least two images.
When the user learns, the terminal device may first monitor the start instruction to enter the teaching environment, and then monitor the teaching instruction to start teaching. In this embodiment, the triggering manner of the start instruction is not limited, for example, the start instruction may be triggered by the user clicking a start button, or triggered when a preset learning time is reached.
After the start instruction is monitored, the embodiment may first construct a learning environment, that is, construct an augmented reality scene. When an augmented reality scene is constructed, initial image information of an external environment can be obtained firstly in the step. The acquisition of the initial image information of the external environment is not limited herein, for example, the terminal device may acquire the external environment information through the image acquisition device. More specifically, the terminal device may start the camera to obtain the initial image information of the external environment.
And S202, extracting the characteristic data of the initial image information.
In this embodiment, the feature data may be understood as the position information of markers in the initial image information. Which markers are used is not limited here and may be determined based on the actually acquired initial image information. A marker may be understood as a positional reference in the initial image information; positioning can be realized through the position information of the same marker in the images at different moments.
In this step, the initial image information may be identified to extract feature data, and a person skilled in the art may determine a corresponding means according to an actual situation without limiting a specific processing means.
S203, constructing an augmented reality scene corresponding to the external environment according to the characteristic data.
After the feature data in the initial image information is extracted, the feature data can be matched to construct the spatial transformation relationship between the images, thereby constructing the augmented reality scene, for example by determining the spatial transformation relationship based on feature data corresponding to the same feature point of the same marker.
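As an illustration of the matching step only, here is a toy sketch that pairs binary feature descriptors by Hamming distance. A real system would use descriptors such as ORB and then solve the spatial transformation (e.g., a homography) from the matched point pairs; all names below are hypothetical:

```python
def hamming(a, b):
    """Bitwise distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_features(desc_a, desc_b, max_dist=10):
    """Greedy nearest-neighbour matching between the descriptors of two
    images. Returns (index_in_a, index_in_b) pairs whose distance is
    within max_dist; the spatial transform would then be fitted to the
    matched point coordinates (not shown)."""
    matches = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j))
    return matches
```

Greedy matching like this can assign two features to the same partner; production matchers add a uniqueness or ratio test.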
S204, after the teaching instruction is monitored, identifying the initial image information and determining the position to be placed.
In this embodiment, the position to be placed may be understood as a position selected in the augmented reality scene at which the teaching model will be placed.
The triggering means of the teaching instruction is not limited; it may be triggered by the user or by the terminal device by default. For example, the user may trigger the teaching instruction by clicking a corresponding key, or the terminal may trigger it once the augmented reality scene has been constructed.
The form of the position to be placed is also not limited in this step; for example, it may be a plane. Accordingly, after the teaching instruction is monitored, image recognition can be performed on the initial image information to identify a plane in the augmented reality environment.
S205, determining a teaching position in the augmented reality scene based on the position to be placed.
After the position to be placed is identified, the corresponding teaching position in the augmented reality scene can be determined. It can be understood that the position to be placed is a position selected in the initial image information, and this step selects the corresponding position in the augmented reality scene as the teaching position.
The specific determination means is not limited here; for example, the teaching position corresponding to the position to be placed can be determined based on the spatial transformation relationship of the pre-constructed augmented reality environment.
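A minimal sketch of mapping a position selected in the image into scene coordinates via a known spatial transformation, here simplified to a 2-D similarity transform; the parameters are illustrative inputs, not values from the patent:

```python
import math

def image_to_scene(point, scale, theta_deg, translation):
    """Map a to-be-placed position from image coordinates into the AR
    scene using a similarity transform (scale, rotation, translation).
    In practice these parameters would come from the pre-constructed
    spatial transformation relationship of the scene."""
    t = math.radians(theta_deg)
    x, y = point
    sx = scale * (x * math.cos(t) - y * math.sin(t)) + translation[0]
    sy = scale * (x * math.sin(t) + y * math.cos(t)) + translation[1]
    return (sx, sy)
```

With identity scale and rotation the transform reduces to a simple shift, which is a quick sanity check on the implementation.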
And S206, displaying the pre-constructed teaching model at the teaching position.
After the teaching position is determined, the step can display the pre-constructed teaching model at the teaching position for the user to observe. Specifically, this step may set the coordinate information of the teaching model based on the world coordinates of the teaching position.
S207, acquiring measured image information of the external environment.
In this embodiment, the measured image information may be understood as an image of the external environment obtained after the augmented reality scene is constructed. This measured image information can be used to spatially locate the user.
After the teaching model is displayed in the augmented reality scene, the embodiment can acquire the position information of the user. Specifically, this step may first obtain actual measurement image information of the external environment to locate the user.
This step does not limit the specific means of acquiring the measured image information; for example, it may be captured by an image acquisition device. More specifically, the terminal device can use its rear camera to capture the measured image information of the external environment.
S208, determining the position information of the user according to the measured image information and the initial image information.
After the measured image information is obtained, this step can compare and analyze it against the initial image information to determine the position information of the user, for example by analyzing the position of the same marker in the measured image information and the initial image information.
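The marker-comparison idea can be sketched as follows. This toy version estimates only a 2-D shift from how matched markers moved between the two images, whereas a real system would recover a full camera pose; the function name and marker format are assumptions:

```python
def estimate_camera_shift(initial_markers, measured_markers):
    """Estimate the user's (camera's) apparent shift from how the same
    markers moved between the initial and measured images.

    Markers are dicts {marker_id: (x, y)}. Pure-translation sketch:
    the shift is the mean displacement of the markers, sign-flipped
    because markers appear to move opposite to the camera."""
    common = initial_markers.keys() & measured_markers.keys()
    if not common:
        return None  # no shared markers, cannot localize
    dx = sum(measured_markers[k][0] - initial_markers[k][0] for k in common)
    dy = sum(measured_markers[k][1] - initial_markers[k][1] for k in common)
    n = len(common)
    return (-dx / n, -dy / n)
```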
It can be understood that the scenario of the present embodiment may be: a user holds a terminal device, such as a mobile phone, and image acquisition of external environment information is carried out through a rear camera of the terminal device, so that spatial positioning of the user is achieved.
The specific content of the position information is not limited here, and the position information may be world coordinates, angle information, or speed information of the user.
And S209, acquiring the attitude information of the user monitored by the inertial measurement unit.
In this embodiment, the inertial measurement unit may be understood as a device that measures the user's motion. The posture information may include acceleration information and/or angular velocity information.
The inertial measurement unit may include three single-axis accelerometers and three single-axis gyroscopes; the accelerometers can detect the user's acceleration signals in the world coordinate system, and the gyroscopes can detect the user's angular velocity signals in the world coordinate system. Therefore, in this step the terminal device can calculate the user's posture by acquiring the posture information monitored by the inertial measurement unit in the world coordinate system, such as the acceleration and angular velocity, thereby realizing spatial positioning.
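A one-dimensional sketch of what integrating IMU samples looks like. A real unit fuses three accelerometer and three gyroscope axes, and raw integration drifts quickly, which is why the correction in S210 is needed; the function name is an assumption:

```python
def dead_reckon(samples, dt):
    """Integrate accelerometer samples (m/s^2) taken every dt seconds
    into a (position, velocity) estimate along one axis. Double
    integration accumulates error fast, so this is only a per-frame
    building block, not a standalone tracker."""
    v = x = 0.0
    for a in samples:
        v += a * dt      # first integration: acceleration -> velocity
        x += v * dt      # second integration: velocity -> position
    return x, v
```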
And S210, correcting the position information of the user according to the attitude information.
The embodiment can acquire the position information of the user in a mode of fusing the image and the inertial measurement unit.
Specifically, this step may use the posture information to correct errors in the determined position information. The correction means is not limited; the specific means may be chosen according to the specific content of the position information.
If the position information includes acceleration and angular velocity, the user's position information may be corrected by a weighted-average method. If the position information is coordinate information, the terminal device may store a preset correspondence between posture information and position information, and correct the user's position information based on this correspondence and the posture information.
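The weighted-average correction can be sketched as a simple blend of the image-derived and IMU-derived position estimates. The weight below is illustrative, not from the patent; in practice it would reflect each sensor's noise characteristics:

```python
def fuse_position(image_pos, imu_pos, image_weight=0.8):
    """Weighted-average fusion of two position estimates, component by
    component. image_weight is the trust placed in the image-based
    estimate; the remainder goes to the IMU estimate."""
    w = image_weight
    return tuple(w * p + (1 - w) * q for p, q in zip(image_pos, imu_pos))
```

This is the simplest form of the image-plus-IMU fusion described above; a Kalman filter would adapt the weights over time instead of fixing them.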
S211, determining gazing information based on the position information and the teaching model.
In this embodiment, the gazing information may be understood as observation information of the teaching model by the user. The gaze information includes a gaze angle and/or a gaze depth, etc., and is not limited herein. Wherein the gaze angle is capable of determining the current location of the user gazing at the teaching model. The gaze depth can determine the distance the user is currently from the instructional model. The present embodiment is able to determine the relative position of the user and the instructional model based on the gaze information.
After the position information of the user is obtained, the step may determine the gazing information of the user on the teaching model based on the position information to obtain the adjusted teaching model.
It can be understood that, in this embodiment, the teaching model displayed at the teaching position can be refreshed in real time according to the position information of the user. In determining the adjusted teaching model, the determination of the gazing information may be performed based on the position information of the user and the teaching model adjusted last time.
And S212, determining the adjusted teaching model based on the gazing information.
After the gazing information of the user is obtained, the corresponding adjusted teaching model can be selected from the model database based on the gazing information.
Fig. 2b is a schematic diagram of the teaching model provided in the second embodiment. As shown in fig. 2b, the teaching model 21 is a three-dimensional deformable human body model, which can give the user a teaching demonstration of a fitness course.
Fig. 2c is a schematic diagram of an adjusted teaching model according to the second embodiment of the present invention. As shown in fig. 2c, the adjusted teaching model 22 may be determined based on the position information and teaching model 21. Specifically, after the user moves from directly in front of teaching model 21 to a position ninety degrees to its left, the gazing information determined from the acquired position information of the user and teaching model 21 indicates that the user is gazing at the model from the ninety-degree direction, i.e., at its right side (taking the clockwise direction as positive). Based on this gazing information, the adjusted teaching model 22 shown in fig. 2c can be obtained.
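The ninety-degree example can be mirrored in code as a yaw rotation of the model's ground-plane points by the gaze angle, clockwise positive as assumed in the embodiment; the function name and point representation are hypothetical:

```python
import math

def rotate_model(points, gaze_deg):
    """Yaw the model so the user sees the face matching the gaze angle.

    points are (x, y) coordinates in the model's ground plane;
    gaze_deg is the clockwise-positive gaze angle, so the model is
    rotated by -gaze_deg in the usual counter-clockwise convention."""
    t = math.radians(-gaze_deg)
    c, s = math.cos(t), math.sin(t)
    return [(x * c - y * s, x * s + y * c) for x, y in points]
```

For a ninety-degree gaze, a point on the model's +x side swings around to face the viewer, which is the fig. 2b to fig. 2c adjustment in miniature.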
And S213, acquiring the motion control parameters.
In this embodiment, the motion control parameters may be understood as parameters that control the movement of the teaching model. The teaching model here may be either the teaching model before adjustment or the adjusted teaching model.
Specifically, the step may obtain preset motion control parameters, where the motion control parameters may be determined according to the teaching course selected by the user, and a specific determination manner is not limited here.
And S214, adjusting the three-dimensional motion of the teaching model displayed at the teaching position based on the motion control parameter.
After the terminal device acquires the motion control parameters, it can adjust the three-dimensional motion of the teaching model currently displayed at the teaching position based on these parameters, so that the teaching model performs the movements specified in the teaching course, achieving the effect of a motion demonstration.
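Since the patent leaves the format of the motion control parameters open, one plausible reading is keyframe poses blended over time. The sketch below interpolates joint angles between two hypothetical keyframes; the pose format and function name are assumptions:

```python
def interpolate_pose(key_a, key_b, t):
    """Blend two keyframe poses of the teaching model.

    Poses map joint names to angles in degrees; t runs from 0 (pose A)
    to 1 (pose B). Driving the model with a sequence of such blended
    poses produces the three-dimensional motion demonstration."""
    return {joint: (1 - t) * key_a[joint] + t * key_b[joint]
            for joint in key_a}
```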
The execution sequence of S213 and S214 in this embodiment is not limited, and this embodiment may acquire the motion control parameters in real time to adjust the motion of the teaching model displayed at the teaching position.
S215, displaying the adjusted teaching model at the teaching position.
After the adjusted teaching model is determined, it can continue to be displayed at the teaching position in this step for the user to observe. It can be understood that, in this embodiment, as long as the teaching model at the teaching position is refreshed above a certain rate, persistence of vision ensures that the model the user perceives is one whose angle is adjusted smoothly in real time according to the user's position information. Persistence of vision refers to the phenomenon that when light stops acting on the eye, the visual image does not disappear immediately; the residual image is called an "afterimage".
The following is an exemplary description of embodiments of the invention:
the embodiment of the invention combines augmented reality, position tracking, and a three-dimensional deformable human body model for fitness-action guidance, so that the user can watch the action demonstration of the three-dimensional human body model, i.e. the virtual fitness coach, from multiple angles, and can therefore imitate the action comprehensively and accurately, with no blind spots. This effectively solves the technical problem in the prior art that a user learning from a two-dimensional video can watch the coach's demonstration from only one angle and cannot obtain action information from multiple angles at once, which hampers imitation of the action; learning a nonstandard action may lead to a poor training effect or even injury.
Specifically, image data is collected through the video stream of a mobile phone camera in the user's environment; feature points are then detected and matched on the image data to compute the spatial transformation between images, thereby constructing the augmented reality scene. Once the scene is constructed, a plane is detected in it, and the three-dimensional deformable human body model is placed at the plane for the user to observe. While the user observes and learns, the model can be tracked using the camera's image data together with the sensor data of an Inertial Measurement Unit (IMU): the augmented reality scene is built in real time from the camera images and the IMU measurements, and the position of the model in space is tracked, so that the user can observe the three-dimensional deformable human body model from the corresponding camera viewpoint in real time.
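The "spatial conversion relation between the images" computed from matched feature points can be illustrated with a least-squares similarity-transform sketch (Umeyama alignment). This is a simplification chosen for illustration only: a real AR pipeline would detect and match features (e.g. ORB) and reject mismatches with RANSAC, none of which is specified here:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform between two sets of matched 2-D
    feature points: finds scale s, rotation R, translation t such that
    dst_i ~= s * R @ src_i + t (Umeyama's closed-form alignment).
    src, dst: (N, 2) arrays of corresponding points in the two frames.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / n                          # cross-covariance of matches
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.array([1.0, d])                     # guard against reflections
    R = U @ np.diag(D) @ Vt
    var_src = (A ** 2).sum() / n
    s = (S * D).sum() / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With the transform between consecutive frames known, the model's anchor point can be re-projected each frame, which is what keeps the teaching position fixed in space as the camera moves.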
The terminal device acquires the motion control parameters of the three-dimensional deformable human body model and changes its three-dimensional motion so that the model completes the actions specified in the course; the user observes the model from the camera viewpoint, achieving the effect of an action demonstration.
The display method provided by the second embodiment of the invention concretizes the operations of determining the adjusted teaching model, displaying the pre-constructed teaching model, and acquiring the user's position information, and further optimizes the operations of adjusting the three-dimensional motion, constructing the augmented reality scene, and correcting the user's position information. With this method, an augmented reality scene can be constructed in real time from initial image information captured by the user, and the teaching position is then determined within that scene. During learning, the adjusted teaching model is determined from gaze information computed in real time, so the user can observe the teaching model from different angles, giving a more realistic action demonstration. In addition, the teaching model displayed at the teaching position can adjust its three-dimensional motion according to the motion control parameters, teaching the user dynamically. By combining augmented reality, spatial positioning, and the teaching model for fitness-action demonstration, the user can change viewpoints within the environment and watch different body parts of the teaching model from different angles, which helps the user obtain more action information, achieve a better imitation of the demonstrated action, and ultimately enjoy a better fitness experience.
Example three
Fig. 3 is a schematic structural diagram of a display device according to a third embodiment of the present invention. The device is suitable for displaying a teaching model during teaching; it can be implemented in software and/or hardware and is generally integrated on a terminal device.
As shown in fig. 3, the apparatus includes: a primary display module 31, an acquisition module 32, a determination module 33 and a display module 34;
the primary display module 31 is configured to monitor a teaching instruction, and display a pre-constructed teaching model at a teaching position of an augmented reality scene;
an obtaining module 32, configured to obtain location information of a user;
a determining module 33, configured to determine an adjusted teaching model according to the position information and the teaching model;
a display module 34, configured to display the adjusted teaching model at the teaching position.
In this embodiment, the device first monitors a teaching instruction through the primary display module 31 and displays a pre-constructed teaching model at a teaching position of an augmented reality scene; next, the position information of the user is obtained through the obtaining module 32; then the adjusted teaching model is determined from the position information and the teaching model through the determining module 33; finally, the adjusted teaching model is displayed at the teaching position through the display module 34.
This embodiment provides a display device that can adjust the teaching model according to the user's position information. The teaching position of the teaching model remains unchanged throughout, while the terminal device adjusts the teaching model, e.g. its angle and/or size, according to the user's position information, effectively improving the display effect and allowing the user to observe the teaching model from multiple viewpoints. In addition, combining augmented reality with the fitness course can make exercising more engaging and improve the user's learning experience.
Further, the determining module 33 is specifically configured to: determine gazing information based on the position information and the teaching model; and determine the adjusted teaching model based on the gazing information.
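A minimal sketch of the determining module's two steps follows. The coordinate convention (y up), the function names, and the reference depth are illustrative assumptions; the claims only require that the gazing angle locate which part of the model the user faces and that the gazing depth give the user-model distance:

```python
import math

def gaze_info(user_pos, model_pos):
    """Derive (gazing angle, gazing depth) from the user's world
    coordinates and the fixed teaching position of the model.
    Positions are (x, y, z) tuples; y is assumed to point up."""
    dx = model_pos[0] - user_pos[0]
    dy = model_pos[1] - user_pos[1]
    dz = model_pos[2] - user_pos[2]
    angle = math.degrees(math.atan2(dx, dz)) % 360   # horizontal bearing
    depth = math.sqrt(dx * dx + dy * dy + dz * dz)   # user-model distance
    return angle, depth

def adjust_model(gaze_angle, gaze_depth, base_scale=1.0, ref_depth=2.0):
    """Pick the rendered yaw and scale for this viewpoint: the model keeps
    its teaching position; only its displayed angle/size changes."""
    yaw = gaze_angle                                  # side shown to viewer
    scale = base_scale * ref_depth / max(gaze_depth, 1e-6)
    return yaw, scale
```

Walking halfway around the model changes the bearing (and thus the yaw) by 180 degrees, while stepping closer increases the rendered scale, matching the behaviour described for the adjusted teaching model.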
On the basis of the above optimization, the display device further includes:
the action adjusting module is used for acquiring action control parameters;
and adjusting the three-dimensional action of the teaching model displayed at the teaching position based on the action control parameter.
On the basis of the above technical solution, the display device further includes:
a scene construction module to: after the starting instruction is monitored, acquiring initial image information of the external environment;
extracting feature data of the initial image information;
and constructing an augmented reality scene corresponding to the external environment according to the characteristic data.
Further, the primary display module 31 is specifically configured to: identify the initial image information and determine a position to be released;
determining a teaching position in the augmented reality scene based on the position to be released;
displaying a pre-constructed teaching model at the teaching location.
Further, the obtaining module 32 is specifically configured to:
acquiring actual measurement image information of an external environment;
and determining the position information of the user according to the actual measurement image information and the initial image information.
Further, the display device further includes:
a correction module to: acquiring attitude information of the user monitored by an inertial measurement unit;
and correcting the position information of the user according to the attitude information.
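The correction module's fusion of vision-derived position with IMU data can be sketched as a simple complementary filter. The function names, the integration scheme, and the blend weight are illustrative assumptions; the patent does not specify a particular fusion algorithm:

```python
def imu_predict(prev_pos, velocity, accel, dt):
    """Dead-reckon position over a short step dt from IMU acceleration.
    Pure illustration: a real implementation must first remove gravity and
    rotate body-frame readings into the world frame using the attitude."""
    new_vel = tuple(v + a * dt for v, a in zip(velocity, accel))
    new_pos = tuple(p + v * dt for p, v in zip(prev_pos, new_vel))
    return new_pos, new_vel

def correct_position(vision_pos, imu_pos, alpha=0.9):
    """Complementary-filter blend: the IMU estimate is smooth but drifts,
    the vision estimate is noisy but drift-free; alpha (0.9 here) is an
    illustrative weight, not a value taken from the patent."""
    return tuple(alpha * i + (1 - alpha) * v
                 for v, i in zip(vision_pos, imu_pos))
```

Each frame, the IMU prediction bridges the gap between camera measurements, and the vision estimate pulls the blended position back toward ground truth, yielding the corrected user position used by the determining module.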
The display device can execute the display method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present invention. As shown in fig. 4, the terminal device provided in the fourth embodiment of the present invention includes: one or more processors 41 and a storage device 42; the terminal device may have one or more processors 41, one processor 41 being taken as an example in fig. 4; the storage device 42 is used to store one or more programs; the one or more programs are executed by the one or more processors 41, so that the one or more processors 41 implement the display method according to any one of the embodiments of the present invention.
The terminal device may further include: an input device 43 and an output device 44.
The processor 41, the storage device 42, the input device 43 and the output device 44 in the terminal equipment may be connected by a bus or other means, and the connection by the bus is exemplified in fig. 4.
The storage device 42 in the terminal device is used as a computer-readable storage medium for storing one or more programs, which may be software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the display method provided in embodiment one or two of the present invention (for example, the modules in the display device shown in fig. 3: the primary display module 31, the obtaining module 32, the determining module 33, and the display module 34). The processor 41 executes various functional applications and data processing of the terminal device by running the software programs, instructions, and modules stored in the storage device 42, that is, implements the display method in the above-described method embodiments.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. Further, the storage 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 42 may further include memory located remotely from processor 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 43 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. The output device 44 may include a display device such as a display screen.
And, when the one or more programs included in the above-mentioned terminal device are executed by the one or more processors 41, the programs perform the following operations:
after a teaching instruction is monitored, displaying a pre-constructed teaching model at a teaching position of an augmented reality scene;
acquiring position information of a user;
determining an adjusted teaching model according to the position information and the teaching model;
and displaying the adjusted teaching model at the teaching position.
Example five
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program is used, when executed by a processor, to execute a display method, where the method includes:
after a teaching instruction is monitored, displaying a pre-constructed teaching model at a teaching position of an augmented reality scene;
acquiring position information of a user;
determining an adjusted teaching model according to the position information and the teaching model;
and displaying the adjusted teaching model at the teaching position.
Optionally, the program may be further configured to perform a display method provided in any of the embodiments of the present invention when executed by a processor.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (8)

1. A display method, comprising:
after a teaching instruction is monitored, displaying a pre-constructed teaching model at a teaching position of an augmented reality scene, wherein the teaching model is a three-dimensional deformable human body model used for teaching demonstration of a fitness course to the user;
acquiring position information of a user, wherein the position information comprises world coordinates, angle information or speed information of the user;
acquiring attitude information of the user monitored by an inertial measurement unit;
correcting the position information of the user according to the attitude information;
determining gazing information based on the position information and the teaching model, wherein the gazing information comprises a gazing angle and a gazing depth, the gazing angle is used for determining the position of the user which gazes at the teaching model currently, and the gazing depth is used for determining the distance between the user and the teaching model currently;
determining the adjusted teaching model based on the gazing information determined in real time, so that the user can observe the teaching model from different angles;
and displaying the adjusted teaching model at the teaching position.
2. The method of claim 1, further comprising:
acquiring action control parameters;
and adjusting the three-dimensional action of the teaching model displayed at the teaching position based on the action control parameter.
3. The method of claim 1, prior to listening for instructional instructions, further comprising:
after the starting instruction is monitored, acquiring initial image information of the external environment;
extracting feature data of the initial image information;
and constructing an augmented reality scene corresponding to the external environment according to the characteristic data.
4. The method of claim 3, wherein displaying the pre-built instructional model at an instructional location for the augmented reality scene comprises:
identifying the initial image information and determining a position to be released;
determining a teaching position in the augmented reality scene based on the position to be released;
displaying a pre-constructed teaching model at the teaching location.
5. The method of claim 3, wherein the obtaining the location information of the user comprises:
acquiring actual measurement image information of an external environment;
and determining the position information of the user according to the actual measurement image information and the initial image information.
6. A display device, comprising:
a primary display module, configured to display a pre-constructed teaching model at a teaching position of an augmented reality scene after monitoring a teaching instruction, wherein the teaching model is a three-dimensional deformable human body model used for teaching demonstration of a fitness course to the user;
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring position information of a user, acquiring gesture information of the user monitored by an inertial measurement unit and correcting the position information of the user according to the gesture information, and the position information comprises world coordinates, angle information or speed information of the user;
a determining module, configured to determine gazing information based on the position information and the teaching model, and determine the adjusted teaching model based on the gazing information determined in real time, so that the user can observe the teaching model from different angles, wherein the gazing information comprises a gazing angle and a gazing depth, the gazing angle is used for determining the position at which the user currently gazes at the teaching model, and the gazing depth is used for determining the current distance between the user and the teaching model;
and the display module is used for displaying the adjusted teaching model at the teaching position.
7. A terminal device, comprising:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the display method as claimed in any one of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the display method according to any one of claims 1 to 5.
CN201811580594.2A 2018-12-24 2018-12-24 Display method, display device, terminal equipment and storage medium Active CN109545003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811580594.2A CN109545003B (en) 2018-12-24 2018-12-24 Display method, display device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109545003A CN109545003A (en) 2019-03-29
CN109545003B true CN109545003B (en) 2022-05-03

Family

ID=65856692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811580594.2A Active CN109545003B (en) 2018-12-24 2018-12-24 Display method, display device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109545003B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111323007B (en) * 2020-02-12 2022-04-15 北京市商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN111818265B (en) * 2020-07-16 2022-03-04 北京字节跳动网络技术有限公司 Interaction method and device based on augmented reality model, electronic equipment and medium
CN112530219A (en) * 2020-12-14 2021-03-19 北京高途云集教育科技有限公司 Teaching information display method and device, computer equipment and storage medium
CN113656624B (en) * 2021-10-18 2022-02-08 深圳江财教育科技有限公司 Teaching equipment control method and system based on augmented reality and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428375A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 A kind of teaching auxiliary and equipment based on augmented reality

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9558581B2 (en) * 2012-12-21 2017-01-31 Apple Inc. Method for representing virtual information in a real environment
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN103345357A (en) * 2013-07-31 2013-10-09 关鸿亮 Method for realizing automatic street view display based on mobile equipment sensor
US10134296B2 (en) * 2013-10-03 2018-11-20 Autodesk, Inc. Enhancing movement training with an augmented reality mirror
US10297082B2 (en) * 2014-10-07 2019-05-21 Microsoft Technology Licensing, Llc Driving a projector to generate a shared spatial augmented reality experience
US10127725B2 (en) * 2015-09-02 2018-11-13 Microsoft Technology Licensing, Llc Augmented-reality imaging
WO2017066373A1 (en) * 2015-10-14 2017-04-20 Surgical Theater LLC Augmented reality surgical navigation
TWI576787B (en) * 2016-02-05 2017-04-01 黃宇軒 Systems and applications for generating augmented reality images
US20180053338A1 (en) * 2016-08-19 2018-02-22 Gribbing Oy Method for a user interface
CN106355153B (en) * 2016-08-31 2019-10-18 上海星视度科技有限公司 A kind of virtual objects display methods, device and system based on augmented reality
CN108427498A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 A kind of exchange method and device based on augmented reality
CN108510592B (en) * 2017-02-27 2021-08-31 亮风台(上海)信息科技有限公司 Augmented reality display method of real physical model
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display methods and display device for mobile terminal
CN107833283A (en) * 2017-10-30 2018-03-23 努比亚技术有限公司 A kind of teaching method and mobile terminal
CN107833238B (en) * 2017-11-14 2020-05-01 京东方科技集团股份有限公司 Maximum connected domain marking method, target tracking method and augmented reality/virtual reality device
CN107977080B (en) * 2017-12-05 2021-03-30 北京小米移动软件有限公司 Product use display method and device
CN107943301A (en) * 2017-12-18 2018-04-20 快创科技(大连)有限公司 Experiencing system is viewed and admired in a kind of house-purchase based on AR technologies
WO2019127320A1 (en) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Information processing method and apparatus, cloud processing device, and computer program product
CN107961524A (en) * 2017-12-29 2018-04-27 武汉艺术先生数码科技有限公司 Body-building game and training system based on AR
CN108198044B (en) * 2018-01-30 2021-01-26 京东数字科技控股有限公司 Commodity information display method, commodity information display device, commodity information display medium and electronic equipment
CN108520552A (en) * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108553888A (en) * 2018-03-29 2018-09-21 广州汉智网络科技有限公司 Augmented reality exchange method and device

Also Published As

Publication number Publication date
CN109545003A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109545003B (en) Display method, display device, terminal equipment and storage medium
US11238666B2 (en) Display of an occluded object in a hybrid-reality system
JP2022000640A (en) Information processing device, information processing method, and information processing program
KR101756792B1 (en) System for monitoring and controlling head mount display type virtual reality contents
CN107066082B (en) Display methods and device
CN105807931B (en) A kind of implementation method of virtual reality
CN109313812A (en) Sharing experience with context enhancing
EP3109744A1 (en) Information processing device, information processing method and program
CN109219955A (en) Video is pressed into
JP7339386B2 (en) Eye-tracking method, eye-tracking device, terminal device, computer-readable storage medium and computer program
US20130194402A1 (en) Representing visual images by alternative senses
US20200110560A1 (en) Systems and methods for interfacing with a non-human entity based on user interaction with an augmented reality environment
CN113946211A (en) Method for interacting multiple objects based on metauniverse and related equipment
CN105763829A (en) Image processing method and electronic device
CN109166365A (en) The method and system of more mesh robot language teaching
CN105824417B (en) human-object combination method adopting virtual reality technology
WO2020253716A1 (en) Image generation method and device
CN111589138A (en) Action prediction method, device, equipment and storage medium
US20210245368A1 (en) Method for virtual interaction, physical robot, display terminal and system
CN114067087A (en) AR display method and apparatus, electronic device and storage medium
CN110262662A (en) A kind of intelligent human-machine interaction method
CN114296627B (en) Content display method, device, equipment and storage medium
KR20200079748A (en) Virtual reality education system and method for language training of disabled person
Ruvolo Considering spatial cognition of blind travelers in utilizing augmented reality for navigation
CN113283402B (en) Differential two-dimensional fixation point detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant