CN111752391A - Virtual interaction method and computer readable storage medium - Google Patents
- Publication number
- CN111752391A CN111752391A CN202010621501.7A CN202010621501A CN111752391A CN 111752391 A CN111752391 A CN 111752391A CN 202010621501 A CN202010621501 A CN 202010621501A CN 111752391 A CN111752391 A CN 111752391A
- Authority
- CN
- China
- Prior art keywords
- dimensional
- information
- teaching aid
- user
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2323—Non-hierarchical techniques based on graph theory, e.g. minimum spanning trees [MST] or graph cuts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
Abstract
The invention discloses a virtual interaction method and a computer-readable storage medium, belonging to the technical field of virtual interaction. The method comprises: capturing and processing video of the real environment through a camera device; allowing the user to freely input an object to be identified and to further freely edit the content of that object, or to move a physical teaching aid carrying identification information into the shooting range of the camera device; recognizing the identification information and spatial orientation information of the teaching aid; displaying on a display device the 3D virtual object corresponding to the teaching aid or to the user's freely edited content; and completing the interactive experience through spatial movement and rotation of the identified object. The real environment and the virtual object can be superimposed onto the same scene in real time, providing the user with a more vivid sensory experience and improving the teaching effect by drawing on the human instinct for cognition of three-dimensional space.
Description
Technical Field
The invention belongs to the technical field of virtual interaction, and particularly relates to a virtual interaction method and a computer readable storage medium.
Background
AR (Augmented Reality) has been one of the hotspots of scientific research in recent years. Augmented reality, also called mixed reality, applies virtual information to the real world through computer technology, so that the real environment and virtual objects are superimposed in real time onto the same picture or space. Augmented reality provides information that generally differs from what humans can perceive directly: it presents information from the real world while simultaneously displaying virtual information, and the two kinds of information supplement and overlay each other. In visual augmented reality, a user wearing a head-mounted display can see the surrounding real world overlaid and combined with computer graphics. Augmented reality generates virtual objects that do not exist in the real environment by means of computer graphics and visualization technology, places those virtual objects accurately in the real environment through sensing technology, merges them with the real environment by means of a display device, and presents the user with a new environment having a realistic sensory effect. The augmented reality system therefore has the new characteristics of virtual-real combination, real-time interaction and three-dimensional registration. Augmented reality uses the computer to generate a lifelike virtual environment of sight, hearing, force, touch, motion and other sensations, and immerses the user in that environment through various sensing devices, so that the user and the environment interact naturally and directly. However, there is as yet no precedent for introducing augmented reality technology into multimedia teaching.
At present there are many virtual reality cases on the market. For example, the invention patent with application number 201310275818.X discloses a three-dimensional interactive learning system and method based on augmented reality, in which content is obtained by scanning a card and displayed on the video stream. In addition, in the field of science education, students acquire educational information basically through textbooks, which makes their process of acquiring knowledge dry and tedious; if the contents of the textbooks were displayed in the real world, students would be eager to learn, and the contents would become vivid and flexible during learning.
However, interactive applications in the prior art can only identify designated cards, and the identified content cannot be edited by the user, so the interactive effect is poor. Specifically, current virtual reality schemes are not limited to children's teaching and are also used for entertainment interaction. In the prior art, fixed objects are entered in advance, fixed patterns are displayed after the objects are scanned, and the mobile phone software must be used together with the fixed objects (cards or other items) provided by the merchant. For example, a card is scanned, and the card and the software must be used together to achieve the experience: the card cannot always be carried about, the software downloaded to the mobile phone then has no other use, and the memory resources of the mobile phone are wasted.
Disclosure of Invention
The invention aims to solve the technical problem that the interactive games of the background art can only identify designated cards. To this end, a virtual interaction method and a computer-readable storage medium are provided, which can superimpose the real environment and virtual objects onto the same scene in real time, provide the user with a more vivid sensory experience, and improve the teaching effect by drawing on the human instinct for cognition of three-dimensional space.
The invention adopts the following technical scheme for solving the technical problems:
A virtual interaction method specifically comprises the following steps:
Step 1: capture and process video of the real environment through a camera device, and display the real environment on a display device;
Step 2: the user can freely input an object to be identified and further freely edit the content of the input object; the user can also move a physical teaching aid carrying identification information into the shooting range of the camera device;
Step 3: automatically track and identify the physical teaching aid, recognizing its identification information and spatial orientation information;
Step 4: display on the display device the 3D virtual object corresponding to the physical teaching aid or to the user's freely edited content;
Step 5: the user provides, through interaction, three-dimensional information hidden in the real background image, so that the 3D virtual object can interact with it directly, while the display of a three-dimensional model and animation is activated; the interactive experience is then completed through spatial movement and rotation of the identified object.
As a further preferred scheme of the virtual interaction method of the present invention, step 1 is specifically as follows:
Step 1.1: separate the background and foreground of the collected video data, mark the pixel region of the moving object's contour in the foreground image, perform three-dimensional reconstruction on the foreground image, and analyze the reconstructed moving object to obtain three-dimensional image coordinate data and action semantic information;
Step 1.2: perform three-dimensional rendering and three-dimensional interaction on the three-dimensional coordinate data and action semantic information obtained in step 1.1, and construct a matrix of distance changes among the three-dimensional feature point sets of the motion sequence;
Step 1.3: obtain the feature points of the moving object in the motion sequence from the distance-change matrix, and then display the real environment on the display device.
As a further preferred scheme of the virtual interaction method of the present invention, in step 2 the free editing is specifically as follows:
Step 2.1: receive a multimedia data editing instruction sent by a user, the instruction comprising the multimedia data to be edited and the identification information of the user;
Step 2.2: acquire the user's preference style information from a local Cascading Style Sheet (CSS) according to the identification information, the preference style information describing the user's style settings for different types of data in the multimedia data to be edited;
Step 2.3: edit the multimedia data to be edited according to the preference style information.
As a further preferred scheme of the virtual interaction method of the present invention, in step 1.2 the three-dimensional rendering is specifically as follows: acquire the three-dimensional model corresponding to the identification information, render it to generate the corresponding virtual object, and place the virtual object at the corresponding position in the video image for display according to the spatial orientation information of the physical teaching aid.
The three-dimensional rendering module takes the real-time rendering frame rate as a reference: for every M rendered frames, the camera device acquires one image, which is processed by the identification module and the orientation calculation module. In a first period, from the 0th frame to the (N×M)th frame, the orientation calculation module obtains the spatial orientation information of the physical teaching aid N times, and the three-dimensional rendering module does not activate any virtual object. In a second period, from the (N×M)th frame to the (2N×M)th frame, the three-dimensional rendering module constructs an N-th order Bezier curve from the results of the previous N visual captures, so that the spatial orientation information of the physical teaching aid at any frame can be estimated and applied to the virtual object.
As a further preferred scheme of the virtual interaction method of the present invention, in step 3 the physical teaching aid is automatically tracked and identified, and its identification information and spatial orientation information are recognized, specifically as follows: identify the two-dimensional spatial orientation information of the physical teaching aid in the teaching aid's own spatial coordinate system, and convert it into three-dimensional spatial orientation information in the spatial coordinate system of the camera device according to the calibration parameters of the camera device.
As a further preferred scheme of the virtual interaction method of the present invention, in step 1.3 a spectral clustering method is used to obtain the moving-object feature points in the motion sequence:
Step 1.31: input the pictures, define a moving window, translate the window in the horizontal and vertical directions, and divide each input picture into several blocks;
Step 1.32: compute color histogram statistics over the HSV color space of each block and extract color feature vectors;
Step 1.33: use the color feature vectors of each image as the input of spectral clustering to obtain the spectral clustering result of each image and the label value of each color vector;
Step 1.34: classify the color feature vectors labeled in step 1.33 with a BIRCH classification tree;
Step 1.35: integrate the spectral clustering result using the result of the BIRCH classification tree;
Step 1.36: mark the integrated label values with different colors to obtain the picture segmentation result.
A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the virtual interaction method described above.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effects:
The real environment and the virtual object can be superimposed onto the same scene in real time, providing the user with a more vivid sensory experience; at the same time the teaching effect is improved by drawing on the human instinct for cognition of three-dimensional space, and the identified content can be edited by the user, so the interaction effect is better.
The invention introduces augmented reality technology into the multimedia teaching device, superimposes the real environment and the virtual object onto the same scene in real time, and lets the two kinds of information supplement and overlay each other, bringing the user a brand-new experience with a realistic sensory effect and improving the user's learning and memory abilities by exploiting the human instinct for cognition of three-dimensional space, thereby improving the teaching effect. Better still, the user can move the physical teaching aid at will: the system tracks the position of the teaching aid and controls the virtual object to display synchronously with the motion in real space, so that the user controls the free movement of the virtual object, producing a rich interactive experience and achieving the effect of teaching through play.
Drawings
FIG. 1 is a flow chart of a method of virtual interaction in accordance with the present invention;
FIG. 2 is a flow chart of the present invention for video capture and processing of a real environment by a camera device;
FIG. 3 is a free-form editing flow diagram of the present invention;
FIG. 4 is a flowchart of obtaining feature points of a moving object in a motion sequence by using a spectral clustering method according to the present invention.
Detailed Description
The technical scheme of the invention is further explained in detail below with reference to the accompanying drawings:
the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A virtual interaction method, as shown in fig. 1, specifically comprises the following steps:
Step 1: capture and process video of the real environment through a camera device, and display the real environment on a display device;
Step 2: the user can freely input an object to be identified and further freely edit the content of the input object; the user can also move a physical teaching aid carrying identification information into the shooting range of the camera device;
Step 3: automatically track and identify the physical teaching aid, recognizing its identification information and spatial orientation information;
Step 4: display on the display device the 3D virtual object corresponding to the physical teaching aid or to the user's freely edited content;
Step 5: the user provides, through interaction, three-dimensional information hidden in the real background image, so that the 3D virtual object can interact with it directly, while the display of a three-dimensional model and animation is activated; the interactive experience is then completed through spatial movement and rotation of the identified object.
As shown in fig. 2, step 1 is specifically as follows:
Step 1.1: separate the background and foreground of the collected video data, mark the pixel region of the moving object's contour in the foreground image, perform three-dimensional reconstruction on the foreground image, and analyze the reconstructed moving object to obtain three-dimensional image coordinate data and action semantic information;
Step 1.2: perform three-dimensional rendering and three-dimensional interaction on the three-dimensional coordinate data and action semantic information obtained in step 1.1, and construct a matrix of distance changes among the three-dimensional feature point sets of the motion sequence;
Step 1.3: obtain the feature points of the moving object in the motion sequence from the distance-change matrix, and then display the real environment on the display device.
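A minimal sketch of the distance-change matrix of steps 1.2 and 1.3 (an illustrative reading, not the patented implementation): each frame is assumed to be an N×3 array of reconstructed three-dimensional feature-point coordinates; the matrix is the element-wise change of pairwise distances between two frames, and a point's summed change serves as its motion score.

```python
import numpy as np

def pairwise_distances(points):
    """N x N matrix of Euclidean distances between 3-D feature points."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def distance_change_matrix(frame_a, frame_b):
    """Matrix of distance changes between the three-dimensional feature
    point sets of two frames in the motion sequence (step 1.2)."""
    return np.abs(pairwise_distances(frame_b) - pairwise_distances(frame_a))

def motion_scores(frame_a, frame_b):
    """Per-point motion score (step 1.3): total change of a point's
    distances to all other points; large scores mark moving points."""
    return distance_change_matrix(frame_a, frame_b).sum(axis=1)
```

For a static scene every entry of the matrix is zero; a point that moves relative to the rest produces a row and column of large entries, so thresholding or ranking the scores selects the moving-object feature points.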
In step 2, as shown in fig. 3, the free editing is specifically as follows:
Step 2.1: receive a multimedia data editing instruction sent by a user, the instruction comprising the multimedia data to be edited and the identification information of the user;
Step 2.2: acquire the user's preference style information from a local Cascading Style Sheet (CSS) according to the identification information, the preference style information describing the user's style settings for different types of data in the multimedia data to be edited;
Step 2.3: edit the multimedia data to be edited according to the preference style information.
Information on the personal editing preferences of a user, i.e. an editor — that is, preference style information — is stored in advance in a Cascading Style Sheet (hereinafter referred to as CSS) local to the editor. The preference style information describes the user's style settings for different types of data in the multimedia data to be edited; for example, the editor may prefer body text set in SimSun at size 4 and title text set in SimHei at size 3. After the editor receives a multimedia data editing instruction, the preference style information of the user is acquired from the CSS according to the identification information of the user contained in the instruction, and the multimedia data to be edited is edited according to that preference style information.
Furthermore, after finishing editing, the editor can send the edited multimedia data to the server for storage. It should be noted that in this embodiment the multimedia data sent to the server does not include the editor's preference style information; for example, the multimedia data is uploaded to the server in a certain default format for storage.
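The preference-style lookup of steps 2.1–2.3 can be sketched as follows. The data layout is hypothetical: a plain dictionary stands in for the editor-local CSS, and the user id, type names and style keys are invented for illustration.

```python
# Hypothetical stand-in for the editor-local CSS described above: per-user
# preference style information, keyed by the type of data being edited.
PREFERENCE_STYLES = {
    "user-001": {
        "body_text": {"font": "SimSun", "size": "4"},
        "title": {"font": "SimHei", "size": "3"},
    },
}

def edit_multimedia(user_id, items, styles=PREFERENCE_STYLES):
    """Steps 2.1-2.3: look up the user's preference style information by
    their identification information and apply it to each item of the
    multimedia data; items with no stored style pass through unchanged."""
    prefs = styles.get(user_id, {})
    return [{**item, **prefs.get(item.get("type"), {})} for item in items]
```

Because the styles are applied locally, the edited items can then be uploaded in a default format without the preference information, as the embodiment describes.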
In step 1.2, the three-dimensional rendering is specifically as follows: acquire the three-dimensional model corresponding to the identification information, render it to generate the corresponding virtual object, and place the virtual object at the corresponding position in the video image for display according to the spatial orientation information of the physical teaching aid.
The three-dimensional rendering module takes the real-time rendering frame rate as a reference: for every M rendered frames, the camera device acquires one image, which is processed by the identification module and the orientation calculation module. In a first period, from the 0th frame to the (N×M)th frame, the orientation calculation module obtains the spatial orientation information of the physical teaching aid N times, and the three-dimensional rendering module does not activate any virtual object. In a second period, from the (N×M)th frame to the (2N×M)th frame, the three-dimensional rendering module constructs an N-th order Bezier curve from the results of the previous N visual captures, so that the spatial orientation information of the physical teaching aid at any frame can be estimated and applied to the virtual object.
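The Bezier estimation of the second period can be sketched with De Casteljau's algorithm. This is an illustrative reading, not the patent's exact formula: the captured orientations act as control points, and the mapping from rendering frame to curve parameter is assumed linear over the period.

```python
import numpy as np

def bezier_point(control_points, t):
    """De Casteljau evaluation of the Bezier curve defined by the given
    control points, at parameter t in [0, 1]."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def estimate_orientation(captures, frame, period_frames):
    """Estimated spatial position/orientation of the teaching aid at an
    arbitrary frame of the second period, using the previous N captures
    as Bezier control points (frame-to-parameter mapping assumed linear)."""
    t = min(max(frame / float(period_frames), 0.0), 1.0)
    return bezier_point(captures, t)
```

Evaluating the curve per rendered frame gives a smooth pose for the virtual object between the sparser visual captures, which is the point of the two-period scheme.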
In step 3, the physical teaching aid is automatically tracked and identified, and its identification information and spatial orientation information are recognized, specifically as follows: identify the two-dimensional spatial orientation information of the physical teaching aid in the teaching aid's own spatial coordinate system, and convert this two-dimensional spatial orientation information into three-dimensional spatial orientation information in the spatial coordinate system of the camera device according to the calibration parameters of the camera device.
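One common way to realize the conversion from two-dimensional image coordinates to the camera device's three-dimensional coordinate system is pinhole back-projection with the calibration matrix. The patent does not spell out the formula, so this sketch assumes a known intrinsic matrix K and a known depth of the teaching aid along the optical axis.

```python
import numpy as np

def image_to_camera(u, v, depth, K):
    """Back-project a 2-D image point (u, v) into the camera device's
    3-D coordinate system with the pinhole model, given the calibration
    matrix K and the point's depth along the optical axis."""
    fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]   # principal point
    return np.array([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth])
```

In practice the depth would come from the known physical size of the teaching aid or from full marker pose estimation; either way the calibration parameters fix the 2-D to 3-D mapping.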
In step 1.3, as shown in fig. 4, the moving-object feature points in the motion sequence are obtained by a spectral clustering method:
Step 1.31: input the pictures, define a moving window, translate the window in the horizontal and vertical directions, and divide each input picture into several blocks;
Step 1.32: compute color histogram statistics over the HSV color space of each block and extract color feature vectors;
Step 1.33: use the color feature vectors of each image as the input of spectral clustering to obtain the spectral clustering result of each image and the label value of each color vector;
Step 1.34: classify the color feature vectors labeled in step 1.33 with a BIRCH classification tree:
A. scan a feature vector labeled by spectral clustering and descend recursively from the root node to a leaf node according to the minimum-distance principle;
B. judge whether the CF entry in the leaf node closest to the new data point can absorb it, i.e. whether the diameter D of the CF entry merged with the new data point is smaller than the threshold T;
C. if the diameter D is smaller than the threshold T, add the data point to that CF entry and update all CF information on the path from the root node to the leaf node. If it is not, but the leaf node still has room for another CF entry — that is, its number of CF entries is smaller than the branching factor B, the maximum number of CF entries a tree node can hold — insert the new data point as a new CF entry of the leaf node and update the path. Otherwise, split the leaf node: take the two farthest CF entries among all CF entries of the leaf node and the new data point as seeds, make them new children of the parent of the original node, redistribute the remaining CF entries to the new leaf nodes by the minimum-distance principle, delete the original leaf node and update the tree (a split of the root node increases the height of the tree by one). Then return to step A for the next feature vector;
D. take the resulting leaf-node assignment as the classification result;
Step 1.35: integrate the spectral clustering result using the result of the BIRCH classification tree;
Step 1.36: mark the integrated label values with different colors to obtain the picture segmentation result.
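Steps 1.31–1.34 can be sketched in plain NumPy for the two-cluster case. This is an illustrative reduction, not the patented implementation: spectral clustering is shown as a Fiedler-vector bipartition of an RBF affinity graph, and of the BIRCH tree only the step-B absorption test is shown, with the diameter D taken as the maximum pairwise distance; the block size, bin count and gamma are arbitrary.

```python
import numpy as np

def hue_histograms(image_hsv, block=8, bins=8):
    """Steps 1.31-1.32: translate a block x block window horizontally and
    vertically over the HSV image and compute a hue histogram (the color
    feature vector) for each block."""
    h, w = image_hsv.shape[:2]
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(image_hsv[y:y + block, x:x + block, 0],
                                   bins=bins, range=(0, 256))
            feats.append(hist / hist.sum())
    return np.array(feats)

def spectral_bipartition(feats, gamma=1.0):
    """Step 1.33 for two clusters: RBF affinity, symmetric normalized
    Laplacian, and a label from the sign of the Fiedler vector."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-gamma * d2)
    d = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)    # second-smallest: Fiedler vector

def can_absorb(cf_points, new_point, threshold):
    """BIRCH step B: the closest CF entry can absorb the new data point
    if the merged cluster's diameter D (here: max pairwise distance)
    stays below the threshold T."""
    merged = np.vstack([cf_points, [new_point]])
    dists = np.linalg.norm(merged[:, None] - merged[None, :], axis=-1)
    return float(dists.max()) < threshold
```

Coloring each block by its final label then yields the picture segmentation of step 1.36.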
A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the virtual interaction method described above.
The above-mentioned embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field can still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions of some technical features, within the technical scope disclosed in the present application; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are all intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention. While the embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
Claims (7)
1. A virtual interaction method, characterized in that the method comprises the following steps:
step 1, capturing and processing video of the real environment through a camera device, and displaying the real environment on a display device;
step 2, allowing the user to freely input an object to be identified and to further freely edit the content of that object; the user may also move a physical teaching aid bearing identification information into the capture range of the camera device;
step 3, automatically tracking and recognizing the physical teaching aid, and identifying its identification information and spatial orientation information;
step 4, displaying on the display device the 3D virtual object corresponding to the physical teaching aid or to the user's freely edited content;
step 5, through interaction, presenting to the user three-dimensional information hidden in the real background image, so that the 3D virtual object can interact directly with that three-dimensional information while the display of a three-dimensional model and animation is activated; the user then completes the interactive experience by spatially moving and rotating the recognized object.
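The five steps of claim 1 can be sketched as a minimal interaction loop. All names below (`VirtualInteractionSession`, `detect_aids`, the marker-dict frame format) are illustrative assumptions for demonstration, not part of the patent text; a real system would use an actual camera feed and marker detector.

```python
# Hypothetical sketch of the claim-1 loop: capture a frame, recognise
# teaching aids (step 3), and attach a pose to each 3D virtual object
# (step 4). Frame and marker formats are assumptions.

class VirtualInteractionSession:
    def __init__(self):
        self.virtual_objects = {}  # marker id -> 3D virtual object state

    def step(self, frame):
        """One pass of steps 1-5: capture, identify, overlay, interact."""
        aids = self.detect_aids(frame)  # step 3: track + identify
        for marker_id, pose in aids:
            obj = self.virtual_objects.setdefault(marker_id, {"id": marker_id})
            obj["pose"] = pose          # step 4: place the 3D virtual object
        return aids

    def detect_aids(self, frame):
        # Stand-in detector: any entry in frame["markers"] is treated as a
        # recognised teaching aid with an (x, y, z) pose.
        return [(m["id"], m["pose"]) for m in frame.get("markers", [])]

session = VirtualInteractionSession()
found = session.step({"markers": [{"id": "aid-7", "pose": (0.1, 0.2, 0.5)}]})
```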
2. The virtual interaction method of claim 1, characterized in that step 1 specifically comprises:
step 1.1, separating the background and foreground of the collected video data, marking the pixel region of the moving object's outline in the foreground image, performing three-dimensional reconstruction on the foreground image, and analyzing the reconstructed moving object to obtain three-dimensional image coordinate data and action semantic information;
step 1.2, performing three-dimensional rendering and three-dimensional interaction on the three-dimensional coordinate data and action semantic information obtained in step 1.1, and constructing a matrix of distance changes among the three-dimensional feature point sets in the motion sequence;
step 1.3, obtaining the feature points of the moving object in the motion sequence from the distance-change matrix, and displaying the real environment on the display device.
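The distance-change matrix of steps 1.2-1.3 can be sketched in a few lines: compute pairwise distances within each frame's 3D feature-point set, difference them across frames, and flag the points whose distances changed. The function names and threshold are assumptions for illustration, not the patent's implementation.

```python
import math

def distance_matrix(points):
    """Pairwise Euclidean distances within one 3D feature-point set."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]

def distance_change_matrix(frame_a, frame_b):
    """Matrix of how each pairwise distance changed between two frames."""
    da, db = distance_matrix(frame_a), distance_matrix(frame_b)
    n = len(frame_a)
    return [[db[i][j] - da[i][j] for j in range(n)] for i in range(n)]

def moving_points(change, threshold):
    """Indices of points whose distance to some other point changed beyond threshold."""
    return [i for i, row in enumerate(change)
            if any(abs(v) > threshold for v in row)]

frame_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
frame_b = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # point 1 moved
change = distance_change_matrix(frame_a, frame_b)
```

Note that a point which itself stayed still is still flagged if a moving point approached or receded from it, since only relative distances are observed.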
3. The virtual interaction method of claim 1, characterized in that in step 2 the free editing specifically comprises:
step 2.1, receiving a multimedia-data editing instruction sent by the user, the instruction containing the multimedia data to be edited and the user's identification information;
step 2.2, obtaining the user's preference style information from a local Cascading Style Sheet (CSS) according to the identification information, the preference style information describing the user's style settings for the different types of data in the multimedia data to be edited;
step 2.3, editing the multimedia data according to the preference style information.
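Steps 2.1-2.3 amount to a per-user style lookup applied to each item of the edit. The sketch below models the stored CSS as a nested dict keyed by user and item type; the store layout, user id, and field names are all assumptions for demonstration.

```python
# Hypothetical preference store standing in for the local CSS of step 2.2.
PREFERENCE_STORE = {
    "user-42": {"text": {"font-size": "14px", "color": "#333"},
                "image": {"border": "1px solid #ccc"}},
}

def apply_preferences(edit_request):
    """Steps 2.1-2.3: look up the user's styles and attach them per item type."""
    styles = PREFERENCE_STORE.get(edit_request["user_id"], {})
    edited = []
    for item in edit_request["items"]:
        styled = dict(item)                           # keep the original content
        styled["style"] = styles.get(item["type"], {})  # step 2.3: apply style
        edited.append(styled)
    return edited

result = apply_preferences({
    "user_id": "user-42",
    "items": [{"type": "text", "content": "hello"}],
})
```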
4. The virtual interaction method of claim 2, characterized in that in step 1.2 the three-dimensional rendering is specifically as follows: the three-dimensional model corresponding to the identification information is acquired and rendered to generate the corresponding virtual object, and the virtual object is placed, according to the spatial orientation information of the physical teaching aid, at the corresponding position in the video image for display;
the three-dimensional rendering module takes the real-time rendering frame rate as its reference: the camera device acquires one image every M rendered frames, and the image is processed by the recognition module and the orientation calculation module; in a first period, from frame 0 to frame N×M, the orientation calculation module obtains the spatial orientation information of the physical teaching aid N times, and the three-dimensional rendering module does not activate any virtual object; in a second period, from frame N×M to frame 2N×M, the three-dimensional rendering module constructs a Bezier curve from the results of the preceding N visual captures, so that the spatial orientation information of the physical teaching aid in any frame can be estimated and applied to the virtual object.
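The Bezier-curve estimation of claim 4 can be illustrated with De Casteljau's algorithm: the N captured positions act as control points, and evaluating the curve at an intermediate parameter t gives a smoothed pose for frames between captures. This is a generic Bezier evaluation, not the patent's specific implementation; the capture values are made up.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t (0..1) by repeated linear
    interpolation of the control points (De Casteljau's algorithm).
    control_points: list of (x, y, z) tuples from the last N visual captures."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Estimate the teaching aid's position midway through the capture window.
captures = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0)]
midpoint = de_casteljau(captures, 0.5)
```

The estimated pose never jumps between capture samples, which is the point of interpolating rather than snapping the virtual object to each new capture.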
5. The virtual interaction method of claim 1, characterized in that in step 3 the automatic tracking and recognition of the physical teaching aid, and the identification of its identification information and spatial orientation information, are specifically as follows: obtaining the two-dimensional spatial orientation information of the physical teaching aid in the teaching aid's own coordinate system; and converting the two-dimensional spatial orientation information into three-dimensional spatial orientation information in the camera device's coordinate system according to the calibration parameters of the camera device.
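The 2D-to-3D conversion of claim 5 can be sketched under a simple pinhole camera model: with known intrinsics, an image point back-projects to a ray, and a depth value fixes the 3D point in camera coordinates. The calibration parameters (fx, fy, cx, cy) and the depth below are assumptions; a real system would recover depth from the marker's known physical size or a full extrinsic solve.

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D image point (u, v) to 3D camera coordinates at a
    given depth, using pinhole intrinsics: focal lengths fx, fy and
    principal point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A marker seen at pixel (960, 540), 2 m away, with illustrative intrinsics.
point = pixel_to_camera(u=960, v=540, depth=2.0, fx=800.0, fy=800.0, cx=640.0, cy=360.0)
```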
6. The virtual interaction method of claim 2, characterized in that in step 1.3 the feature points of the moving object in the motion sequence are obtained by spectral clustering:
step 1.31, inputting the pictures, defining a moving window that translates horizontally or vertically, and dividing each input picture into blocks;
step 1.32, computing a color histogram over the HSV color space of each block and extracting its color feature vector;
step 1.33, feeding the color feature vector of each image into spectral clustering, obtaining the spectral-clustering result for each image and the label value of each corresponding color vector;
step 1.34, classifying the label-marked color feature vectors of step 1.33 with a BIRCH classification tree;
step 1.35, integrating the spectral-clustering result with the result of the BIRCH classification tree;
step 1.36, marking the integrated label values with different colors to obtain the picture segmentation result.
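The feature-extraction stage (steps 1.31-1.32) can be shown with the standard library alone: quantise each block pixel's hue into buckets and return normalised counts as the block's color feature vector. The bin count and pixel format are assumptions; the spectral clustering and BIRCH stages (steps 1.33-1.35) would typically come from a library such as scikit-learn (`SpectralClustering`, `Birch`) and are not reproduced here.

```python
import colorsys

def block_hsv_histogram(block, bins=4):
    """Color-histogram feature vector for one image block: quantise each
    RGB pixel's hue into `bins` buckets and return normalised counts."""
    hist = [0] * bins
    for r, g, b in block:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)  # h in [0, 1)
        hist[min(int(h * bins), bins - 1)] += 1
    total = len(block) or 1
    return [count / total for count in hist]

# A pure-red block: every pixel has hue 0, so all mass lands in bucket 0.
feature = block_hsv_histogram([(255, 0, 0)] * 8)
```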
7. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of the virtual interaction method as claimed in any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010621501.7A CN111752391A (en) | 2020-06-30 | 2020-06-30 | Virtual interaction method and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111752391A true CN111752391A (en) | 2020-10-09 |
Family
ID=72680246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010621501.7A Pending CN111752391A (en) | 2020-06-30 | 2020-06-30 | Virtual interaction method and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111752391A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114935973A (en) * | 2022-04-11 | 2022-08-23 | 北京达佳互联信息技术有限公司 | Interactive processing method, device, equipment and storage medium |
CN117104179A (en) * | 2023-02-16 | 2023-11-24 | 荣耀终端有限公司 | Vehicle control system, method, electronic equipment and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103366610A (en) * | 2013-07-03 | 2013-10-23 | 熊剑明 | Augmented-reality-based three-dimensional interactive learning system and method |
CN105159900A (en) * | 2014-06-13 | 2015-12-16 | 北大方正集团有限公司 | Multimedia data editing method and editor |
CN108154157A (en) * | 2017-12-06 | 2018-06-12 | 西安交通大学 | It is a kind of based on integrated quick Spectral Clustering |
CN108417116A (en) * | 2018-03-23 | 2018-08-17 | 四川科华天府科技有限公司 | A kind of intelligent classroom designing system and design method of visual edit |
CN111198616A (en) * | 2020-03-11 | 2020-05-26 | 广州志胜游艺设备有限公司 | Virtual scene generation method applied to interactive projection game |
2020-06-30: application CN202010621501.7A filed (status: Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11790589B1 (en) | System and method for creating avatars or animated sequences using human body features extracted from a still image | |
US20190220983A1 (en) | Image matting using deep learning | |
Amin et al. | Comparative study of augmented reality SDKs | |
JP2019528544A (en) | Method and apparatus for producing video | |
CN103916621A (en) | Method and device for video communication | |
CN111638784B (en) | Facial expression interaction method, interaction device and computer storage medium | |
Mokhtar et al. | Development of mobile-based augmented reality colouring for preschool learning | |
CN111752391A (en) | Virtual interaction method and computer readable storage medium | |
WO2023197780A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN113838158B (en) | Image and video reconstruction method and device, terminal equipment and storage medium | |
CN117333645A (en) | Annular holographic interaction system and equipment thereof | |
CN111655148B (en) | Heart type analysis method based on augmented reality and intelligent equipment | |
Alshi et al. | Interactive augmented reality-based system for traditional educational media using marker-derived contextual overlays | |
Pikula et al. | FlexComb: a facial landmark-based model for expression combination generation | |
Liu | Light image enhancement based on embedded image system application in animated character images | |
Wang et al. | AI Promotes the Inheritance and Dissemination of Chinese Boneless Painting——Research on Design Practice from Interdisciplinary Collaboration | |
Zeng et al. | Design and Implementation of Virtual Real Fusion Metaverse Scene Based on Deep Learning | |
Jain | Attention-guided algorithms to retarget and augment animations, stills, and videos | |
Umbelino | street ar t Development of an Augmented Reality Application in the Context of Street Art | |
Satpati | Animation Techniques and Trends in Digital Media | |
CN112560556A (en) | Action behavior image generation method, device, equipment and storage medium | |
CN117504296A (en) | Action generating method, action displaying method, device, equipment, medium and product | |
Li et al. | Dynamic Adjustment and CAD Real-time Rendering Algorithm for Advertising Art Design based on Machine Vision | |
Xia | Application of sensor imaging technology based on image fusion algorithm in sculpture design | |
CN117788670A (en) | Digital artistic creation auxiliary system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||