CN113129454A - Virtual form display system and method based on artificial intelligence - Google Patents
- Publication number: CN113129454A
- Application number: CN202110447871.8A
- Authority
- CN
- China
- Prior art keywords
- model
- scene
- user
- module
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T19/006 — Mixed reality
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06T13/20 — 3D [Three Dimensional] animation
- G06T13/205 — 3D animation driven by audio data
- G06T15/005 — General purpose rendering architectures
- G06T15/04 — Texture mapping
- G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Abstract
The invention discloses an artificial-intelligence-based virtual form display system and method, relates to the technical field of virtual scene display, and aims to solve two problems of existing virtual form display systems: poor expression of product realism and an inability to accurately identify user interest. The virtual display modeling module models and renders the overall model with 3ds Max. The multilevel information setting module covers the functional divisions and main implementation links of the virtual display modeling module. The virtual display design module freely combines models from the three-dimensional model library according to the user's subjective requirements. The human-computer interaction terminal pops up a small window to play the image-and-text information of an exhibition position when the user moves to it. The model operation management system organizes each scene and its attributes into a graph via a three-dimensional scene graph, with OpenSceneGraph (OSG) representing the scene as a hierarchy. The personalized decision module delivers personalized, targeted content through media and computer technology while the three-dimensional scene model is displayed.
Description
Technical Field
The invention relates to the technical field of virtual scene display, in particular to a virtual form display system and method based on artificial intelligence.
Background
With the continuous progress of digital information technology, 3D modeling and rendering have developed rapidly, and virtual scene display has gradually been applied in many fields. Virtual scene display lays out space using VR technology; its sense of immersion and its interactivity are the key factors to improve. The scene model is obtained by computer analysis and simulates the actual conditions perceived by the human senses, thereby improving the user's real experience of a product. Virtual scene display also solves a problem of space-utilization efficiency: when product manufacturers take part in product exhibitions and sales meetings and products cannot be placed normally because space is limited, the products can be displayed through a virtual scene, switched automatically, or arranged in an independent space, achieving the effect of a physical exhibition, while the distinctive presentation of a virtual scene deepens the user's impression.
However, existing virtual form display systems have some drawbacks. First, they cannot carry out bidirectional interactive communication: the user can only operate according to his own requirements, and the system cannot accurately identify the user's interests. Second, the virtual imaging display effect is poor, and product realism cannot be expressed as the user makes continuous adjustments. The artificial-intelligence-based virtual form display system and method are therefore provided.
Disclosure of Invention
The invention aims to provide an artificial-intelligence-based virtual form display system and method that solve the problems identified in the background art: poor expression of product realism and the inability to accurately identify user interest.
To achieve this purpose, the invention provides the following technical scheme: an artificial-intelligence-based virtual form display system and method comprising a virtual scene display system, wherein the virtual scene display system comprises a virtual display modeling module, a multilevel information setting module, a virtual display design module, a human-computer interaction terminal, a model operation management system, and a personalized decision module, as follows:
the virtual display modeling module: models and renders the overall model with 3ds Max and reads position information from an XML file;
the multilevel information setting module: covers the functional divisions and main implementation links of the virtual display modeling module;
a virtual display design module: a program for the user's subjective operation that freely combines models from the three-dimensional model library according to the user's requirements;
a human-computer interaction terminal: a computer display screen that pops up a small window to play the image-and-text information of an exhibition position when the user moves to it, and provides operation navigation for the user;
a model operation management system: reads the position information of the model files into the system from an XML file, parses the 3DS files with OpenSceneGraph (OSG), and organizes each scene and its attributes into a graph via a three-dimensional scene graph, in which OSG represents the scene as a hierarchy whose nodes form the basic units;
a personalized decision module: delivers personalized, targeted content with the help of media and computer technology while the three-dimensional scene model is displayed.
Preferably, the virtual display modeling module is composed of three-dimensional scene design, three-dimensional model design, and sound design, wherein the three-dimensional scene design covers model making, texture mapping, and animation making, and both the three-dimensional scene design and the three-dimensional model design are modeled and rendered with 3ds Max.
Preferably, the multi-level information setting module comprises navigation information setting, multimedia information setting and system control setting, and is respectively connected with the virtual display design module and the personalized decision-making module, wherein:
setting navigation information: the navigation information setting is connected with the human-computer interaction terminal to provide an operation guide for a user;
multimedia information setting: regulates the images and audio of the three-dimensional scene model;
system control setting: regulates the projection lamps and sound involved.
Preferably, the virtual display design module comprises a three-dimensional scene insertion module and a three-dimensional model selection module: the user operates the human-computer interaction terminal, the three-dimensional scene insertion module first inserts the three-dimensional scene, and the three-dimensional model selection module then selects a suitable three-dimensional model to combine with it.
Preferably, the model operation management system includes model capture, model change, visual positioning, model nodes, and model morphology, wherein:
capturing a model: lets the user select a three-dimensional scene model at the human-computer interaction terminal;
changing a model: changes to the three-dimensional scene model are formed from interactive instructions, comprising translation, rotation, and scaling, sent by the user through the human-computer interaction terminal;
visual positioning: calculates the horizontal position of each frame in the three-dimensional scene model, maps it to plane-image coordinates, and triggers the corresponding image-and-text explanation as the user's viewing angle, i.e., the image collector, moves;
model nodes: OSG represents the scene as a hierarchy, and the nodes in the scene form its basic units;
model morphology: fine-tunes the brightness and texture of the adjusted three-dimensional scene model.
Preferably, the personalized decision module includes a user operation area, a program design module, and the navigation information setting, wherein:
a user operation area: belongs to the virtual display design module and records the three-dimensional scenes and three-dimensional models inserted by the user;
a program design module: receives the user's operation feedback to form an interest record, sends opinion feedback to the user, forms a feedback record from the user's opinions, learns to generate interest-guided recommendations, and displays them in the navigation information setting.
Preferably, the visual positioning may be navigated using the camera manipulators provided by OSG to control the viewing angle.
The artificial-intelligence-based virtual form display method comprises the following steps:
step one, establishing the virtual display model: building the three-dimensional scenes, the three-dimensional models, and the audio files matched to each scene model with 3ds Max, and reading the position information of the three-dimensional scenes and models from an XML file;
step two, operating at the human-computer interaction terminal: the user adds a three-dimensional scene in the virtual display design module according to the navigation information setting and his own preference, and adds three-dimensional models to the selected scene to form a whole; the multimedia information setting regulates the images and audio of the three-dimensional scene model, and the system control setting regulates the projection lamps and sound involved;
step three, when the three-dimensional scene model is changed: translation, rotation, and scaling instructions are sent to the model operation management system through the human-computer interaction terminal; after the model is positioned it is captured, changed according to the corresponding instruction from the viewpoint position, and output through OpenSceneGraph (OSG) to form a model node in the scene, whose brightness and texture are then adjusted;
step four, while the user operates the three-dimensional scene model: the specific content is fed back to the program design module to record the direction of the user's interest, and a learning decision generated from the personal interest record in the core data of the navigation information setting is sent to the program design module. After the user's operation is finished, the user operation area sends a feedback survey to the user, who can respond according to the actual situation of the three-dimensional scene model; this feedback is also sent to the program design module, and the feedback record is statistically updated in the core data of the navigation information setting. Finally, the interest record is combined with the feedback record to provide the user with brand-new interest navigation.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention adopts an intelligent interactive learning system. While the user operates the three-dimensional scene model, the specific content is fed back to the program design module to record the direction of the user's interest, and a learning decision generated from the personal interest record in the core data of the navigation information setting is sent to the program design module. After the user's operation is finished, the user operation area sends a feedback survey to the user, who can respond according to the actual situation of the three-dimensional scene model; this feedback is also sent to the program design module, and the feedback record is statistically updated in the core data of the navigation information setting. Finally, the interest record is combined with the feedback record to provide the user with brand-new interest navigation, further increasing the user's interest in the desired product. In this way, media and computer technology are used during display of the three-dimensional scene model to accelerate and deepen subjective understanding and to deliver personalized, targeted content.
2. The method builds the three-dimensional scenes, the three-dimensional models, and the audio files matched to each scene model with 3ds Max, where the three-dimensional scene covers model making, texture mapping, and animation making. After the whole is built, the virtual display modeling is established and the position information of the three-dimensional scenes and models is read from an XML file. The user operates at the human-computer interaction terminal, adds a three-dimensional scene to the virtual display design module according to preference, and adds three-dimensional models to the selected scene to form a whole; the multimedia information setting regulates the images and audio of the three-dimensional scene model, and the system control setting regulates the projection lamps and sound involved, providing the user with an immersive virtual scene display and improving the user's understanding of the product.
3. The position information of each model file is read into the system from an XML file, and the 3DS files are parsed with OSG. The user sends instructions to the model operation management system through the human-computer interaction terminal; the adjustments are divided into translation, rotation, and scaling. After the user positions a model it is captured, changed according to the corresponding instruction from the viewpoint position, and output through OSG to form a model node in the scene, whose brightness and texture are adjusted to improve the viewing experience. The horizontal position of each frame of the three-dimensional scene model in the actual picture is calculated and mapped to plane-image coordinates, and the movement of the user's viewing angle, i.e., the image collector, triggers the corresponding image-and-text explanation. With this technique, each scene and its attributes are organized into a graph via the three-dimensional scene graph; OSG represents the scene as a hierarchy whose nodes form the basic units, which on one hand organizes and manages the scene well and on the other hand achieves continuous levels of detail of the three-dimensional scene model during rendering.
Drawings
FIG. 1 is a schematic diagram of a virtual scene display system of the present invention;
FIG. 2 is a block diagram of a virtual display modeling module according to the present invention;
FIG. 3 is a block diagram of the model operations management system of the present invention;
fig. 4 is a schematic diagram of the intelligent interactive learning system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Referring to figs. 1-4, an embodiment of the present invention is shown: an artificial-intelligence-based virtual form display system and method comprising a virtual scene display system, wherein the virtual scene display system comprises a virtual display modeling module, a multilevel information setting module, a virtual display design module, a human-computer interaction terminal, a model operation management system, and a personalized decision module, as follows:
the virtual display modeling module: models and renders the overall model with 3ds Max and reads position information from an XML file;
the multilevel information setting module: covers the functional divisions and main implementation links of the virtual display modeling module;
a virtual display design module: a program for the user's subjective operation that freely combines models from the three-dimensional model library according to the user's requirements;
a human-computer interaction terminal: a computer display screen that pops up a small window to play the image-and-text information of an exhibition position when the user moves to it, and provides operation navigation for the user;
a model operation management system: reads the position information of the model files into the system from an XML file, parses the 3DS files with OpenSceneGraph (OSG), and organizes each scene and its attributes into a graph via a three-dimensional scene graph, in which OSG represents the scene as a hierarchy whose nodes form the basic units;
a personalized decision module: delivers personalized, targeted content with the help of media and computer technology while the three-dimensional scene model is displayed.
With this technique, each scene and its attributes are organized into a graph via the three-dimensional scene graph; OSG represents the scene as a hierarchy whose nodes form the basic units, which on one hand organizes and manages the scene well and on the other hand achieves continuous levels of detail of the three-dimensional scene model during rendering.
Furthermore, the virtual display modeling module is composed of three-dimensional scene design, three-dimensional model design, and sound design, where the three-dimensional scene design covers model making, texture mapping, and animation making; both the scene design and the model design are modeled and rendered with 3ds Max, providing the user with an immersive virtual scene display and improving the user's understanding of the product.
Further, the multi-level information setting module comprises navigation information setting, multimedia information setting and system control setting, and is respectively connected with the virtual display design module and the personalized decision-making module, wherein:
setting navigation information: the navigation information setting is connected with the human-computer interaction terminal to provide an operation guide for a user;
multimedia information setting: regulates the images and audio of the three-dimensional scene model;
system control setting: regulates the projection lamps and sound involved.
Furthermore, the virtual display design module comprises a three-dimensional scene insertion module and a three-dimensional model selection module: the user operates the human-computer interaction terminal, the three-dimensional scene is inserted first, and a suitable three-dimensional model is then selected according to the scene for combination.
Further, the model operation management system comprises model capture, model change, visual positioning, model nodes, and model morphology, wherein:
capturing a model: lets the user select a three-dimensional scene model at the human-computer interaction terminal;
changing a model: changes to the three-dimensional scene model are formed from interactive instructions, comprising translation, rotation, and scaling, sent by the user through the human-computer interaction terminal;
visual positioning: calculates the horizontal position of each frame in the three-dimensional scene model, maps it to plane-image coordinates, and triggers the corresponding image-and-text explanation as the user's viewing angle, i.e., the image collector, moves;
model nodes: OSG represents the scene as a hierarchy, and the nodes in the scene form its basic units;
model morphology: fine-tunes the brightness and texture of the adjusted three-dimensional scene model.
Further, the personalized decision module comprises a user operation area, a program design module, and the navigation information setting, wherein:
a user operation area: belongs to the virtual display design module and records the three-dimensional scenes and three-dimensional models inserted by the user;
a program design module: receives the user's operation feedback to form an interest record, sends opinion feedback to the user, forms a feedback record from the user's opinions, learns to generate interest-guided recommendations, and displays them in the navigation information setting.
Further, the visual positioning may be navigated using the camera manipulators provided by OSG to control the viewing angle.
The artificial-intelligence-based virtual form display method comprises the following steps:
step one, establishing the virtual display model: building the three-dimensional scenes, the three-dimensional models, and the audio files matched to each scene model with 3ds Max, and reading the position information of the three-dimensional scenes and models from an XML file;
step two, operating at the human-computer interaction terminal: the user adds a three-dimensional scene in the virtual display design module according to the navigation information setting and his own preference, and adds three-dimensional models to the selected scene to form a whole; the multimedia information setting regulates the images and audio of the three-dimensional scene model, and the system control setting regulates the projection lamps and sound involved;
step three, when the three-dimensional scene model is changed: translation, rotation, and scaling instructions are sent to the model operation management system through the human-computer interaction terminal; after the model is positioned it is captured, changed according to the corresponding instruction from the viewpoint position, and output through OpenSceneGraph (OSG) to form a model node in the scene, whose brightness and texture are then adjusted;
step four, while the user operates the three-dimensional scene model: the specific content is fed back to the program design module to record the direction of the user's interest, and a learning decision generated from the personal interest record in the core data of the navigation information setting is sent to the program design module. After the user's operation is finished, the user operation area sends a feedback survey to the user, who can respond according to the actual situation of the three-dimensional scene model; this feedback is also sent to the program design module, and the feedback record is statistically updated in the core data of the navigation information setting. Finally, the interest record is combined with the feedback record to provide the user with brand-new interest navigation.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Claims (8)
1. An artificial-intelligence-based virtual form display system, comprising a virtual scene display system, characterized in that the virtual scene display system comprises a virtual display modeling module, a multilevel information setting module, a virtual display design module, a human-computer interaction terminal, a model operation management system, and a personalized decision module, wherein:
the virtual display modeling module: models and renders the overall model with 3ds Max and reads position information from an XML file;
the multilevel information setting module: covers the functional divisions and main implementation links of the virtual display modeling module;
a virtual display design module: a program for the user's subjective operation that freely combines models from the three-dimensional model library according to the user's requirements;
a human-computer interaction terminal: pops up a small window to play the image-and-text information of an exhibition position when the user moves to it;
a model operation management system: reads the position information of the model files into the system from an XML file, parses the 3DS files with OpenSceneGraph (OSG), and organizes each scene and its attributes into a graph via a three-dimensional scene graph, in which OSG represents the scene as a hierarchy whose nodes form the basic units;
a personalized decision module: delivers personalized, targeted content with the help of media and computer technology while the three-dimensional scene model is displayed.
2. The artificial intelligence based virtual form presentation system of claim 1, wherein: the virtual display modeling module is composed of three-dimensional scene design, three-dimensional model design and sound design.
3. The artificial intelligence based virtual form presentation system of claim 1, wherein: the multi-level information setting module comprises navigation information setting, multimedia information setting and system control setting, and is respectively connected with the virtual display design module and the personalized decision-making module, wherein:
setting navigation information: the navigation information setting is connected with the human-computer interaction terminal to provide an operation guide for a user;
multimedia information setting: regulates the images and audio of the three-dimensional scene model;
system control setting: regulates the projection lamps and sound involved.
4. The artificial intelligence based virtual form presentation system of claim 1, wherein: the virtual display design module comprises a three-dimensional scene insertion module and a three-dimensional model selection module: the user operates the human-computer interaction terminal, the three-dimensional scene is inserted first, and a suitable three-dimensional model is then selected according to the scene for combination.
5. The artificial intelligence based virtual form presentation system of claim 1, wherein: the model operation management system comprises model capture, model change, visual positioning and model nodes, wherein:
model capture: the user selects a three-dimensional scene model at the human-computer interaction terminal;
model change: the three-dimensional scene model is changed by interaction instructions, comprising translation, rotation and scaling, sent by the user through the human-computer interaction terminal;
visual positioning: the horizontal position of each frame in the scene's three-dimensional model is calculated and mapped to plane-image coordinates, and movement of the user's viewpoint, i.e. the image collector, triggers the corresponding image-and-text explanation;
model nodes: OSG represents the scene as a hierarchy in which the nodes constitute the basic units.
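The model change and model node elements of claim 5 amount to a hierarchical scene graph whose nodes accept translation, rotation and scaling instructions. A simplified sketch of that structure, assuming a flat 2-D position and class/method names of my own choosing rather than the OSG API:

```python
class SceneNode:
    """Hierarchical scene node, in the spirit of the OSG-style scene graph
    of claim 5. Names and the 2-D simplification are illustrative only."""

    def __init__(self, name, position=(0.0, 0.0)):
        self.name = name
        self.position = list(position)  # horizontal (x, z) position
        self.scale = 1.0
        self.rotation_deg = 0.0
        self.children = []

    def add(self, child):
        """Attach a child node and return it, so calls can be chained."""
        self.children.append(child)
        return child

    def translate(self, dx, dy):
        self.position[0] += dx
        self.position[1] += dy

    def rotate(self, degrees):
        self.rotation_deg = (self.rotation_deg + degrees) % 360

    def rescale(self, factor):
        self.scale *= factor

# A root scene with one captured model, receiving the three instruction types.
root = SceneNode("scene")
sofa = root.add(SceneNode("sofa", position=(1.0, 2.0)))
sofa.translate(0.5, -0.5)
sofa.rotate(90)
sofa.rescale(2.0)
```

In OSG itself the equivalent roles are played by transform nodes in the scene graph; this sketch only mirrors the claimed behaviour.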
6. The artificial intelligence based virtual form presentation system of claim 3, wherein: the personalized decision module comprises a user operation area, a program design module and the navigation information setting, wherein:
program design module: receives the user's operation feedback to form an interest record, sends opinion surveys to the user, builds a feedback record from the user's responses, learns from these records to generate interest-guided recommendations, and displays the recommendations in the navigation information setting.
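Claim 6's program design module learns an interest record from user operations and turns it into recommendations. A minimal sketch of that loop (the class, method and category names are illustrative assumptions, not part of the application):

```python
from collections import Counter

class ProgramDesignModule:
    """Sketch of the interest-learning loop of claim 6: record user
    operations, collect opinion feedback, and recommend the categories
    a user interacts with most."""

    def __init__(self):
        self.interest = Counter()   # (user, category) -> operation count
        self.feedback = []          # (user, opinion text)

    def record_operation(self, user, category):
        self.interest[(user, category)] += 1

    def record_feedback(self, user, text):
        self.feedback.append((user, text))

    def recommend(self, user, top_n=2):
        """Return the user's most-operated categories, best first."""
        scores = Counter({c: n for (u, c), n in self.interest.items() if u == user})
        return [category for category, _ in scores.most_common(top_n)]

pdm = ProgramDesignModule()
for category in ["furniture", "furniture", "lighting", "art"]:
    pdm.record_operation("alice", category)
pdm.record_feedback("alice", "would like more lighting options")
```

The recommendations would then be surfaced through the navigation information setting, as the claim describes.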
7. The artificial intelligence based virtual form presentation system of claim 5, wherein: the visual positioning can be navigated using the camera manipulators provided by OSG to control the viewpoint.
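The visual positioning of claims 5 and 7 maps the viewpoint's horizontal position onto plan-image coordinates and triggers an image-and-text explanation when the viewpoint enters a region. A sketch under the assumption of a simple linear world-to-image mapping and circular trigger zones (all names and the sample caption are illustrative):

```python
def to_image_coords(world_xz, world_bounds, image_size):
    """Map a horizontal (x, z) scene position to pixel coordinates on a
    plan-view image via a linear rescaling of the world bounds."""
    xmin, zmin, xmax, zmax = world_bounds
    width, height = image_size
    u = (world_xz[0] - xmin) / (xmax - xmin) * width
    v = (world_xz[1] - zmin) / (zmax - zmin) * height
    return (round(u), round(v))

def caption_for(viewpoint_xz, hotspots, radius=1.0):
    """Return the caption of the first hotspot whose trigger zone the
    viewpoint has entered, or None if it is in none of them."""
    for (hx, hz), text in hotspots:
        if (viewpoint_xz[0] - hx) ** 2 + (viewpoint_xz[1] - hz) ** 2 <= radius ** 2:
            return text
    return None

# A 10 x 10 scene projected onto a 200 x 100 plan image.
pixel = to_image_coords((5.0, 5.0), (0.0, 0.0, 10.0, 10.0), (200, 100))
caption = caption_for((1.1, 0.0), [((1.0, 0.0), "sofa: hand-made walnut frame")])
```

In the claimed system the viewpoint would come from an OSG camera manipulator rather than a hard-coded coordinate.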
8. The artificial intelligence based virtual form presentation method according to any one of claims 1 to 7, characterized in that: the method comprises the following steps:
step one, building the virtual display model: the three-dimensional scenes, the three-dimensional models and the audio files matched with the scene models are created in 3ds Max, and the position information of the three-dimensional scenes and models is read from an XML file;
step two, the user operates the human-computer interaction terminal, adds a three-dimensional scene in the virtual display design module according to preference and the navigation information, and then adds three-dimensional models to the selected scene to form a whole;
step three, when the three-dimensional scene model is to be changed, translation, rotation and scaling instructions are sent to the model operation management system through the human-computer interaction terminal; the model is captured and positioned, changed at the viewpoint position according to the corresponding instruction, and output through OSG (OpenSceneGraph) as a model node in the scene;
step four, while the user operates the scene's three-dimensional model, the specific operations are fed back to the program design module and added to the user's interest profile; the navigation information generates a learning decision from the personal interest record in its core data set and sends it to the program design module; after the user finishes operating, the user operation area sends a feedback survey to the user, who can respond according to the actual condition of the scene's three-dimensional model; this feedback is also sent to the program design module, and the feedback record is statistically updated in the core data set of the navigation information, providing the user with brand-new interest-based navigation.
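Steps one to four above can be sketched end to end as a single session: a scene is inserted, a model is selected, an interaction instruction is dispatched, and the operation is logged for the interest-learning loop. All data shapes and names below are illustrative assumptions:

```python
def run_session():
    """One pass through the claimed method, with in-memory stand-ins for
    the XML loader, model manager, and program design module."""
    # Step two: the user inserts a scene, then selects a model for it.
    scene = {"name": "living-room", "models": {}, "log": []}
    scene["models"]["sofa"] = {"pos": [0.0, 0.0, 0.0]}

    # Step three: a translation instruction is routed to the model
    # operation management system and applied to the captured model.
    op, target, delta = ("translate", "sofa", (1.0, 0.0, -2.0))
    if op == "translate":
        model = scene["models"][target]
        model["pos"] = [p + d for p, d in zip(model["pos"], delta)]

    # Step four: the operation is recorded so the program design module
    # can update the user's interest record and feedback statistics.
    scene["log"].append((op, target))
    return scene

session = run_session()
```

A real implementation would replace the dictionaries with the OSG scene graph and route the log to the personalized decision module described in claim 6.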
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110447871.8A CN113129454A (en) | 2021-04-25 | 2021-04-25 | Virtual form display system and method based on artificial intelligence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113129454A true CN113129454A (en) | 2021-07-16 |
Family
ID=76780316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110447871.8A Pending CN113129454A (en) | 2021-04-25 | 2021-04-25 | Virtual form display system and method based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113129454A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104867142A (en) * | 2015-05-14 | 2015-08-26 | 中国科学院深圳先进技术研究院 | Navigation method based on three-dimensional scene |
US20180113610A1 (en) * | 2014-05-12 | 2018-04-26 | The Research Foundation For The State University Of New York | Gang migration of virtual machines using cluster-wide deduplication |
CN108388585A (en) * | 2017-12-21 | 2018-08-10 | 广东鸿威国际会展集团有限公司 | A kind of 3-D imaging system and method for providing a user viewing plan |
CN109741459A (en) * | 2018-11-16 | 2019-05-10 | 成都生活家网络科技有限公司 | Room setting setting method and device based on VR |
CN110018742A (en) * | 2019-04-03 | 2019-07-16 | 北京八亿时空信息工程有限公司 | A kind of network virtual touring system and its construction method |
CN112184881A (en) * | 2020-09-15 | 2021-01-05 | 南京南瑞继保工程技术有限公司 | Multi-level overall process monitoring method for power equipment |
Non-Patent Citations (2)
Title |
---|
Sun Daojun et al.: "Practical Guide to Management Case Teaching" (《管理案例教学实务指南》), 31 July 2015 *
Han Tong: "Strategy Product Manager in Practice" (《策略产品经理实践》), 31 July 2020 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11151890B2 (en) | 5th-generation (5G) interactive distance dedicated teaching system based on holographic terminal and method for operating same | |
CN105765990B (en) | Method, system and computer medium for distributing video content over a distributed network | |
US20130218542A1 (en) | Method and system for driving simulated virtual environments with real data | |
KR101669897B1 (en) | Method and system for generating virtual studio image by using 3-dimensional object modules | |
CN101438579A (en) | Adaptive rendering of video content based on additional frames of content | |
Sheppard et al. | Advancing interactive collaborative mediums through tele-immersive dance (TED) a symbiotic creativity and design environment for art and computer science | |
CN114363712A (en) | AI digital person video generation method, device and equipment based on templated editing | |
KR20200134575A (en) | System and method for ballet performance via augumented reality | |
CN109032339A (en) | A kind of method and system that real-time intelligent body-sensing is synchronous | |
KR20110045719A (en) | Animation production method, computer readable medium in which program for executing the method is stored and animation production system in online using the method | |
Zhang et al. | The Application of Folk Art with Virtual Reality Technology in Visual Communication. | |
US9620167B2 (en) | Broadcast-quality graphics creation and playout | |
KR20090000729A (en) | System and method for web based cyber model house | |
KR20160136833A (en) | medical education system using video contents | |
CN112017264A (en) | Display control method and device for virtual studio, storage medium and electronic equipment | |
Carraro et al. | Techniques for handling video in virtual environments | |
CN113129454A (en) | Virtual form display system and method based on artificial intelligence | |
US20020188460A1 (en) | System and method for interactive research | |
KR102155345B1 (en) | Making system for video using blocking and method therefor | |
Kardan et al. | Virtual cinematography of group scenes using hierarchical lines of actions | |
Hongyi et al. | The conversion of the production mode of film green screen visual effects in the setting of 5G technology | |
Lugmayr et al. | E= MC2+ 1: a fully digital, collaborative, high-definition (HD) production from scene to screen | |
KR20090126450A (en) | Scenario-based animation service system and method | |
CN103440876B (en) | Asset management during production of media | |
CN110430454B (en) | Multi-device real-time interactive display method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210716 |