CN107507484B - A method of multi-person collaborative practical training teaching based on virtual and real environments - Google Patents

A method of multi-person collaborative practical training teaching based on virtual and real environments

Info

Publication number
CN107507484B
CN107507484B · CN201710706698.2A
Authority
CN
China
Prior art keywords
virtual
model
curved surface
person
full
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710706698.2A
Other languages
Chinese (zh)
Other versions
CN107507484A (en)
Inventor
刘志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Teng Monkey Technology Co Ltd
Original Assignee
Guangzhou Teng Monkey Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Teng Monkey Technology Co Ltd
Priority to CN201710706698.2A
Publication of CN107507484A
Application granted
Publication of CN107507484B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 — Simulators for teaching or training purposes
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/006 — Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method of multi-person collaborative practical training teaching based on virtual and real environments, comprising the following control steps: (1) each student selects the required practical training course, and the system generates a corresponding virtual scene for each student according to the virtual scene signal the student issues; (2) according to the character type selected by each student, the system generates a low-polygon virtual character model in the virtual scene and then applies texture rendering to it, thereby creating a complete virtual character model; (3) the system converts the action signals that each student issues to his or her complete virtual character model into the corresponding movements of that model; (4) the system enables the different complete virtual character models in the same virtual scene to collaborate in completing the practical training course task. With this method, a practical training environment can be virtualized to realize multi-person collaborative practical training teaching, featuring low development cost, simple maintenance, rich training content and strong interactivity.

Description

A method of multi-person collaborative practical training teaching based on virtual and real environments
Technical field
The present invention relates to the field of virtual and real education, and more specifically to a method of multi-person collaborative practical training teaching based on virtual and real environments.
Background art
With the rapid development of society, the state attaches great importance to the practical training of students' professional knowledge and skills. However, many schools face a shortage of training funds, so in practice much of the training differs considerably from real conditions and students cannot be exercised sufficiently. In many high-risk and extreme training scenarios, the training itself is dangerous, training materials are wasted in large quantities, and training equipment is easily scrapped; this not only puts pressure on schools in terms of safety and funding, but also easily leaves students' training inadequate. It is therefore urgent to design a practical training teaching method based on virtual and real environments. Traditional virtual-and-real practical training methods have many shortcomings: the underlying hardware facilities are costly, later maintenance workloads are heavy, training content is limited, interactivity is weak, and multi-person collaboration mechanisms are lacking. There is thus a pressing need for a multi-person collaborative practical training method with low development cost, simple maintenance, rich training content and strong interactivity.
Summary of the invention
The purpose of the present invention is to provide a method of multi-person collaborative practical training teaching based on virtual and real environments, by which a practical training environment can be virtualized to realize multi-person collaborative practical training teaching, featuring low development cost, simple maintenance, rich training content and strong interactivity.
The technical solution adopted by the invention is as follows:
A method of multi-person collaborative practical training teaching based on virtual and real environments, including the following control steps:
(1) each student selects the required practical training course and issues a signal to create a virtual scene, and the system generates a corresponding virtual scene for each student according to the received virtual scene signal;
(2) according to the character type selected by each student, the system generates a low-polygon virtual character model in the virtual scene generated in step (1), and then renders textures onto the generated low-polygon virtual character model, thereby creating in the virtual scene a complete virtual character model of the character type selected by that student;
(3) the system converts the action signals that each student issues to the complete virtual character model he or she has created into the corresponding movements of that model;
(4) the system enables the complete virtual character models created by the different students in the same virtual scene to collaborate in completing the practical training course task.
Preferably, in step (1), generating a virtual scene includes the following steps:
(1.1) first generating a simple virtual scene model;
(1.2) determining the spatial position of each object in the virtual scene;
(1.3) generating object models from the determined spatial positions of the objects in the virtual scene, thereby obtaining the complete virtual scene.
Preferably, in step (1.3), when generating an object model from the determined spatial position of an object, the variation of the surface of each object model in the virtual scene is first represented by a distance change function: for each object surface of the object model, the distance from a point inside the object surface to the surface is set to be negative, and the distance from a point outside the object surface to the surface is set to be positive; the distance change function is
d(p, s) = inf d(p, q),
where p denotes a point inside the object surface, s denotes the object surface before the variation, θ denotes the set of points inside the object surface, d(p, s) denotes the distance from the point to the object surface, and q denotes the object surface after the variation;
the overall change of the object model is then expressed by a localized variation function, in which c_t is the variation coefficient, d(p_i, q_j) is the new variation distance of the object surface, n denotes the total number of interior points involved in the surface variation, and N denotes the total number of varied surfaces.
Preferably, in step (2), when rendering textures onto the generated low-polygon virtual character model, the viewing position of the generated low-polygon virtual character model is compared with a set threshold to decide whether interpolation is needed: if the viewing position of the generated low-polygon virtual character model is greater than the threshold, no interpolation is needed; if the viewing position of the generated low-polygon virtual character model is equal to or less than the threshold, interpolation is needed.
Preferably, the interpolation is a derivation performed on the height values of the midpoints of the four grid edges of the generated low-polygon virtual character model,
where y denotes the height value, x denotes the abscissa of the midpoint, and δ is a parameter with value 0.6;
a variable d is defined from these quantities, and the actual modelling parameter A is calculated from
A² = log M(|y|) − log d − A,
while the true compensating parameter B is calculated by a corresponding formula;
after the actual parameter and the true compensating parameter of the modelling have been obtained, the generated low-polygon virtual character model is remodelled with them to obtain the complete virtual character model.
Preferably, in step (3), the action signals that a student issues to the complete virtual character model he or she has created include a walking movement control signal and an arm movement control signal.
Preferably, in step (4), when the complete virtual character models created by the different students move in the same virtual scene, the system predicts the coordinate position of each model at the next moment from its moving direction, moving speed and travel time at its current position, and updates the position information of the complete virtual character model before any transmitted position information is received.
Preferably, when predicting the coordinate position of each complete virtual character model at the next moment, the following prediction equations are used:
X_n = X_{n-1} + dR_{n-1} sin θ_{n-1},
Y_n = Y_{n-1} + dR_{n-1} cos θ_{n-1},
θ_n = θ_{n-1} + dθ_n,
where (X_n, Y_n) denotes the position coordinates of the complete virtual character model at the n-th moment, dR_{n-1} denotes the change in distance of the complete virtual character model from the (n-1)-th moment to the n-th moment, θ_{n-1} denotes its heading angle at the (n-1)-th moment, dθ_n denotes the change in angle from the (n-1)-th moment to the n-th moment, and d is the distance displacement coefficient, with value 0.5.
Preferably, in step (4), the ground, the equipment objects and each complete virtual character model are all set with the floor attribute.
Compared with the prior art, the beneficial effects of the invention are as follows:
In the method of multi-person collaborative practical training teaching based on virtual and real environments according to the invention, each student selects the required practical training course and issues a signal to create a virtual scene, and the system generates a corresponding virtual scene according to the received virtual scene signal; according to the character type selected by each student, the system generates a low-polygon virtual character model in the generated virtual scene and then renders textures onto it, thereby creating in the virtual scene a complete virtual character model of the character type selected by that student; the system converts the action signals that each student issues to the complete virtual character model he or she has created into the corresponding movements of that model; and the system enables the complete virtual character models created by the different students in the same virtual scene to collaborate in completing the practical training course task. With this method a practical training environment can be virtualized to realize multi-person collaborative practical training teaching; compared with conventional processing techniques, the generation of virtual scenes and complete virtual character models is simpler, the computation is more efficient and more accurate, the development cost is lower, and maintenance is simpler. The system can generate virtual scenes for students according to the content of the training course, so the teaching content is richer and the interactivity is stronger.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to specific embodiments, which do not constitute any limitation of the invention.
The method of multi-person collaborative practical training teaching based on virtual and real environments according to the invention includes the following control steps:
(1) Each student selects the required practical training course and issues a signal to create a virtual scene, and the system generates a corresponding virtual scene according to the received virtual scene signal, including, for example, the ground, buildings, objects and training equipment; these elements are the basic content of the teaching and training scene.
Generating a virtual scene includes the following steps:
(1.1) First generating a simple virtual scene model, i.e., first generating the basic elements of the virtual scene such as the ground.
(1.2) Determining the spatial position of each object in the virtual scene.
(1.3) Generating object models from the determined spatial positions of the objects in the virtual scene, thereby obtaining the complete virtual scene. In terms of virtual scene generation, when existing techniques deal with sufficiently complex shapes and structures, they generally need to process relatively large volumes of data and usually cannot build the entire scene in one pass. The traditional approach is to partition the scene into blocks to improve computation speed and modelling efficiency. The present method instead proposes a point-based reconstruction mode, in which local variations of an object's surface are assembled into the overall structure and the variation distances are linearized, so that the data can be computed efficiently and accurately.
Specifically, when generating an object model from the determined spatial position of an object, the variation of the surface of each object model in the virtual scene is first represented by a distance change function: for each object surface of the object model, the distance from a point inside the object surface to the surface is set to be negative, and the distance from a point outside the object surface to the surface is set to be positive. The distance change function is
d(p, s) = inf d(p, q),
where p denotes a point inside the object surface, s denotes the object surface before the variation, θ denotes the set of points inside the object surface, d(p, s) denotes the distance from the point to the object surface, and q denotes the object surface after the variation.
The overall change of the object model is then expressed by a localized variation function, in which c_t is the variation coefficient, d(p_i, q_j) is the new variation distance of the object surface, n denotes the total number of interior points involved in the surface variation, and N denotes the total number of varied surfaces.
Usually, when computing the overall change of an object model, all variation functions have to be evaluated, which is extremely expensive. Using the localized variation function of this method therefore greatly reduces the amount of computation required.
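To make the signed-distance convention and the idea of summing local surface variations concrete, here is a minimal Python sketch; it is not the patent's implementation. It assumes the object surfaces are represented as sampled point clouds, and the helper names and the per-point weights standing in for the coefficient c_t are illustrative.

```python
import numpy as np

def signed_distance(point, surface_points, inside):
    """Distance from a point to a sampled object surface, negative for
    points inside the surface and positive outside (the sign convention
    of step (1.3))."""
    d = float(np.min(np.linalg.norm(surface_points - point, axis=1)))
    return -d if inside else d

def local_variation(interior_points, surface_after, weights):
    """Weighted sum of the new variation distances d(p_i, q_j) over the
    interior points of a changed surface patch -- a rough reading of the
    localized variation function, with `weights` in the role of c_t."""
    total = 0.0
    for p, w in zip(interior_points, weights):
        d_new = float(np.min(np.linalg.norm(surface_after - p, axis=1)))
        total += w * d_new
    return total / max(len(interior_points), 1)
```

Only the surface patches that actually change need to be passed to local_variation, which is where the claimed saving over evaluating every variation function would come from.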
(2) According to the character type selected by each student, the system generates a low-polygon virtual character model in the virtual scene generated in step (1), and then renders textures onto the generated low-polygon virtual character model, thereby creating in the virtual scene a complete virtual character model of the character type selected by that student. Each virtual character model is a highly important part of the multi-person collaborative training environment; in this method the virtual character model is generated from information collected about the real person, which improves realism.
The traditional way of generating a virtual character model is to first assemble raw information such as the character's skeleton, face and figure, form an initial model, and then vary the texture, shape and shading according to the user's viewing position and lighting. If the processed result does not meet the threshold, duplicate data must be read from memory to form a new three-dimensional model; this approach tends to read data from memory repeatedly, increases computation time, and places high demands on computing performance. To improve computation speed, the present method uses a vertex displacement algorithm to simplify the calculation. After the raw information such as the character's skeleton, face and figure is completed, a low-polygon virtual character model is generated. When rendering textures onto the generated low-polygon virtual character model, the viewing position of the model is compared with a set threshold to decide whether interpolation is needed: if the viewing position of the generated low-polygon virtual character model is greater than the threshold, no interpolation is needed; if it is equal to or less than the threshold, interpolation is needed. Interpolation reduces the number of memory reads and thus the computation time.
The interpolation is a derivation performed on the height values of the midpoints of the four grid edges of the generated low-polygon virtual character model, where y denotes the height value, x denotes the abscissa of the midpoint, and δ is a parameter with value 0.6. A variable d is defined from these quantities, the actual modelling parameter A is calculated from
A² = log M(|y|) − log d − A,
and the true compensating parameter B is calculated by a corresponding formula. After the actual parameter and the true compensating parameter of the modelling have been obtained, the generated low-polygon virtual character model is remodelled with them to obtain the complete virtual character model. Using the midpoints of the four grid edges of the low-polygon virtual character model to obtain the actual modelling parameter and the true compensating parameter improves the validity of the model. The actual modelling parameter A reflects the complexity and fineness of the modelling: the smaller the value, the higher the complexity and the finer the model. The true compensating parameter B reflects the undulation of the model surface: the larger the value, the more the surface undulates and the closer it is to the real object.
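The following Python sketch illustrates the level-of-detail decision of step (2) and the stated relation A² = log M(|y|) − log d − A. Because the derivation formula, the definition of the variable d, the function M and the formula for the compensating parameter B are not reproduced in this text, M(|y|) and d are treated here as values supplied by the modelling pipeline, and the relation is simply solved as a quadratic in A; this is an assumption, not the patent's procedure.

```python
import math

def needs_interpolation(view_position, threshold):
    """Interpolate (refine) the low-polygon character model only when the
    viewing position is equal to or less than the configured threshold."""
    return view_position <= threshold

def actual_parameter_A(M_abs_y, d):
    """Solve A from A^2 = log M(|y|) - log d - A, i.e. A^2 + A - m = 0
    with m = log M(|y|) - log d, taking the larger root.  M(|y|) and d
    are assumed to be positive values produced elsewhere in the pipeline."""
    m = math.log(M_abs_y) - math.log(d)
    disc = 1.0 + 4.0 * m
    if disc < 0.0:
        raise ValueError("no real solution for these inputs")
    return (-1.0 + math.sqrt(disc)) / 2.0
```

A smaller A then corresponds to a more complex, finer model, as described above.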
(3) action signal that the full virtual person model that student creates it issues, the movement letter that system receives The movement for the full virtual person model that corresponding student is created is converted into after number.The full virtual personage that student creates it The action signal that model issues includes walk mobile control signal and the mobile control signal of arm.
Wherein, after system receives mobile control signal on foot, in simulation full virtual person model normal walking movement When, as soon as it is a circulation that it is mobile, which to be moved to left foot next time, for the left foot that complete virtual portrait model is arranged, constantly circulation is completed Road movement.The space displacement of full virtual person model uses triangulation location mode, determines position by calculating the number of circulation The distance of shifting realizes that walking for full virtual person model is moved.
After system receives the mobile control signal of arm, when carrying out training operation, the arm of full virtual person model It can be moved back and forth according to the content of real training.The displacement of full virtual person model uses triangulation location mode, by calculating arm The angle lifted determines the shift length of height.
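A rough Python sketch of how the two control signals could be resolved into displacements is given below; the stride length, arm length and function names are assumptions, and resolving the displacement trigonometrically along the heading is only one way to read the "triangulation positioning" described above.

```python
import math

def walk_displacement(x, y, heading_rad, cycles, stride):
    """One walking cycle = one left-foot-to-left-foot step; the total
    displacement is taken as cycles * stride, resolved along the heading
    (stride is an assumed per-cycle step length, not given in the text)."""
    dist = cycles * stride
    return x + dist * math.sin(heading_rad), y + dist * math.cos(heading_rad)

def arm_raise_height(arm_length, lift_angle_rad):
    """Height displacement of the hand for a given lift angle -- one way
    to read 'the angle lifted determines the height displacement'."""
    return arm_length * math.sin(lift_angle_rad)
```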
(4) The system enables the complete virtual character models created by the different students in the same virtual scene to collaborate in completing the practical training course task. When the number of complete virtual character models in the same virtual scene increases, the number of connections of the system rises sharply, the load on the server grows significantly, the server is prone to crash, and different users may see inconsistent states. There are currently many solutions to this problem, such as local lag, rollback and computed positioning, but these techniques suffer from heavy computation and slow computation speed; the present method therefore uses a displacement prediction technique.
When the complete virtual character models created by the different students move in the same virtual scene, the system predicts the coordinate position of each complete virtual character model at the next moment from its spatial coordinate position, moving direction, moving speed and travel time, and updates the position information of the complete virtual character model before any transmitted position information is received. This ensures that, before an update is received, other users do not need to transmit position information repeatedly, which reduces the load on the server.
When predicting the coordinate position of each complete virtual character model at the next moment, the following prediction equations are used:
X_n = X_{n-1} + dR_{n-1} sin θ_{n-1},
Y_n = Y_{n-1} + dR_{n-1} cos θ_{n-1},
θ_n = θ_{n-1} + dθ_n,
where (X_n, Y_n) denotes the position coordinates of the complete virtual character model at the n-th moment, dR_{n-1} denotes the change in distance of the complete virtual character model from the (n-1)-th moment to the n-th moment, θ_{n-1} denotes its heading angle at the (n-1)-th moment, dθ_n denotes the change in angle from the (n-1)-th moment to the n-th moment, and d is the distance displacement coefficient, with value 0.5.
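The prediction equations are given explicitly, so they can be transcribed almost directly. The sketch below treats the distance change as a raw measurement and applies the coefficient d = 0.5 on top of it, which is one plausible reading of the text.

```python
import math

D_COEFF = 0.5  # distance displacement coefficient d (value 0.5 in the text)

def predict_next_position(x_prev, y_prev, theta_prev, dist_change, d_theta):
    """Dead-reckoning update following the stated prediction equations
        X_n     = X_{n-1} + d * R_{n-1} * sin(theta_{n-1})
        Y_n     = Y_{n-1} + d * R_{n-1} * cos(theta_{n-1})
        theta_n = theta_{n-1} + d_theta_n
    where R_{n-1} (written dR_{n-1} in the text) is the distance change
    from moment n-1 to moment n and d = 0.5 is the displacement coefficient.
    Used to extrapolate an avatar's position between server updates so
    that position packets do not have to be sent every frame."""
    x_next = x_prev + D_COEFF * dist_change * math.sin(theta_prev)
    y_next = y_prev + D_COEFF * dist_change * math.cos(theta_prev)
    theta_next = theta_prev + d_theta
    return x_next, y_next, theta_next
```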
When the system enables the complete virtual character models created by the different students in the same virtual scene to collaborate in completing the practical training course task, the ground, the equipment and other objects, and each complete virtual character model all need to be set with the floor attribute. The floor attribute prevents a complete virtual character model from passing through walls or equipment, and prevents complete virtual character models from passing through one another.
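How the floor attribute is applied is engine-specific; as an illustrative stand-in, the sketch below simply tags the ground, the equipment objects and every avatar as solid colliders so that the collision layer blocks mutual pass-through. The attribute names are assumptions.

```python
def apply_floor_attribute(scene_objects, avatars):
    """Mark the ground, equipment objects and every complete virtual
    character model as solid (a stand-in for the floor attribute)."""
    for obj in list(scene_objects) + list(avatars):
        obj["attribute"] = "floor"
        obj["collidable"] = True  # blocks passing through walls, equipment and other avatars
    return scene_objects, avatars
```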
With the method of multi-person collaborative practical training teaching based on virtual and real environments according to the invention, a practical training environment can be virtualized to realize multi-person collaborative practical training teaching; compared with conventional processing techniques, the generation of virtual scenes and complete virtual character models is simpler, the computation is more efficient and more accurate, the development cost is lower, and maintenance is simpler. The system can generate virtual scenes for students according to the content of the training course, so the teaching content is richer and the interactivity is stronger.
The above are merely preferred embodiments of the present invention; any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (7)

1. A method of multi-person collaborative practical training teaching based on virtual and real environments, characterized by comprising the following control steps:
(1) each student selects the required practical training course and issues a signal to create a virtual scene, and the system generates a corresponding virtual scene for each student according to the received virtual scene signal;
(2) according to the character type selected by each student, the system generates a low-polygon virtual character model in the virtual scene generated in step (1), and then renders textures onto the generated low-polygon virtual character model, thereby creating in the virtual scene a complete virtual character model of the character type selected by that student;
(3) the system converts the action signals that each student issues to the complete virtual character model he or she has created into the corresponding movements of that model;
(4) the system enables the complete virtual character models created by the different students in the same virtual scene to collaborate in completing the practical training course task;
in step (1), generating a virtual scene includes the following steps:
(1.1) first generating a simple virtual scene model;
(1.2) determining the spatial position of each object in the virtual scene;
(1.3) generating object models from the determined spatial positions of the objects in the virtual scene, thereby obtaining the complete virtual scene;
in step (1.3), when generating an object model from the determined spatial position of an object, the variation of the surface of each object model in the virtual scene is first represented by a distance change function: for each object surface of the object model, the distance from a point inside the object surface to the surface is set to be negative, and the distance from a point outside the object surface to the surface is set to be positive, the distance change function being
d(p, s) = inf d(p, q),
where p denotes a point inside the object surface, s denotes the object surface before the variation, θ denotes the set of points inside the object surface, d(p, s) denotes the distance from the point to the object surface, and q denotes the object surface after the variation;
the overall change of the object model is then expressed by a localized variation function, in which c_t is the variation coefficient, d(p_i, q_j) is the new variation distance of the object surface, n denotes the total number of interior points involved in the surface variation, and N denotes the total number of varied surfaces.
2. The method of multi-person collaborative practical training teaching based on virtual and real environments according to claim 1, characterized in that, in step (2), when rendering textures onto the generated low-polygon virtual character model, the viewing position of the generated low-polygon virtual character model is compared with a set threshold to decide whether interpolation is needed: if the viewing position of the generated low-polygon virtual character model is greater than the threshold, no interpolation is needed; if the viewing position of the generated low-polygon virtual character model is equal to or less than the threshold, interpolation is needed.
3. The method of multi-person collaborative practical training teaching based on virtual and real environments according to claim 2, characterized in that the interpolation is a derivation performed on the height values of the midpoints of the four grid edges of the generated low-polygon virtual character model, where y denotes the height value, x denotes the abscissa of the midpoint, and δ is a parameter with value 0.6; a variable d is defined from these quantities, and the actual modelling parameter A is calculated from
A² = log M(|y|) − log d − A,
while the true compensating parameter B is calculated by a corresponding formula; after the actual parameter and the true compensating parameter of the modelling have been obtained, the generated low-polygon virtual character model is remodelled with them to obtain the complete virtual character model.
4. The method of multi-person collaborative practical training teaching based on virtual and real environments according to claim 1, characterized in that, in step (3), the action signals that a student issues to the complete virtual character model he or she has created include a walking movement control signal and an arm movement control signal.
5. The method of multi-person collaborative practical training teaching based on virtual and real environments according to claim 1, characterized in that, in step (4), when the complete virtual character models created by the different students move in the same virtual scene, the system predicts the coordinate position of each model at the next moment from its moving direction, moving speed and travel time at its current position, and updates the position information of the complete virtual character model before any transmitted position information is received.
6. The method of multi-person collaborative practical training teaching based on virtual and real environments according to claim 5, characterized in that, when predicting the coordinate position of each complete virtual character model at the next moment, the following prediction equations are used:
X_n = X_{n-1} + dR_{n-1} sin θ_{n-1},
Y_n = Y_{n-1} + dR_{n-1} cos θ_{n-1},
θ_n = θ_{n-1} + dθ_n,
where (X_n, Y_n) denotes the position coordinates of the complete virtual character model at the n-th moment, dR_{n-1} denotes the change in distance of the complete virtual character model from the (n-1)-th moment to the n-th moment, θ_{n-1} denotes its heading angle at the (n-1)-th moment, dθ_n denotes the change in angle from the (n-1)-th moment to the n-th moment, and d is the distance displacement coefficient, with value 0.5.
7. The method of multi-person collaborative practical training teaching based on virtual and real environments according to claim 1, characterized in that, in step (4), the ground, the equipment objects and each complete virtual character model are all set with the floor attribute.
CN201710706698.2A 2017-08-17 2017-08-17 A method of multi-person collaborative practical training teaching based on virtual and real environments Active CN107507484B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710706698.2A CN107507484B (en) 2017-08-17 2017-08-17 A method of multi-person collaborative practical training teaching based on virtual and real environments

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710706698.2A CN107507484B (en) 2017-08-17 2017-08-17 A method of multi-person collaborative practical training teaching based on virtual and real environments

Publications (2)

Publication Number Publication Date
CN107507484A CN107507484A (en) 2017-12-22
CN107507484B true CN107507484B (en) 2019-06-25

Family

ID=60691710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710706698.2A Active CN107507484B (en) 2017-08-17 2017-08-17 A method of multi-person collaborative practical training teaching based on virtual and real environments

Country Status (1)

Country Link
CN (1) CN107507484B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335515B (en) * 2019-06-06 2022-09-20 艾普工华科技(武汉)有限公司 Immersive collaborative interactive virtual simulation teaching system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2871606B1 (en) * 2004-06-09 2006-10-06 Giat Ind Sa TRAINING SYSTEM FOR THE OPERATION, USE OR MAINTENANCE OF A WORK ENVIRONMENT
CN103246765A (en) * 2013-04-24 2013-08-14 胡松伟 Developing method for equipping virtual training platform
CN104503442B (en) * 2014-12-25 2017-06-27 中国人民解放军军械工程学院 Armament equipment fault diagnosis training method
CN104575150B (en) * 2015-01-15 2016-11-09 广东电网有限责任公司教育培训评价中心 The method and apparatus of many people online cooperation and system for electric analog training
CN105894570A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality scene modeling method and device
CN106774949A (en) * 2017-03-09 2017-05-31 北京神州四达科技有限公司 Collaborative simulation exchange method, device and system
CN107025819B (en) * 2017-06-20 2019-03-26 大连海事大学 A kind of boat deck crane virtual training system and its working method

Also Published As

Publication number Publication date
CN107507484A (en) 2017-12-22

Similar Documents

Publication Publication Date Title
CN104318617B (en) A kind of three-dimensional geography scene simulation method of Virtual emergency drilling
CN102708227B (en) SPH (smoothed particle hydrodynamics) algorithm-based simulation method and simulation system of process of breaking dam by flood
Liu Three-dimensional visualized urban landscape planning and design based on virtual reality technology
CN102663827A (en) Three-dimensional dynamic whole-process simulation method for storm surge and flood routing in complex flooding areas
CN103440037B (en) Real-time interaction virtual human body motion control method based on limited input information
CN107590853A (en) A kind of high validity methods of exhibiting of architecture ensemble earthquake
Cordonnier et al. Sculpting mountains: Interactive terrain modeling based on subsurface geology
CN105336003A (en) Three-dimensional terrain model real-time smooth drawing method with combination of GPU technology
CN104123747B (en) Multimode touch-control three-dimensional modeling method and system
CN105760581B (en) A kind of valley Renovation and planning emulation mode and system based on OSG
CN107665269B (en) Rapid crowd evacuation simulation method and device based on geographic information
CN109001736A (en) Radar echo extrapolation method based on deep space-time prediction neural network
CN103310478B (en) A kind of method that diversified virtual crowd generates
CN107480826A (en) The application of powerline ice-covering early warning three dimension system based on GIS
CN108717729A (en) A kind of online method for visualizing of landform multi-scale TIN of the Virtual earth
CN109989751A (en) A kind of cross-platform long-range real time kinematics tracking system of three machine of fully mechanized mining and method
CN101527051A (en) Method for rendering sky based on atmospheric scattering theory and device thereof
CN103793552A (en) Real-time dynamic generating method for local particle spring model with deformed soft tissues
CN115292789A (en) Digital building quantity generation method based on form type in urban design
CN107507484B (en) A method of multi-person collaborative practical training teaching based on virtual and real environments
CN104517299B (en) Method for restoring and resimulating physical video fluid driving model
Qi Computer aided design simulation of 3D garden landscape based on virtual reality
CN106909730A (en) Three-dimensional model building emulation mode and system based on homotopy mapping algorithm
CN106339095B (en) A kind of climbing body-building device based on digital earth
Berwaldt et al. Procedural generation of favela layouts on arbitrary terrains

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant