CN105487653B - Method and system for realizing a virtual reality scene - Google Patents
Method and system for realizing a virtual reality scene
- Publication number: CN105487653B
- Application number: CN201510778146.3A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality (G: Physics; G06: Computing, calculating or counting; G06F: Electric digital data processing; G06F3/00: Input/output arrangements for data transfer; G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer)
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects (G06T: Image data processing or generation, in general)
- G06T2200/08 — Indexing scheme for image data processing or generation involving all processing steps from image acquisition to 3D model generation (under G06T2200/00)
Abstract
The present invention discloses a method and system for realizing a virtual reality scene. The method includes: acquiring high-definition images of all parts of the human anatomy along the three display directions commonly used in medicine, namely horizontal, sagittal, and coronal; each display mode of each part comprises 72 images; the background of every image is erased and every image is given a name and number; for parts that need to generate interaction, regions are marked off and located, each marked region is cut out separately by image matting, and its surroundings are made transparent; based on Unity software, groups of images are loaded and read through a database, and the UI of the software is packaged into a client program, placed together with the resource packs, and identified and read through the database.
Description
Technical field
The present invention relates to a method and system for realizing a virtual reality scene.
Background technology
Virtual reality is a synthesis of multiple technologies, including real-time three-dimensional computer graphics, wide-angle stereoscopic display, tracking of the observer's head, eyes, and hands, tactile and force feedback, stereo sound, network transmission, and voice input and output.
By comparison, generating graphics and images with a computer model is not especially difficult. Given a sufficiently accurate model and enough time, we can generate exact images of various objects under different illumination conditions, but the key here is real time. In a flight simulation system, for example, image refresh is critically important, and the demands on image quality are also high; add an extremely complex virtual environment, and the problem becomes extremely difficult.
When a person looks around, the images obtained by the two eyes differ slightly because the eyes are in different positions. These images are fused in the brain to form a whole scene of the surrounding world, and this scene includes distance information. Of course, distance information can also be obtained by other means, such as the focal distance of the eyes or the comparison of object sizes.
In VR systems, binocular stereo vision plays a great role. The different images seen by the user's two eyes are generated separately and displayed on different screens. Some systems instead use a single monitor: the user wears special glasses through which one eye sees only the odd-numbered frames and the other eye sees only the even-numbered frames, and the difference between odd and even frames, that is, the parallax, produces the sense of depth.
Tracking of the user (head, eye): in an artificial environment, every object has a position and an orientation relative to the system coordinate system, and so does the user. The scene the user sees is determined by the user's position and the direction of the head (eyes).
Virtual reality headgear that tracks head movement: in traditional computer graphics, the change of view is achieved with the mouse or keyboard, so the user's visual system and motion-perception system are separated. Using head tracking to change the viewing angle of the image connects the two systems and feels far more realistic. Another advantage is that the user can not only perceive the environment through binocular stereo vision but also observe it through head movement.
In the interaction between user and computer, keyboard and mouse are currently the most common tools, but for three-dimensional space both are unsuitable. Because three-dimensional space has six degrees of freedom, it is hard to find an intuitive way to map the planar motion of a mouse onto arbitrary motion in three dimensions. Some devices now provide six degrees of freedom, such as the 3Space digitizer and the SpaceBall. Devices with even better performance are the data glove and the data suit.
Invention content
In view of the above problems, the present invention provides a method and system for realizing a virtual reality scene.
In order to achieve the above purpose, the method of the present invention for realizing a virtual reality scene includes:
acquiring images of all parts of the human anatomy along the three display directions commonly used in medicine, namely horizontal, sagittal, and coronal, wherein all parts of the human anatomy refers to: 206 bones and 639 muscles;
each display mode of each part comprises 72 images; the background of every image is erased and every image is given a name and number; for parts that need to generate interaction, regions are marked off and located, each marked region is cut out separately by image matting, and its surroundings are made transparent;
based on Unity software, groups of images are loaded and read through a database, and the UI of the software is packaged into a client program, placed together with the resource packs, and identified and read through the database;
the resolution of the acquired images is not less than 2048*2048;
the human anatomical component is placed on a rotating device, and a high-definition image of one face is acquired every 5 degrees;
the 2048-pixel images are arranged in groups of 72; each grouped object has corresponding text annotations, voice annotations, and legend annotations; during the 72-step rotation an interaction mark appears every 6 images, composed by block stitching, generating the corresponding interactive display.
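The acquisition and marking parameters above (a shot every 5 degrees, 72 images per revolution, an interaction mark every 6 images) can be sketched as follows; the function names are illustrative, not from the patent:

```python
# Sketch (assumed helper names, not the patent's code): 360 degrees / 5
# degrees per shot = 72 frames per display mode; a mark every 6th frame.
FRAMES_PER_REVOLUTION = 360 // 5          # 72 images per group
INTERACTION_INTERVAL = 6                  # an interaction mark every 6 frames

def frame_for_angle(angle_degrees: float) -> int:
    """Map a rotation angle to the nearest acquired frame index (0..71)."""
    return round(angle_degrees / 5) % FRAMES_PER_REVOLUTION

def has_interaction_mark(frame_index: int) -> bool:
    """True for the frames that carry an interactive mark."""
    return frame_index % INTERACTION_INTERVAL == 0
```

For example, a rotation of 45 degrees lands on frame 9, and frames 0, 6, 12, ... carry the interaction marks.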
In order to achieve the above purpose, the system of the present invention for realizing a virtual reality scene includes: a high-definition image acquisition unit, an image layout processing unit, and an engine import and interaction processing unit, wherein
the high-definition image acquisition unit acquires high-definition images of all parts of the human anatomy along the three display modes commonly used in medicine, namely horizontal, sagittal, and coronal; the resolution of the acquired images is not less than 2048*2048;
the image layout processing unit erases the background of every image and gives every image a name and number; for parts that need to generate interaction it marks off and locates regions, cuts out each marked region separately by image matting, and makes its surroundings transparent;
the human anatomical component is placed on a rotating device, and a high-definition image of one face is acquired every 5 degrees;
the 2048-pixel images are arranged in groups of 72; each grouped object has corresponding text annotations, voice annotations, and legend annotations; during the 72-step rotation an interaction mark appears every 6 images, composed by block stitching, generating the corresponding interactive display;
the engine import and interaction processing unit, based on Unity software, loads and reads groups of images through a database, and packages the UI of the software into a client program, placed together with the resource packs and identified and read through the database.
Advantageous effects
Compared with the prior art, the method and system of the present invention for realizing a virtual reality scene have the following advantageous effects:
the present invention overcomes the shortcoming that other three-dimensional modeling software is not sufficiently true and accurate; in teaching demonstrations, medical research, and medical applications it is more intuitive and realistic, it allows images to be observed more effectively along the three medical coordinate planes (sagittal, coronal, and horizontal), and in a sense it fills a gap in the market.
Specific implementation mode
The present invention will be further described below.
Virtual reality, also known as VR, uses three-dimensional graphics generation, multi-sensory interaction, and high-resolution display technologies to generate a lifelike three-dimensional virtual environment. A user who puts on sensing equipment such as a special helmet and data gloves, or who uses input devices such as keyboard and mouse, can enter the virtual space, become a member of the virtual environment, interact in real time, and perceive and operate the various objects in the virtual world, thereby obtaining an immersive experience and understanding.
Virtual reality technology has the following five main features:
1) Immersion: the created virtual environment can give the student a feeling of being personally on the scene, convincing them that the virtual environment really exists and that they can act in it from beginning to end, just as in the real objective world.
2) Interactivity: in the virtual environment the student enters into interactive relationships with tasks and objects just as in the real environment; the student is the interacting subject, the virtual objects are the objects of interaction, and the interaction between subject and object is comprehensive.
3) Imagination: virtual reality should inspire people's creativity; it should not only give the student immersed in the environment new instruction and improved perceptual and rational knowledge, but also enable the student to produce new ideas.
4) Action: the student can operate the virtual system through the actual actions of the objective world, or in the ways humans actually act, so that the student feels they are facing a real environment.
5) Autonomy: objects in the virtual world can move on their own according to their respective models and rules.
Embodiment 1
The method of this embodiment for realizing a virtual reality scene includes the following steps.
All parts of the human anatomy are photographed in high definition along the three display directions commonly used in medicine, namely horizontal, sagittal, and coronal. The background is a plain flat surface to make later matting easier, and shooting takes place in a dust-free environment with all-round lighting. The human anatomical component is placed on a special rotating device, and a high-definition image of one face is acquired every 5 degrees, ensuring that the rotation is smooth and steady and that every image is correctly focused and sharp.
The photographs of each object are divided into three groups, horizontal, sagittal, and coronal, with 72 images per group. The background of every image is first erased, and every image is given a name and number. For parts that need to generate interaction, a professional marks off and locates regions; each marked region is cut out separately by image matting and its surroundings are made transparent, so that this part can be superimposed on the original image and a corresponding interaction is generated when the mouse collides with it.
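The overlay interaction just described can be sketched as an alpha hit test; this is a hypothetical helper, written under the assumption that each cut-out region is stored as an RGBA layer whose surroundings are fully transparent, as the text describes:

```python
# Sketch of the matting-based hit test (hypothetical helper, not the
# patent's code): a cut-out region is an RGBA layer with transparent
# surroundings, so a mouse hit counts only where alpha is non-zero.
def hit_cutout(layer, x: int, y: int) -> bool:
    """layer: 2D grid of (r, g, b, a) tuples; True on an opaque pixel."""
    if not (0 <= y < len(layer) and 0 <= x < len(layer[0])):
        return False
    _, _, _, alpha = layer[y][x]
    return alpha > 0

# A 2x2 layer: only the top-left pixel belongs to the cut-out part.
layer = [[(200, 180, 160, 255), (0, 0, 0, 0)],
         [(0, 0, 0, 0),         (0, 0, 0, 0)]]
```

A click on a transparent pixel falls through to the underlying full image, while a click on an opaque pixel triggers the region's interaction.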
Numbers are appended to suffixes of the component names, for example skull is tg, sagittal is sz, horizontal is sp, and coronal is gz, so skull sagittal at 45 degrees gives the region tgsz-045; the names of the detail-label regions on a bone or muscle are essentially specified by the user.
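A small sketch of this naming convention, using only the abbreviations given in the text (tg for skull, sz for sagittal, sp for horizontal, gz for coronal) and assuming the trailing number is the zero-padded angle:

```python
# Abbreviations from the text; the zero-padding to three digits is an
# assumption inferred from the example name "tgsz-045".
PART_CODES = {"skull": "tg"}
MODE_CODES = {"sagittal": "sz", "horizontal": "sp", "coronal": "gz"}

def region_name(part: str, mode: str, angle_degrees: int) -> str:
    """Build a region name such as 'tgsz-045' for skull, sagittal, 45 degrees."""
    return f"{PART_CODES[part]}{MODE_CODES[mode]}-{angle_degrees:03d}"
```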
Based on Unity software, groups of images are loaded and read through a database; the 2048-pixel images are arranged in groups of 72. The structure tree is developed with NGUI in Unity. Each grouped object has corresponding text annotations and voice annotations, and some groups can also be annotated with legends. During the 72-step rotation an interaction mark appears every 6 images. The picture is composed by block stitching, and whenever the mouse collides with a block region a corresponding interactive display is generated, which can be operated and demonstrated further. Finally, the UI of the software is packaged into a client program, placed together with the resource packs, and identified and read through the database.
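The database-driven grouped loading could be sketched as below; the patent does not name the database or its schema, so the sqlite3 table and column names here are assumptions:

```python
# Minimal stand-in for the grouped image manifest: one row per frame,
# keyed by group name, read back in frame order (schema is an assumption).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (group_name TEXT, frame INTEGER, path TEXT)")
rows = [("tgsz", i, f"resources/tgsz-{i * 5:03d}.png") for i in range(72)]
conn.executemany("INSERT INTO images VALUES (?, ?, ?)", rows)

def load_group(group_name: str) -> list[str]:
    """Read one 72-image group's file paths in frame order."""
    cur = conn.execute(
        "SELECT path FROM images WHERE group_name = ? ORDER BY frame",
        (group_name,))
    return [path for (path,) in cur]
```

The client program would resolve each group through such a lookup before handing the frames to the engine for display.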
Embodiment 2
The system of this embodiment for realizing a virtual reality scene includes: a high-definition image acquisition unit, an image layout processing unit, and an engine import and interaction processing unit, wherein
the high-definition image acquisition unit acquires high-definition images of all parts of the human anatomy along the three display modes commonly used in medicine, namely horizontal, sagittal, and coronal; the resolution of the acquired images is not less than 2048*2048;
the image layout processing unit erases the background of every image and gives every image a name and number; for parts that need to generate interaction it marks off and locates regions, cuts out each marked region separately by image matting, and makes its surroundings transparent;
the human anatomical component is placed on a rotating device, and a high-definition image of one face is acquired every 5 degrees;
the 2048-pixel images are arranged in groups of 72; each grouped object has corresponding text annotations, voice annotations, and legend annotations; during the 72-step rotation an interaction mark appears every 6 images, composed by block stitching, generating the corresponding interactive display;
the engine import and interaction processing unit, based on Unity software, loads and reads groups of images through a database, and packages the UI of the software into a client program, placed together with the resource packs and identified and read through the database.
It should be understood that the embodiments described above further illustrate the purpose, technical solution, and advantageous effects of the present invention in detail; they are only embodiments of the present invention and are not intended to limit it. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection, and the scope of protection of the present invention shall be subject to the scope defined by the claims.
Claims (2)
1. A method for realizing a virtual reality scene, characterized in that it includes:
acquiring images of all parts of the human anatomy along the three display directions commonly used in medicine, namely horizontal, sagittal, and coronal, wherein all parts of the human anatomy refers to: 206 bones and 639 muscles;
each display mode of each part comprises 72 images; the background of every image is erased and every image is given a name and number; for parts that need to generate interaction, regions are marked off and located, each marked region is cut out separately by image matting, and its surroundings are made transparent;
based on Unity software, groups of images are loaded and read through a database, and the UI of the software is packaged into a client program, placed together with the resource packs, and identified and read through the database;
the resolution of the acquired images is not less than 2048*2048;
the human anatomical component is placed on a rotating device, and a high-definition image of one face is acquired every 5 degrees;
the 2048-pixel images are arranged in groups of 72; each grouped object has corresponding text annotations, voice annotations, and legend annotations; during the 72-step rotation an interaction mark appears every 6 images, composed by block stitching, generating the corresponding interactive display.
2. A system for realizing a virtual reality scene, characterized in that it includes: a high-definition image acquisition unit, an image layout processing unit, and an engine import and interaction processing unit, wherein
the high-definition image acquisition unit acquires high-definition images of all parts of the human anatomy along the three display modes commonly used in medicine, namely horizontal, sagittal, and coronal; the resolution of the acquired images is not less than 2048*2048;
the image layout processing unit erases the background of every image and gives every image a name and number; for parts that need to generate interaction it marks off and locates regions, cuts out each marked region separately by image matting, and makes its surroundings transparent;
the human anatomical component is placed on a rotating device, and a high-definition image of one face is acquired every 5 degrees;
the 2048-pixel images are arranged in groups of 72; each grouped object has corresponding text annotations, voice annotations, and legend annotations; during the 72-step rotation an interaction mark appears every 6 images, composed by block stitching, generating the corresponding interactive display;
the engine import and interaction processing unit, based on Unity software, loads and reads groups of images through a database, and packages the UI of the software into a client program, placed together with the resource packs and identified and read through the database.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510778146.3A CN105487653B (en) | 2015-11-16 | 2015-11-16 | Realize the method and system of virtual reality scenario |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105487653A CN105487653A (en) | 2016-04-13 |
CN105487653B true CN105487653B (en) | 2018-09-21 |
Family
ID=55674677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510778146.3A Active CN105487653B (en) | 2015-11-16 | 2015-11-16 | Realize the method and system of virtual reality scenario |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105487653B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106445277B (en) * | 2016-08-31 | 2019-05-14 | 和思易科技(武汉)有限责任公司 | Text rendering method in virtual reality |
CN108074202A (en) * | 2016-11-08 | 2018-05-25 | 武汉亿维达信息科技有限公司 | Hotel management specialty instruction management platform system based on virtual reality technology |
CN106527724A (en) * | 2016-11-14 | 2017-03-22 | 墨宝股份有限公司 | Method and system for realizing virtual reality scene |
CN109597478A (en) * | 2017-09-30 | 2019-04-09 | 复旦大学 | A kind of human anatomic structure displaying and exchange method based on actual situation combination |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104269125A (en) * | 2014-10-20 | 2015-01-07 | 西安冉科信息技术有限公司 | Multi-angle shooting and image processing based three-dimensional display method |
CN105023295A (en) * | 2015-08-05 | 2015-11-04 | 成都嘉逸科技有限公司 | Human anatomy unit 3D model establishment method and teaching system |
Also Published As
Publication number | Publication date |
---|---|
CN105487653A (en) | 2016-04-13 |
Legal Events
| Date | Code | Title |
|---|---|---|
| | C06 | Publication |
| | PB01 | Publication |
| | C10 | Entry into substantive examination |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |
| | GR01 | Patent grant |