CN206819290U - Virtual reality multi-person interaction system - Google Patents

Virtual reality multi-person interaction system Download PDF

Info

Publication number
CN206819290U
CN206819290U CN201720299529.7U
Authority
CN
China
Prior art keywords
output end
module
motion capture
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201720299529.7U
Other languages
Chinese (zh)
Inventor
徐志
邱春麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Multispace Media & Exhibition Co Ltd
Original Assignee
Suzhou Multispace Media & Exhibition Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Multispace Media & Exhibition Co Ltd
Priority to CN201720299529.7U
Application granted
Publication of CN206819290U
Legal status: Active (current)
Anticipated expiration legal-status

Abstract

The utility model provides a virtual reality multi-person interaction system, comprising: a head-mounted display device (1), an image rendering computer (2), a hybrid motion-capture spatial positioning system (3), and a central server (4). The head-mounted display device (1) is connected to a corresponding image rendering computer (2). The hybrid motion-capture spatial positioning system (3) comprises: multiple optical positioning modules (31) and inertial motion-capture modules (32), and a hybrid motion-capture server (33). The optical positioning module (31) comprises a first output end (311); the inertial motion-capture module (32) comprises a second output end (321); the hybrid motion-capture server (33) comprises a first input end (331), a second input end (332), and a third output end (333). The third output end (333) is connected to the image rendering computer (2), and the display image output by the head-mounted display device (1) is provided by the image rendering computer (2). The utility model is easy to operate, simple in construction, and of high commercial value.

Description

Virtual reality multi-person interaction system
Technical field
The utility model relates to the field of virtual reality, and more particularly to a virtual reality multi-person interaction system.
Background technology
Virtual reality technology is a computer simulation technique that can create an experiential virtual world. It uses a computer to generate a simulated environment: an interactive system of three-dimensional dynamic scenes and entity behaviors built on multi-source information fusion, which immerses the user in that environment.
Virtual reality technology is continuously developing and innovating. Enabling multiple people to interact within a virtual environment has become a goal under active exploration: for example, how to hold a real-time concert for many participants at home, or how to stage a virtual multi-player basketball match without a basketball court. Such problems urgently await solutions.
At present, however, no virtual reality multi-person interaction system exists.
Utility model content
In view of the technical deficiencies of the prior art, the purpose of the utility model is to provide a virtual reality multi-person interaction system, comprising: a head-mounted display device 1, an image rendering computer 2, a hybrid motion-capture spatial positioning system 3, and a central server 4. The head-mounted display device 1 is connected to a corresponding image rendering computer 2. The hybrid motion-capture spatial positioning system 3 comprises: multiple optical positioning modules 31 and inertial motion-capture modules 32 arranged at multiple points on the subject's body, and a hybrid motion-capture server 33. The optical positioning module 31 comprises a first output end 311; the inertial motion-capture module 32 comprises a second output end 321; the hybrid motion-capture server 33 comprises a first input end 331, a second input end 332, and a third output end 333. The first output end 311 is connected to the first input end 331, the second output end 321 is connected to the second input end 332, and the third output end 333 is connected to the image rendering computer 2, which supplies the display image output by the head-mounted display device 1.
Preferably, the head-mounted display device 1 comprises a display lens 11 and a display-image input end 12, the display-image input end 12 being connected to the image rendering computer 2.
Preferably, the image rendering computer 2 comprises a hybrid-data input end 21, and an image generation module 22, image rendering module 23, and display-image output end 24 connected in sequence. The hybrid-data input end 21 is connected to the third output end 333, and the display-image output end 24 is connected to the head-mounted display device 1.
Preferably, the first output end 311 is an optical positioning data output end, the second output end 321 is an inertial motion data output end, and the third output end 333 is a hybrid data output end.
Preferably, the optical positioning module 31 comprises: optical positioning points 312, an infrared camera 313, and a positioning processor 314. The optical positioning points 312 are arranged at multiple first joint points of the subject; the infrared camera 313 captures infrared images of the optical positioning points 312 and transmits them to the positioning processor 314; the first output end 311 is the output end of the positioning processor 314.
Preferably, the inertial motion-capture module 32 comprises sensors 322 and an inertial motion-capture processor 323. The sensors 322 are arranged at multiple second joint points of the subject and collect the acceleration of each second joint point and the angular velocity of the lines connecting the second joint points. The inertial motion-capture processor 323 comprises an acquisition input end and an orientation inertial positioning data output end; the acquisition input end is connected to the sensors, and the second output end 321 is the orientation inertial positioning data output end.
Preferably, the hybrid motion-capture server 33 further comprises a calibration module 334, which compares the data of the first input end 331 and the second input end 332 and outputs the calibrated data at the third output end 333.
Preferably, the head-mounted display device further comprises a body harness, and the optical positioning module and the inertial motion-capture module are mounted on the body harness.
Beneficial effects of the utility model: the hybrid motion-capture spatial positioning system captures and analyzes human motion; the analysis results are sent to the image rendering computer for rendering, and the rendering results are sent to the central server. The central server integrates the multiple rendering results, determines the final rendering result, and sends it to each person's head-mounted display device, where it is finally viewed. The utility model is easy to operate, simple in construction, and of high commercial value.
Brief description of the drawings
Other features, objects, and advantages of the utility model will become more apparent upon reading the detailed description of the non-limiting embodiments made with reference to the following drawings:
Fig. 1 shows a specific embodiment of the utility model: a module connection diagram of a virtual reality multi-person interaction system;
Fig. 2 shows a first embodiment of the utility model: a module connection diagram of the optical positioning module;
Fig. 3 shows a second embodiment of the utility model: a module connection diagram of the inertial motion-capture module; and
Fig. 4 shows a third embodiment of the utility model: a module connection diagram of the hybrid motion-capture server.
Detailed description of the embodiments
In order to present the technical solution of the utility model more clearly, the utility model is further described below with reference to the accompanying drawings.
Fig. 1 shows a specific embodiment of the utility model, a module connection diagram of a virtual reality multi-person interaction system. Those skilled in the art will understand that the system fits the human body with multiple positioning modules whose data are relayed to a central server; the central server integrates the real-time motion of multiple people, renders it, and presents the result on each person's head-mounted display device, thereby achieving multi-person interaction in a virtual environment through wearable displays. Specifically, the virtual reality multi-person interaction system comprises: a head-mounted display device 1, an image rendering computer 2, a hybrid motion-capture spatial positioning system 3, and a central server 4. Those skilled in the art will understand that the head-mounted display device 1 may be a virtual reality helmet or virtual reality glasses, used mainly to show the wearer the virtual reality scene. More specifically, the head-mounted display device further comprises a body harness adapted to the human figure. The optical positioning modules and inertial motion-capture modules are mounted on this harness and are used to obtain optical positioning data and inertial motion-capture data, as further described in the embodiments below and not repeated here.
The image rendering computer 2 may be a processing unit built into the head-mounted display device 1 or a fixed computing facility located away from it. It is mainly used to receive and analyze the captured human motion, and to render the virtual scene on the basis of that analysis. The hybrid motion-capture spatial positioning system 3 is mainly used to capture the motion of the participants in the multi-person interaction and to generate the trajectories, orientations, and so on derived from that motion, providing the conditions and basis for the image rendering computer 2 to render. The central server integrates the multi-person motion captured by the multiple hybrid motion-capture spatial positioning systems 3, renders a different scene for each head-mounted display device 1, and sends it to each head-mounted display device.
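As a rough illustration of the central server's role described above, the following Python sketch collects the latest pose reported by each capture system and hands the merged multi-person state to each user's rendering computer. All class and method names are invented for this example, and the networking layer is omitted; the patent does not prescribe an implementation.

```python
class CentralServer:
    """Hypothetical model of the central server (4)."""

    def __init__(self):
        self.world_state = {}                 # user_id -> latest captured pose

    def receive(self, user_id, pose):
        """Accept an update from one hybrid motion-capture system (3)."""
        self.world_state[user_id] = pose

    def snapshot_for(self, user_id):
        """State handed to this user's image rendering computer (2)."""
        # every participant appears in everyone's scene, including oneself
        return dict(self.world_state)


server = CentralServer()
server.receive("player1", (0.0, 1.7, 0.0))
server.receive("player2", (1.2, 1.6, 0.4))
print(server.snapshot_for("player1"))   # both players' poses
```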
Preferably, each head-mounted display device 1 is connected to a corresponding image rendering computer 2. The hybrid motion-capture spatial positioning system 3 comprises: multiple optical positioning modules 31 and inertial motion-capture modules 32 arranged at multiple points on the subject's body, and a hybrid motion-capture server 33. The optical positioning module 31 comprises a first output end 311; the inertial motion-capture module 32 comprises a second output end 321; the hybrid motion-capture server 33 comprises a first input end 331, a second input end 332, and a third output end 333. Further, in such embodiments the head-mounted display devices may share a single image rendering computer 2, while in other embodiments each head-mounted display device 1 is connected to its own image rendering computer 2. In a preferred embodiment, multiple optical positioning modules 31 and inertial motion-capture modules 32 are arranged at the head and hand positions of the subject; in other embodiments they may also be arranged at positions such as the feet and waist.
The optical positioning module 31 achieves positioning mainly by means of infrared imaging, while the inertial motion-capture module 32 captures motion trajectories and positions through rate sensors. The hybrid motion-capture server 33 integrates and computes over the data of the optical positioning module 31 and the inertial motion-capture module 32 to derive an optimal motion orientation and movement trajectory. The first input end 331 and second input end 332 of the hybrid motion-capture server 33 obtain the data of the optical positioning module 31 and the inertial motion-capture module 32 respectively, and the third output end 333 transmits the integrated data.
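The patent leaves the fusion algorithm unspecified. As a loose illustration of how such a server might combine the two streams, the sketch below blends a drifting but high-rate inertial estimate with an accurate but intermittent optical fix using a complementary filter; the weighting constant and function names are assumptions, not part of the utility model.

```python
# Hedged sketch (not the patented algorithm): complementary-filter
# fusion of one tracked coordinate, assuming optical fixes arrive
# intermittently while inertial estimates arrive every frame.

ALPHA = 0.98  # assumed weight on the inertial (high-rate) estimate

def fuse(optical_pos, inertial_pos):
    """Return a fused position for one joint.

    optical_pos  -- position from the optical module (31), or None
                    if no marker was visible this frame
    inertial_pos -- dead-reckoned position from the inertial module (32)
    """
    if optical_pos is None:
        return inertial_pos                       # inertial-only fallback
    # pull the drifting inertial estimate toward the optical fix
    return ALPHA * inertial_pos + (1.0 - ALPHA) * optical_pos


print(fuse(1.00, 1.05))   # ~1.049: drift slowly corrected
print(fuse(None, 1.05))   # 1.05: no optical fix this frame
```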
Preferably, the first output end 311 is connected to the first input end 331, the second output end 321 is connected to the second input end 332, and the third output end 333 is connected to the image rendering computer 2, which supplies the display image output by the head-mounted display device 1. The first output end 311 is an optical positioning data output end; its connection to the first input end 331 transfers the positioning results of the optical positioning module 31 to the hybrid motion-capture server 33. Likewise, the connection of the second output end 321 to the second input end 332 transfers the capture results to the hybrid motion-capture server 33. Further, the hybrid motion-capture server 33 transmits the integrated data through the third output end 333 to the image rendering computer 2 for rendering, and the rendering result is transferred to the head-mounted display device 1 as the display image.
Preferably, the head-mounted display device 1 comprises a display lens 11 and a display-image input end 12, the latter connected to the image rendering computer 2. In such embodiments the display lens is fitted to the position of the wearer's eyes and shows the display image from the image rendering computer 2, while the display-image input end 12 receives that display image.
Preferably, the image rendering computer 2 comprises a hybrid-data input end 21, and an image generation module 22, image rendering module 23, and display-image output end 24 connected in sequence. The hybrid-data input end 21 is connected to the third output end 333, and the display-image output end 24 is connected to the head-mounted display device 1. Those skilled in the art will understand that the third output end 333 is a hybrid data output end, and the hybrid-data input end 21 receives the motion data of the third output end 333 from the hybrid motion-capture server 33. After obtaining the motion data, the image rendering computer 2 preferably establishes the virtual environment and builds the virtual scene through the image generation module 22, renders the motion data through the image rendering module 23, and merges the rendering result into the virtual environment. Further, the display-image output end 24 is connected to the display-image input end 12 of the head-mounted display device 1, through which the final rendering result is transferred to the head-mounted display device 1.
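To make the data flow of this paragraph concrete, here is a minimal, hypothetical sketch: fused motion data enter at the hybrid-data input end (21), the image generation module (22) maintains the virtual scene, and the rendering step produces the frame handed to the display-image output end (24). All class and method names are illustrative, and the rasterizer is a stub.

```python
class Scene:
    """Stub virtual environment built by the image generation module (22)."""

    def __init__(self):
        self.avatars = {}

    def place_avatar(self, user_id, pose):
        self.avatars[user_id] = pose

    def rasterize(self):
        # stand-in for a real renderer: return a printable frame record
        return {"frame": dict(self.avatars)}


class ImageRenderingComputer:
    """Hypothetical model of the image rendering computer (2)."""

    def __init__(self):
        self.scene = Scene()                       # image generation module (22)

    def on_hybrid_data(self, fused_poses):         # hybrid-data input end (21)
        for user_id, pose in fused_poses.items():  # merge motion into the scene
            self.scene.place_avatar(user_id, pose)
        return self.scene.rasterize()              # display-image output end (24)


renderer = ImageRenderingComputer()
print(renderer.on_hybrid_data({"player1": (0.0, 1.7, 0.0)}))
```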
Fig. 2 shows a first embodiment of the utility model, a module connection diagram of the optical positioning module. As the first embodiment of the utility model, the optical positioning module is one part of the hybrid motion-capture spatial positioning system and provides visual positioning.
Further, the optical positioning module 31 comprises: optical positioning points 312, an infrared camera 313, and a positioning processor 314. The optical positioning points 312 are arranged at multiple first joint points of the subject; the infrared camera 313 captures infrared images of the optical positioning points 312 and transmits them to the positioning processor 314; the first output end 311 is the output end of the positioning processor 314.
In such embodiments, the optical positioning points 312 are preferably provided at key positions of the human body, preferably the head and hands, and may also be arranged at positions such as the feet, in order to locate the body's orientation. The infrared camera 313 performs infrared imaging of the optical positioning points 312, obtains infrared images reflecting the visible displacement, orientation changes, and left-right and up-down movement of the optical positioning points, and transfers them to the positioning processor. The positioning processor pre-processes the infrared data and transfers it through the first output end 311 to the first input end 331.
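The pre-processing step is not detailed in the patent. A common approach for infrared marker tracking, shown below purely as an assumption, is to threshold the infrared frame and report the pixel centroid of the bright blob; real systems would segment each blob separately and then triangulate across cameras.

```python
import numpy as np

# Assumed pre-processing for the positioning processor (314):
# segment bright marker pixels and report their centroid. This
# sketch treats all bright pixels as one marker for brevity.

def marker_centroid(ir_frame, threshold=200):
    """Return the (row, col) centroid of bright pixels, or None."""
    rows, cols = np.nonzero(ir_frame >= threshold)
    if rows.size == 0:
        return None                    # no marker visible this frame
    return (rows.mean(), cols.mean())


frame = np.zeros((8, 8), dtype=np.uint8)
frame[3:5, 4:6] = 255                  # synthetic marker blob
print(marker_centroid(frame))          # centroid at row 3.5, col 4.5
```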
Fig. 3 shows a second embodiment of the utility model, a module connection diagram of the inertial motion-capture module. As the second embodiment of the utility model, the inertial motion-capture module is another part of the hybrid motion-capture spatial positioning system and provides positioning of motion.
Further, the inertial motion-capture module 32 comprises sensors 322 and an inertial motion-capture processor 323. The sensors 322 perceive the motion of the human body and are preferably acceleration sensors and angular-rate sensors; in other embodiments the sensors also include displacement sensors, height sensors, and the like. The inertial motion-capture processor 323 processes the data acquired by the sensors 322.
Further, the sensors 322 are arranged at multiple second joint points of the subject and collect the acceleration of each second joint point and the angular velocity of the lines connecting the second joint points. Those skilled in the art will understand that the second joint points may cover the positions of the first joint points or may be set at additional joints, for example the wrists, ankles, knees, and shoulders. The inertial motion-capture processor 323 comprises an acquisition input end and an orientation inertial positioning data output end. The acquisition input end is connected to the sensors, and the second output end 321 is the orientation inertial positioning data output end, connected to the second input end 332.
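For illustration only, the sketch below integrates the angular-velocity samples that the patent says the sensors (322) collect into a joint-angle estimate. The sample rate and the simple Euler integration scheme are assumptions; the patent specifies only acceleration and angular velocity as the captured quantities.

```python
DT = 1.0 / 200.0   # assumed 200 Hz sensor sample rate

def integrate_gyro(angle_rad, angular_rate_rad_s):
    """One Euler step: advance a joint angle by its angular rate."""
    return angle_rad + angular_rate_rad_s * DT


angle = 0.0
for rate in (0.5, 0.5, 0.4):           # synthetic angular-rate samples (rad/s)
    angle = integrate_gyro(angle, rate)
print(f"estimated joint angle: {angle:.4f} rad")   # 0.0070 rad
```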
Fig. 4 shows a third embodiment of the utility model, a module connection diagram of the hybrid motion-capture server. The hybrid motion-capture server combines the visual positioning obtained in Embodiment 1 with the motion positioning obtained in Embodiment 2 to derive a highly preferable presentation of the motion image.
Specifically, the hybrid motion-capture server 33 further comprises a calibration module 334, which compares the data of the first input end 331 and the second input end 332 and outputs the calibrated data at the third output end 333.
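The patent states only that the calibration module compares the two inputs. One plausible reading, sketched below under that assumption, resolves large disagreement in favour of the drift-free optical fix and averages away small disagreement; the tolerance value is invented for the example.

```python
TOLERANCE = 0.05   # metres; assumed disagreement threshold

def calibrate(optical, inertial):
    """Compare the two input streams and return calibrated data."""
    if abs(optical - inertial) > TOLERANCE:
        return optical                   # large drift: trust the optical fix
    return 0.5 * (optical + inertial)    # small disagreement: average


print(calibrate(1.00, 1.02))   # 1.01  (small disagreement, averaged)
print(calibrate(1.00, 1.20))   # 1.00  (drift exceeded tolerance)
```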
As the third embodiment of the utility model, those skilled in the art will understand that in real virtual-environment interaction, objective circumstances often prevent people from fully conveying their intentions through their limbs. What the calibration module must do is analyze the motion and realize the person's intended expression completely. In further embodiments, the calibration module can also calibrate non-standard postures and complete slack movements, so that through the head-mounted display device others see complete, smooth, aesthetically pleasing actions.
For example, in one specific embodiment, suppose people play a basketball match using the virtual reality multi-person interaction system, and one of them attempts a jumping dunk. Under the actual conditions this person has reached the target threshold height and is correctly positioned for the dunk, but owing to objective constraints the dunk posture contains errors and the dunk is only half completed. The calibration module can then repair the dunk posture by calibration, so that others watching see a complete dunk action, including the landing. In another variant of this embodiment, if the person jumps without reaching the system-defined target height, the calibration module judges from the wrist trajectory captured by the inertial motion-capture module 32 whether the action is a dunk or a shot. Finally, the calibrated data are transferred to the image rendering computer as the person's final data.
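As a hedged illustration of the dunk-versus-shot decision in this example, the sketch below classifies the action from the jump height and the peak of the wrist trajectory. The thresholds and feature names are invented; the patent describes only the decision, not how it is computed.

```python
TARGET_HEIGHT = 0.75   # metres; hypothetical system-defined jump threshold
RIM_HEIGHT = 3.05      # metres; standard rim height, used as an assumption

def classify_action(jump_height, wrist_peak_height):
    """Decide whether a captured motion is rendered as a dunk or a shot."""
    if jump_height >= TARGET_HEIGHT:
        return "dunk"        # target height reached: repair into a full dunk
    if wrist_peak_height >= RIM_HEIGHT:
        return "dunk"        # wrist trajectory reached the rim anyway
    return "shot"            # otherwise rendered as a jump shot


print(classify_action(0.80, 3.10))  # dunk
print(classify_action(0.40, 2.60))  # shot
```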
Specific embodiments of the utility model have been described above. It should be understood that the utility model is not limited to the particular implementations described; those skilled in the art may make various deformations or amendments within the scope of the claims without affecting the substantive content of the utility model.

Claims (8)

  1. A virtual reality multi-person interaction system, characterized by comprising: a head-mounted display device (1), an image rendering computer (2), a hybrid motion-capture spatial positioning system (3), and a central server (4); the head-mounted display device (1) is connected to a corresponding image rendering computer (2); the hybrid motion-capture spatial positioning system (3) comprises: multiple optical positioning modules (31) and inertial motion-capture modules (32) arranged at multiple points on the subject's body, and a hybrid motion-capture server (33); the optical positioning module (31) comprises: a first output end (311); the inertial motion-capture module (32) comprises: a second output end (321); the hybrid motion-capture server (33) comprises: a first input end (331), a second input end (332), and a third output end (333); the first output end (311) is connected to the first input end (331), the second output end (321) is connected to the second input end (332), and the third output end (333) is connected to the image rendering computer (2), which supplies the display image output by the head-mounted display device (1).
  2. The virtual reality multi-person interaction system of claim 1, characterized in that the head-mounted display device (1) comprises: a display lens (11) and a display-image input end (12), the display-image input end (12) being connected to the image rendering computer (2).
  3. The virtual reality multi-person interaction system of claim 1, characterized in that the image rendering computer (2) comprises: a hybrid-data input end (21), and an image generation module (22), image rendering module (23), and display-image output end (24) connected in sequence; the hybrid-data input end (21) is connected to the third output end (333), and the display-image output end (24) is connected to the head-mounted display device (1).
  4. The virtual reality multi-person interaction system of claim 3, characterized in that the first output end (311) is an optical positioning data output end, the second output end (321) is an inertial motion data output end, and the third output end (333) is a hybrid data output end.
  5. The virtual reality multi-person interaction system of claim 1, characterized in that the optical positioning module (31) comprises: optical positioning points (312), an infrared camera (313), and a positioning processor (314); the optical positioning points (312) are arranged at multiple first joint points of the subject; the infrared camera (313) captures infrared images of the optical positioning points (312) and transmits them to the positioning processor (314); the first output end (311) is the output end of the positioning processor (314).
  6. The virtual reality multi-person interaction system of claim 1, characterized in that the inertial motion-capture module (32) comprises: sensors (322) and an inertial motion-capture processor (323); the sensors (322) are arranged at multiple second joint points of the subject and collect the acceleration of each second joint point and the angular velocity of the lines connecting the second joint points; the inertial motion-capture processor (323) comprises an acquisition input end and an orientation inertial positioning data output end; the acquisition input end is connected to the sensors, and the second output end (321) is the orientation inertial positioning data output end.
  7. The virtual reality multi-person interaction system of claim 1, characterized in that the hybrid motion-capture server (33) further comprises: a calibration module (334), which compares the data of the first input end (331) and the second input end (332) and outputs the calibrated data at the third output end (333).
  8. The virtual reality multi-person interaction system of claim 7, characterized in that the head-mounted display device further comprises a body harness, and the optical positioning module and the inertial motion-capture module are mounted on the body harness.
CN201720299529.7U 2017-03-24 2017-03-24 Virtual reality multi-person interaction system Active CN206819290U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201720299529.7U CN206819290U (en) 2017-03-24 2017-03-24 Virtual reality multi-person interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201720299529.7U CN206819290U (en) 2017-03-24 2017-03-24 Virtual reality multi-person interaction system

Publications (1)

Publication Number Publication Date
CN206819290U true CN206819290U (en) 2017-12-29

Family

ID=60752403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201720299529.7U Active CN206819290U (en) 2017-03-24 2017-03-24 A kind of system of virtual reality multi-person interactive

Country Status (1)

Country Link
CN (1) CN206819290U (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106843507A (en) * 2017-03-24 2017-06-13 苏州创捷传媒展览股份有限公司 A kind of method and system of virtual reality multi-person interactive
CN106843507B (en) * 2017-03-24 2024-01-05 苏州创捷传媒展览股份有限公司 Virtual reality multi-person interaction method and system
TWI688900B (en) * 2018-01-08 2020-03-21 宏達國際電子股份有限公司 Reality system and control method suitable for head-mounted devices located in physical environment
US10600205B2 (en) 2018-01-08 2020-03-24 Htc Corporation Anchor recognition in reality system
US11120573B2 (en) 2018-01-08 2021-09-14 Htc Corporation Anchor recognition in reality system
CN110928404A (en) * 2018-09-19 2020-03-27 未来市股份有限公司 Tracking system and related tracking method thereof
CN110928404B (en) * 2018-09-19 2024-04-19 未来市股份有限公司 Tracking system and related tracking method thereof
CN112306240A (en) * 2020-10-29 2021-02-02 中国移动通信集团黑龙江有限公司 Virtual reality data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
GR01 Patent grant