CN105824416A - Method for combining virtual reality technique with cloud service technique - Google Patents

Method for combining virtual reality technique with cloud service technique

Info

Publication number
CN105824416A
CN105824416A
Authority
CN
China
Prior art keywords
virtual reality
scene
virtual
reference point
reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610150103.5A
Other languages
Chinese (zh)
Other versions
CN105824416B (en)
Inventor
赖斌斌
江兰波
樊星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FUJIAN DUODUOYUN TECHNOLOGY Co.,Ltd.
Original Assignee
Chengdu Chainsaw Interactive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Chainsaw Interactive Technology Co Ltd filed Critical Chengdu Chainsaw Interactive Technology Co Ltd
Priority to CN201610150103.5A priority Critical patent/CN105824416B/en
Publication of CN105824416A publication Critical patent/CN105824416A/en
Application granted granted Critical
Publication of CN105824416B publication Critical patent/CN105824416B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a method for combining virtual reality technology with cloud service technology. The method comprises a scene data collection and uploading step, a data downloading step, a scene mapping step, a positioning step, and a motion mode mapping step. The method associates the virtual reality world with the real world: positioning results are displayed in real time on a mini-map of the virtual reality world, and the user can observe his or her own movements in real time. In particular, buildings in the real world are presented as three-dimensional views on the mini-map and linked to positioning, which is vivid and intuitive. Real-scene sensing technology and corresponding computation are used to understand and analyze the real environment, and selected features of the real environment are mapped into the virtual scene presented to the user, thereby improving the user experience.

Description

A method for combining virtual reality technology with cloud service technology
Technical field
The present invention relates to a method for combining virtual reality technology with cloud service technology.
Background art
Virtual reality technology uses computer simulation to create and let users experience a virtual world: the computer generates a simulated environment, an interactive three-dimensional dynamic view driven by multi-source information fusion and by simulation of entity behavior, in which the user is immersed. Virtual reality uses computer simulation to produce a three-dimensional virtual world and provides the user with simulated visual, auditory, tactile and other sensory input, so that the user feels personally present and can observe objects in the three-dimensional space freely and in real time. Virtual reality combines many technologies, including real-time three-dimensional computer graphics, wide-field-of-view stereoscopic display, tracking of the observer's head, eyes and hands, haptic/force feedback, stereo sound, network transmission, and speech input and output.
In virtual reality technology, when the user moves, the computer immediately performs complex calculations and returns an accurate 3D image of the world to produce a sense of presence. The technology integrates later developments in computer graphics (CG), computer simulation, artificial intelligence, sensing, display and parallel network processing, and is a high-technology simulation system generated with the aid of computer technology.
However, existing virtual reality technology cannot be associated with the real world: the user cannot connect the virtual reality world with the real world, which always produces a sense of distance. Moreover, most virtual reality systems are not combined with cloud services.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a method for combining virtual reality technology with cloud service technology, which associates the virtual reality world with the real world and uses the cloud to make the method easy to realize on multiple virtual reality terminals.
The object of the invention is achieved through the following technical solution: a method for combining virtual reality technology with cloud service technology, comprising a scene data collection and uploading step, a data downloading step, a scene mapping step, a positioning step, and a motion mode mapping step;
The scene data collection and uploading step includes: data acquisition is performed in advance on the real entity objects to be virtualized; once acquisition is complete, the data are uploaded to the cloud server and stored;
The data downloading step includes: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal;
The scene mapping step displays, on the virtual reality terminal, a virtual reality view of the virtual scene built from the downloaded data and of the region around the user. It includes a first scene mapping sub-step, which places virtual network elements and real entity objects in a geographic information system (GIS) to form a composite space, and a second scene mapping sub-step, which maps the surrounding environment into the virtual scene;
The geographic information system includes an electronic three-dimensional map. The first scene mapping sub-step includes the following sub-steps:
S111: convert the network elements into GIS entities, a network element being a virtual object that does not exist in reality;
S112: render the composite space as a three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects;
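As an editorial illustration only (not part of the claimed method), the composite space of sub-steps S111-S113 can be thought of as a collection of georeferenced entities, some real and some purely virtual. The Python sketch below uses invented class and field names (GisEntity, CompositeSpace, and the example coordinates) to show one way such a structure could be organized before three-dimensional visualization.

```python
from dataclasses import dataclass

@dataclass
class GisEntity:
    """An object placed in the composite space: either a real entity captured
    from the uploaded scene data or a virtual network element (e.g. an NPC)."""
    name: str
    lon: float          # geographic coordinates on the electronic 3D map
    lat: float
    altitude: float     # e.g. floor height inside a building
    is_virtual: bool    # True for network elements that do not exist in reality

class CompositeSpace:
    """Composite space of sub-steps S112/S113: real entities plus GIS-converted
    virtual network elements, ready for 3D visualization on the terminal."""
    def __init__(self):
        self.entities = []

    def add(self, entity: GisEntity):
        self.entities.append(entity)

    def virtual_objects(self):
        # Positions the terminal should highlight when rendering the space.
        return [e for e in self.entities if e.is_virtual]

# Example: a real mall building plus a virtual NPC placed on an upper floor.
space = CompositeSpace()
space.add(GisEntity("mall_building", 104.0600, 30.5700, 0.0, is_virtual=False))
space.add(GisEntity("virtual_npc", 104.0603, 30.5702, 8.5, is_virtual=True))
```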
The second scene mapping sub-step includes the following sub-steps:
S121: capture real-scene information of the user's surroundings with a real-scene sensing module;
S122: a computing module extracts real-scene features from the real-scene information, maps them, according to preset mapping relations, to features used to build the virtual scene, and constructs virtual reality scene information from those features;
S123: the virtual reality terminal presents the virtual reality scene information;
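Purely as an illustration of the preset mapping relations in sub-step S122, the following sketch maps recognized real-scene feature labels to virtual assets. The FEATURE_TO_VIRTUAL table, the feature labels, and the asset names are invented examples, not taken from the patent.

```python
# Preset mapping relations (sub-step S122): each real-scene feature class the
# computing module can recognize is mapped to an asset used to build the
# virtual scene. Labels and asset names here are invented examples.
FEATURE_TO_VIRTUAL = {
    "wall":      "stone_rampart",
    "corridor":  "forest_path",
    "escalator": "floating_stairway",
    "person":    "animated_character",
}

def build_virtual_scene(recognized_features):
    """Map recognized real-scene features to virtual-scene features and
    assemble the virtual reality scene information presented in sub-step S123."""
    scene = []
    for feature in recognized_features:
        asset = FEATURE_TO_VIRTUAL.get(feature["label"])
        if asset is None:
            continue  # features without a preset mapping are not rendered
        scene.append({"asset": asset, "pose": feature["pose"]})
    return scene

# recognized_features would come from pattern-recognition analysis of the
# time-series frames captured by the real-scene sensing module (sub-step S121).
example = build_virtual_scene([{"label": "corridor", "pose": (1.0, 2.0, 0.0)}])
```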
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information into the database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: from the collected data queue, calculate the mean RSSI of each AP at the current location;
S24: traverse all reference points; for each AP, judge whether the mean RSSI calculated in step S23 falls within that AP's RSSI interval for a given reference point, and thereby whether that reference point belongs to the AP's candidate set;
S25: compute the intersection of the candidate sets of all APs:
(1) if the intersection contains exactly one reference point, output that reference point's coordinates as the position estimate, and terminate;
(2) if the intersection contains more than one reference point, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with the weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each candidate set and take the center of those centers as the global center; use the Euclidean distance to discard the candidate set whose center is farthest from the global center, then apply sub-steps (1), (2) and (3) of step S25 to the remaining candidate sets until an estimate is obtained, and terminate; if the last layer is reached and no result is obtained, perform sub-step (4);
(4) if the intersection is still empty when sub-step (3) reaches the last layer, use the error distance between the current mean RSSI and the reference points' mean RSSI and, following the minimum-RSSI-error principle, compute the estimate with the weighted k-nearest-neighbor algorithm;
S26: map the positioning result into the three-dimensionally visualized composite space and display the current location in the composite space;
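For illustration only, the candidate-set intersection and weighted k-nearest-neighbor fallback of steps S24-S25 could be sketched as follows. It assumes a fingerprint database of the kind built by the off-line training phase described below (per reference point and per AP: mean RSSI and a [min, max] interval); the data layout and all function names are editorial assumptions rather than the patent's own API.

```python
import math
from collections import defaultdict

# Assumed data layout (editorial, not from the patent):
#   fingerprints[(point_id, ssid)] = {"rssi_mean": .., "rssi_min": .., "rssi_max": ..}
#   coords[point_id] = (x, y)
#   current_means[ssid] = mean RSSI measured at the unknown location (step S23)

def candidate_sets(current_means, fingerprints):
    """Step S24: for each AP, collect the reference points whose stored
    [rssi_min, rssi_max] interval contains the currently measured mean RSSI."""
    sets = defaultdict(set)
    for (point_id, ssid), fp in fingerprints.items():
        rssi = current_means.get(ssid)
        if rssi is not None and fp["rssi_min"] <= rssi <= fp["rssi_max"]:
            sets[ssid].add(point_id)
    return sets

def rssi_error(point_id, current_means, fingerprints):
    """Distance between the current mean-RSSI vector and a reference point's
    stored mean-RSSI vector, over the APs they have in common."""
    diffs = [current_means[ssid] - fp["rssi_mean"]
             for (pid, ssid), fp in fingerprints.items()
             if pid == point_id and ssid in current_means]
    return math.sqrt(sum(d * d for d in diffs)) if diffs else 1e9

def weighted_knn(points, errors, coords, k=3):
    """Steps S25(2)/(4): average the coordinates of the k reference points
    with the smallest RSSI error, weighted by the inverse of that error."""
    best = sorted(points, key=lambda p: errors[p])[:k]
    weights = [1.0 / (errors[p] + 1e-6) for p in best]
    total = sum(weights)
    x = sum(w * coords[p][0] for w, p in zip(weights, best)) / total
    y = sum(w * coords[p][1] for w, p in zip(weights, best)) / total
    return x, y

def estimate_position(current_means, fingerprints, coords):
    """Step S25: intersect the per-AP candidate sets, with fallbacks."""
    sets = candidate_sets(current_means, fingerprints)
    errors = {p: rssi_error(p, current_means, fingerprints) for p in coords}
    remaining = list(sets.values())
    while remaining:
        inter = set.intersection(*remaining)
        if len(inter) == 1:                      # S25(1): a unique reference point
            return coords[inter.pop()]
        if len(inter) > 1:                       # S25(2): weighted k-NN on the intersection
            return weighted_knn(inter, errors, coords)
        if len(remaining) == 1:                  # last layer reached with no result
            break
        # S25(3): drop the candidate set whose center is farthest (Euclidean
        # distance) from the global center of all set centers, then retry.
        centers = [(sum(coords[p][0] for p in s) / len(s),
                    sum(coords[p][1] for p in s) / len(s)) for s in remaining]
        gx = sum(c[0] for c in centers) / len(centers)
        gy = sum(c[1] for c in centers) / len(centers)
        farthest = max(range(len(remaining)),
                       key=lambda i: (centers[i][0] - gx) ** 2 + (centers[i][1] - gy) ** 2)
        remaining.pop(farthest)
    # S25(4): fall back to weighted k-NN over all reference points by RSSI error.
    return weighted_knn(list(coords), errors, coords)
```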
The motion mode mapping step includes the following sub-steps:
S31: place multiple sensing assemblies, associated with the virtual reality terminal, at the joints of the human body;
S32: each sensing assembly sends its information to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
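As a rough illustration of sub-steps S31-S33 (not the patent's own protocol), the sketch below models one reading from a joint-mounted sensing assembly and its parsing on the terminal; the JSON wire format and all names are assumptions.

```python
import json
from dataclasses import dataclass

@dataclass
class JointSample:
    """One real-time reading from a sensing assembly worn at a body joint
    (sub-step S31), combining accelerometer, gyroscope and magnetometer."""
    joint: str     # e.g. "left_knee"
    accel: tuple   # 3-axis acceleration (m/s^2)
    gyro: tuple    # 3-axis angular rate (rad/s)
    mag: tuple     # 3-axis geomagnetic field (uT)

def parse_packet(packet: bytes) -> JointSample:
    """Sub-step S33: the virtual reality terminal parses a packet received from
    a sensing assembly. A JSON encoding is assumed here purely for illustration;
    the patent does not specify a wire format."""
    msg = json.loads(packet.decode("utf-8"))
    return JointSample(msg["joint"], tuple(msg["accel"]),
                       tuple(msg["gyro"]), tuple(msg["mag"]))

def apply_to_avatar(avatar_pose: dict, sample: JointSample) -> dict:
    """Update the avatar shown in the virtual scene with the latest joint
    reading so the user can observe their own movements in real time."""
    avatar_pose[sample.joint] = {"accel": sample.accel, "gyro": sample.gyro}
    return avatar_pose
```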
The virtual reality terminal is a virtual reality headset or a mobile terminal.
The positioning step also includes an off-line training phase:
S201: discretize the area to be positioned and uniformly take N positions in it as reference points;
S202: scan the WiFi signal at each reference point from step S201 and record the received signal strength indicator (RSSI) of every AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate for each AP the RSSI mean, variance, and minimum-maximum interval at this reference point, and save these parameters in the database together with the SSID of the corresponding AP;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby building a complete RSSI distribution map of the area to be positioned.
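As an editorial illustration only, the off-line training phase S201-S204 could be sketched as below. The scan_wifi function, the database schema, and all other names are hypothetical placeholders, not taken from the patent.

```python
import sqlite3
from statistics import mean, pvariance

def open_db(path="radio_map.db"):
    """Create the fingerprint table used by steps S203/S204 (schema assumed)."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS fingerprint(
        point_id INTEGER, x REAL, y REAL, ssid TEXT,
        rssi_mean REAL, rssi_var REAL, rssi_min REAL, rssi_max REAL)""")
    return db

def scan_wifi():
    """Hypothetical placeholder: one WiFi scan at the current physical
    location, returned as a list of (ssid, rssi) readings."""
    raise NotImplementedError

def train_reference_point(db, point_id, x, y, num_scans=60):
    """Steps S202-S203: record RSSI for a continuous period at one reference
    point, then store mean, variance and the min/max interval per AP (SSID)."""
    samples = {}                         # ssid -> list of RSSI values
    for _ in range(num_scans):
        for ssid, rssi in scan_wifi():
            samples.setdefault(ssid, []).append(rssi)
    for ssid, values in samples.items():
        db.execute("INSERT INTO fingerprint VALUES (?,?,?,?,?,?,?,?)",
                   (point_id, x, y, ssid, mean(values), pvariance(values),
                    min(values), max(values)))

def build_radio_map(db, reference_points):
    """Steps S201/S204: loop over the N uniformly chosen reference points so
    the whole area ends up with a complete RSSI distribution map."""
    for point_id, (x, y) in enumerate(reference_points):
        train_reference_point(db, point_id, x, y)
    db.commit()
```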
The three-dimensionally visualized composite space is a three-dimensional view of a building.
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
The real-scene information captured in step S121 consists of time-series frame data of images of the user's surroundings; the computing module extracts real-scene features from the real-scene information by performing pattern recognition analysis on the time-series frame data.
The real-scene sensing module includes one or more of: a depth camera sensor, a combined unit of a depth camera sensor and an RGB image sensor, an ultrasonic positioning sensing module, a thermal imaging positioning sensing module, and an electromagnetic positioning sensing module.
The sensing assembly includes one or more of a three-axis acceleration sensor, a three-axis angular rate sensor, and a three-axis geomagnetic sensor.
When the data of a scene are updated on the cloud server, a push message is sent to every virtual reality terminal that has downloaded that scene, reminding the terminal to update.
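To illustrate the update-push behavior (the patent does not specify a server implementation), the minimal sketch below assumes the cloud server remembers which terminals downloaded which scene and calls a push gateway when that scene changes; the class and parameter names are invented.

```python
from collections import defaultdict

class SceneCloud:
    """Minimal cloud-server bookkeeping: remember which terminals downloaded
    which scene and push an update reminder when that scene's data change."""
    def __init__(self, notifier):
        self.scenes = {}                       # scene_id -> scene data blob
        self.subscribers = defaultdict(set)    # scene_id -> terminal ids
        self.notifier = notifier               # e.g. a mobile push gateway callable

    def upload(self, scene_id, data):
        self.scenes[scene_id] = data

    def download(self, terminal_id, scene_id):
        self.subscribers[scene_id].add(terminal_id)
        return self.scenes[scene_id]

    def update_scene(self, scene_id, new_data):
        self.scenes[scene_id] = new_data
        for terminal_id in self.subscribers[scene_id]:
            # Remind every terminal that downloaded this scene to update.
            self.notifier(terminal_id, f"Scene {scene_id} has been updated")

# Usage sketch: upload, download on a headset, then push on update.
cloud = SceneCloud(notifier=lambda terminal, msg: print(terminal, msg))
cloud.upload("mall", b"initial scene data")
cloud.download("headset-01", "mall")
cloud.update_scene("mall", b"new scene data")   # triggers a push reminder
```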
The beneficial effects of the invention are as follows:
The invention associates the virtual reality world with the real world: positioning results are displayed in real time on a mini-map of the virtual reality world, and the user can observe his or her own movements in real time. In addition, by using the cloud, the scene data are collected and stored on the cloud server in advance, so a virtual reality terminal that needs them only has to download them.
Specifically, buildings in the real world are displayed as three-dimensional views on the mini-map and linked to positioning, which is vivid and intuitive. Real-scene sensing technology and corresponding computation are used to understand and analyze the real environment, and selected features of the real environment are mapped into the virtual scene presented to the user, improving the user experience. The positioning method locates moving targets (people, devices) and displays their three-dimensional positions, providing coordinate estimates for the location-based services of the virtual reality terminal with high precision and low latency (the latency can be tuned indirectly through the scan period). In addition, the cloud server provides an update push function, which improves reliability.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention.
Detailed description of the invention
The technical solution of the invention is described in further detail below with reference to the accompanying drawings:
As shown in Fig. 1, a method for combining virtual reality technology with cloud service technology includes a scene data collection and uploading step, a data downloading step, a scene mapping step, a positioning step, and a motion mode mapping step;
The scene data collection and uploading step includes: data acquisition is performed in advance on the real entity objects to be virtualized; once acquisition is complete, the data are uploaded to the cloud server and stored;
The data downloading step includes: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal;
The scene mapping step displays, on the virtual reality terminal, a virtual reality view of the virtual scene built from the downloaded data and of the region around the user. It includes a first scene mapping sub-step, which places virtual network elements and real entity objects in a geographic information system (GIS) to form a composite space, and a second scene mapping sub-step, which maps the surrounding environment into the virtual scene;
The geographic information system includes an electronic three-dimensional map. The first scene mapping sub-step includes the following sub-steps:
S111: convert the network elements into GIS entities, a network element being a virtual object that does not exist in reality;
S112: render the composite space as a three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects;
The second scene mapping sub-step includes the following sub-steps:
S121: capture real-scene information of the user's surroundings with a real-scene sensing module;
S122: a computing module extracts real-scene features from the real-scene information, maps them, according to preset mapping relations, to features used to build the virtual scene, and constructs virtual reality scene information from those features;
S123: the virtual reality terminal presents the virtual reality scene information;
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information into the database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: from the collected data queue, calculate the mean RSSI of each AP at the current location;
S24: traverse all reference points; for each AP, judge whether the mean RSSI calculated in step S23 falls within that AP's RSSI interval for a given reference point, and thereby whether that reference point belongs to the AP's candidate set;
S25: compute the intersection of the candidate sets of all APs:
(1) if the intersection contains exactly one reference point, output that reference point's coordinates as the position estimate, and terminate;
(2) if the intersection contains more than one reference point, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with the weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each candidate set and take the center of those centers as the global center; use the Euclidean distance to discard the candidate set whose center is farthest from the global center, then apply sub-steps (1), (2) and (3) of step S25 to the remaining candidate sets until an estimate is obtained, and terminate; if the last layer is reached and no result is obtained, perform sub-step (4);
(4) if the intersection is still empty when sub-step (3) reaches the last layer, use the error distance between the current mean RSSI and the reference points' mean RSSI and, following the minimum-RSSI-error principle, compute the estimate with the weighted k-nearest-neighbor algorithm;
S26: map the positioning result into the three-dimensionally visualized composite space and display the current location in the composite space;
The motion mode mapping step includes the following sub-steps:
S31: place multiple sensing assemblies, associated with the virtual reality terminal, at the joints of the human body;
S32: each sensing assembly sends its information to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
The virtual reality terminal is a virtual reality headset or a mobile terminal.
The positioning step also includes an off-line training phase:
S201: discretize the area to be positioned and uniformly take N positions in it as reference points;
S202: scan the WiFi signal at each reference point from step S201 and record the received signal strength indicator (RSSI) of every AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate for each AP the RSSI mean, variance, and minimum-maximum interval at this reference point, and save these parameters in the database together with the SSID of the corresponding AP;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby building a complete RSSI distribution map of the area to be positioned.
The three-dimensionally visualized composite space is a three-dimensional view of a building.
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
The real-scene information captured in step S121 consists of time-series frame data of images of the user's surroundings; the computing module extracts real-scene features from the real-scene information by performing pattern recognition analysis on the time-series frame data.
The real-scene sensing module includes one or more of: a depth camera sensor, a combined unit of a depth camera sensor and an RGB image sensor, an ultrasonic positioning sensing module, a thermal imaging positioning sensing module, and an electromagnetic positioning sensing module.
The sensing assembly includes one or more of a three-axis acceleration sensor, a three-axis angular rate sensor, and a three-axis geomagnetic sensor.
When the data of a scene are updated on the cloud server, a push message is sent to every virtual reality terminal that has downloaded that scene, reminding the terminal to update.
This embodiment applies the method to an event in a shopping mall: an event is held in a mall, virtual reality is required, and the user must use the method of the present invention to find a particular object at a particular location, for example a virtual NPC.
First, data acquisition is performed in advance on the real entity objects to be virtualized, and the acquired data are uploaded to the cloud server and stored. In the data downloading step, the user downloads the corresponding scene data from the cloud server through the virtual reality terminal. When the data of a scene are updated on the cloud server, a push message is sent to every virtual reality terminal that has downloaded that scene, reminding it to update.
Next, the user obtains the first scene mapping: the virtual scene built from the downloaded data is displayed in virtual reality, namely the shape and floors of the whole mall and the specific location of the virtual NPC.
S111: convert the network elements into GIS entities, a network element being a virtual object that does not exist in reality; in this embodiment the network element is the virtual NPC;
S112: render the composite space as a three-dimensional visualization, i.e. obtain the shape and floors of the whole mall, optionally including part of the terrain outside the mall;
S113: the virtual reality terminal presents the three-dimensionally visualized shape and floors of the whole mall and the position of the virtual NPC within the mall; in this embodiment this is implemented as a mini-map (i.e. it occupies one corner of the picture in the virtual reality terminal).
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
Then the user obtains the second scene mapping, i.e. the virtual reality information of the surroundings.
S121: capture real-scene information of the user's surroundings with the real-scene sensing module;
S122: the computing module extracts real-scene features from the real-scene information, maps them, according to preset mapping relations, to features used to build the virtual scene, and constructs virtual reality scene information from those features;
S123: the virtual reality terminal presents the virtual reality scene information; in this embodiment it is rendered as virtual animation occupying the whole picture except the mini-map portion.
The real-scene information captured in step S121 consists of time-series frame data of images of the user's surroundings; the computing module extracts real-scene features from the real-scene information by performing pattern recognition analysis on the time-series frame data.
Next, the user is positioned.
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information into the database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: from the collected data queue, calculate the mean RSSI of each AP at the current location;
S24: traverse all reference points; for each AP, judge whether the mean RSSI calculated in step S23 falls within that AP's RSSI interval for a given reference point, and thereby whether that reference point belongs to the AP's candidate set;
S25: compute the intersection of the candidate sets of all APs:
(1) if the intersection contains exactly one reference point, output that reference point's coordinates as the position estimate, and terminate;
(2) if the intersection contains more than one reference point, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with the weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each candidate set and take the center of those centers as the global center; use the Euclidean distance to discard the candidate set whose center is farthest from the global center, then apply sub-steps (1), (2) and (3) of step S25 to the remaining candidate sets until an estimate is obtained, and terminate; if the last layer is reached and no result is obtained, perform sub-step (4);
(4) if the intersection is still empty when sub-step (3) reaches the last layer, use the error distance between the current mean RSSI and the reference points' mean RSSI and, following the minimum-RSSI-error principle, compute the estimate with the weighted k-nearest-neighbor algorithm;
S26: map the positioning result into the three-dimensionally visualized composite space and display the current location in the composite space; that is, the user's position is shown on the mini-map in real time.
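As an editorial aside (the patent only states that the position is shown on the mini-map in real time), one simple way to place the estimated coordinate on a mini-map drawn in a corner of the screen is a linear projection from the floor extent to the mini-map rectangle; all parameter names below are illustrative.

```python
def to_minimap(x, y, area_bounds, minimap_rect):
    """Project an estimated indoor position (x, y), expressed in the coordinate
    frame of the composite space, onto the mini-map rectangle drawn in a corner
    of the terminal's picture. Parameter names are assumptions, not the patent's."""
    min_x, min_y, max_x, max_y = area_bounds     # extent of the mall floor plan
    left, top, width, height = minimap_rect      # mini-map area in screen pixels
    u = left + (x - min_x) / (max_x - min_x) * width
    v = top + (y - min_y) / (max_y - min_y) * height
    return int(u), int(v)

# Example: a 1000 x 600 px picture with a 200 x 150 px mini-map in one corner.
marker = to_minimap(12.5, 30.0, area_bounds=(0, 0, 50, 80),
                    minimap_rect=(790, 10, 200, 150))
```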
The database used here requires an off-line training phase:
S201: discretize the area to be positioned and uniformly take N positions in it as reference points;
S202: scan the WiFi signal at each reference point from step S201 and record the received signal strength indicator (RSSI) of every AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate for each AP the RSSI mean, variance, and minimum-maximum interval at this reference point, and save these parameters in the database together with the SSID of the corresponding AP;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby building a complete RSSI distribution map of the area to be positioned.
Finally, the user's motion must be reflected in the composite space in real time:
S31: place multiple sensing assemblies, associated with the virtual reality terminal, at the joints of the human body;
S32: each sensing assembly sends its information to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
The sensing assembly includes one or more of a three-axis acceleration sensor, a three-axis angular rate sensor, and a three-axis geomagnetic sensor.
The user's movements are now reflected in the virtual reality scene information.
Once all of the above is complete, the user can start moving to search for the virtual NPC.
In this embodiment, the virtual reality terminal is a virtual reality headset or a mobile terminal; the specific choice depends on the merchant's cost considerations.
If a virtual reality headset is used, dedicated equipment must be purchased, but the effect is better; the user puts on the headset and searches for the virtual NPC. This approach is suitable when there are fewer participants.
If a mobile terminal such as a mobile phone or tablet is used, the corresponding software must be installed; this is convenient and quick, but the effect is worse than with a virtual reality headset. This approach is suitable when there are more participants.

Claims (9)

1. A method for combining virtual reality technology with cloud service technology, characterized in that it comprises a scene data collection and uploading step, a data downloading step, a scene mapping step, a positioning step, and a motion mode mapping step;
the scene data collection and uploading step includes: data acquisition is performed in advance on the real entity objects to be virtualized; once acquisition is complete, the data are uploaded to the cloud server and stored;
the data downloading step includes: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal;
the scene mapping step displays, on the virtual reality terminal, a virtual reality view of the virtual scene built from the downloaded data and of the region around the user, and includes a first scene mapping sub-step, which places virtual network elements and real entity objects in a geographic information system (GIS) to form a composite space, and a second scene mapping sub-step, which maps the surrounding environment into the virtual scene;
the geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step includes the following sub-steps:
S111: convert the network elements into GIS entities, a network element being a virtual object that does not exist in reality;
S112: render the composite space as a three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects;
the second scene mapping sub-step includes the following sub-steps:
S121: capture real-scene information of the user's surroundings with a real-scene sensing module;
S122: a computing module extracts real-scene features from the real-scene information, maps them, according to preset mapping relations, to features used to build the virtual scene, and constructs virtual reality scene information from those features;
S123: the virtual reality terminal presents the virtual reality scene information;
the positioning step includes:
S21: initialize the indoor reference points and load the reference point information into the database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: from the collected data queue, calculate the mean RSSI of each AP at the current location;
S24: traverse all reference points; for each AP, judge whether the mean RSSI calculated in step S23 falls within that AP's RSSI interval for a given reference point, and thereby whether that reference point belongs to the AP's candidate set;
S25: compute the intersection of the candidate sets of all APs:
(1) if the intersection contains exactly one reference point, output that reference point's coordinates as the position estimate, and terminate;
(2) if the intersection contains more than one reference point, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with the weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each candidate set and take the center of those centers as the global center; use the Euclidean distance to discard the candidate set whose center is farthest from the global center, then apply sub-steps (1), (2) and (3) of step S25 to the remaining candidate sets until an estimate is obtained, and terminate; if the last layer is reached and no result is obtained, perform sub-step (4);
(4) if the intersection is still empty when sub-step (3) reaches the last layer, use the error distance between the current mean RSSI and the reference points' mean RSSI and, following the minimum-RSSI-error principle, compute the estimate with the weighted k-nearest-neighbor algorithm;
S26: map the positioning result into the three-dimensionally visualized composite space and display the current location in the composite space;
the motion mode mapping step includes the following sub-steps:
S31: place multiple sensing assemblies, associated with the virtual reality terminal, at the joints of the human body;
S32: each sensing assembly sends its information to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
2. The method for combining virtual reality technology with cloud service technology according to claim 1, characterized in that the virtual reality terminal is a virtual reality headset or a mobile terminal.
3. The method for combining virtual reality technology with cloud service technology according to claim 1, characterized in that the positioning step also includes an off-line training phase:
S201: discretize the area to be positioned and uniformly take N positions in it as reference points;
S202: scan the WiFi signal at each reference point from step S201 and record the received signal strength indicator (RSSI) of every AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate for each AP the RSSI mean, variance, and minimum-maximum interval at this reference point, and save these parameters in the database together with the SSID of the corresponding AP;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby building a complete RSSI distribution map of the area to be positioned.
4. The method for combining virtual reality technology with cloud service technology according to claim 1, characterized in that the three-dimensionally visualized composite space is a three-dimensional view of a building.
5. The method for combining virtual reality technology with cloud service technology according to claim 1, characterized in that the viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
6. The method for combining virtual reality technology with cloud service technology according to claim 1, characterized in that the real-scene information captured in step S121 consists of time-series frame data of images of the user's surroundings, and the computing module extracts real-scene features from the real-scene information by performing pattern recognition analysis on the time-series frame data.
7. The method for combining virtual reality technology with cloud service technology according to claim 1, characterized in that the real-scene sensing module includes one or more of: a depth camera sensor, a combined unit of a depth camera sensor and an RGB image sensor, an ultrasonic positioning sensing module, a thermal imaging positioning sensing module, and an electromagnetic positioning sensing module.
8. The method for combining virtual reality technology with cloud service technology according to claim 1, characterized in that the sensing assembly includes one or more of a three-axis acceleration sensor, a three-axis angular rate sensor, and a three-axis geomagnetic sensor.
9. The method for combining virtual reality technology with cloud service technology according to claim 1, characterized in that when the data of a scene are updated on the cloud server, a push message is sent to every virtual reality terminal that has downloaded that scene, reminding the terminal to update.
CN201610150103.5A 2016-03-16 2016-03-16 A method for combining virtual reality technology with cloud service technology Active CN105824416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610150103.5A CN105824416B (en) 2016-03-16 2016-03-16 A method for combining virtual reality technology with cloud service technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610150103.5A CN105824416B (en) 2016-03-16 2016-03-16 A method for combining virtual reality technology with cloud service technology

Publications (2)

Publication Number Publication Date
CN105824416A true CN105824416A (en) 2016-08-03
CN105824416B CN105824416B (en) 2019-09-17

Family

ID=56523462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610150103.5A Active CN105824416B (en) A method for combining virtual reality technology with cloud service technology

Country Status (1)

Country Link
CN (1) CN105824416B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875474A (en) * 2017-02-16 2017-06-20 北京通正设施设备有限公司 A kind of virtual elevator system
CN107102728A (en) * 2017-03-28 2017-08-29 北京犀牛数字互动科技有限公司 Display methods and system based on virtual reality technology
CN107168532A (en) * 2017-05-05 2017-09-15 武汉秀宝软件有限公司 A kind of virtual synchronous display methods and system based on augmented reality
CN107854288A (en) * 2017-11-01 2018-03-30 暨南大学 Ocular disorders monitoring and rehabilitation training glasses based on digital intelligent virtual three-dimensional stereopsis technology
CN108401463A (en) * 2017-08-11 2018-08-14 深圳前海达闼云端智能科技有限公司 Virtual display device, intelligent interaction method and cloud server
CN108514421A (en) * 2018-03-30 2018-09-11 福建幸福家园投资管理有限公司 The method for promoting mixed reality and routine health monitoring
CN109041012A (en) * 2018-08-21 2018-12-18 上海交通大学 Base station selecting method and system based on integrated communication and computing capability
CN109121143A (en) * 2017-06-23 2019-01-01 联芯科技有限公司 A kind of position mark method, terminal and computer readable storage medium
CN109902387A (en) * 2019-03-01 2019-06-18 广联达科技股份有限公司 A kind of method and apparatus of cutting or isolation based on small map
CN110418127A (en) * 2019-07-29 2019-11-05 南京师范大学 Virtual reality fusion device and method based on template pixel under a kind of Web environment
CN113655415A (en) * 2021-08-16 2021-11-16 东北大学 Augmented reality online visualization method for magnetic field distribution
CN114154038A (en) * 2021-11-02 2022-03-08 绘见科技(深圳)有限公司 Method and device for pushing virtual content information in batches, computer equipment and storage medium
CN116597119A (en) * 2022-12-30 2023-08-15 北京津发科技股份有限公司 Man-machine interaction acquisition method, device and system of wearable augmented reality equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279697A (en) * 2010-06-09 2011-12-14 Lg电子株式会社 Mobile terminal and displaying method thereof
US20120214515A1 (en) * 2011-02-23 2012-08-23 Davis Bruce L Mobile Device Indoor Navigation
CN103384358A (en) * 2013-06-25 2013-11-06 云南大学 Indoor positioning method based on virtual reality and WIFI space field strength
CN103810353A (en) * 2014-03-09 2014-05-21 杨智 Real scene mapping system and method in virtual reality
CN103823818A (en) * 2012-11-19 2014-05-28 大连鑫奇辉科技有限公司 Book system on basis of virtual reality
CN104063466A (en) * 2014-06-27 2014-09-24 深圳先进技术研究院 Virtuality-reality integrated three-dimensional display method and virtuality-reality integrated three-dimensional display system
CN104796444A (en) * 2014-01-21 2015-07-22 广州海图克计算机技术有限公司 Digital household scene control management system and method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279697A (en) * 2010-06-09 2011-12-14 Lg电子株式会社 Mobile terminal and displaying method thereof
US20120214515A1 (en) * 2011-02-23 2012-08-23 Davis Bruce L Mobile Device Indoor Navigation
CN103823818A (en) * 2012-11-19 2014-05-28 大连鑫奇辉科技有限公司 Book system on basis of virtual reality
CN103384358A (en) * 2013-06-25 2013-11-06 云南大学 Indoor positioning method based on virtual reality and WIFI space field strength
CN104796444A (en) * 2014-01-21 2015-07-22 广州海图克计算机技术有限公司 Digital household scene control management system and method
CN103810353A (en) * 2014-03-09 2014-05-21 杨智 Real scene mapping system and method in virtual reality
CN104063466A (en) * 2014-06-27 2014-09-24 深圳先进技术研究院 Virtuality-reality integrated three-dimensional display method and virtuality-reality integrated three-dimensional display system

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875474A (en) * 2017-02-16 2017-06-20 北京通正设施设备有限公司 A kind of virtual elevator system
CN107102728A (en) * 2017-03-28 2017-08-29 北京犀牛数字互动科技有限公司 Display methods and system based on virtual reality technology
CN107102728B (en) * 2017-03-28 2021-06-18 北京犀牛数字互动科技有限公司 Display method and system based on virtual reality technology
CN107168532A (en) * 2017-05-05 2017-09-15 武汉秀宝软件有限公司 A kind of virtual synchronous display methods and system based on augmented reality
CN107168532B (en) * 2017-05-05 2020-09-11 武汉秀宝软件有限公司 Virtual synchronous display method and system based on augmented reality
CN109121143A (en) * 2017-06-23 2019-01-01 联芯科技有限公司 A kind of position mark method, terminal and computer readable storage medium
WO2019028855A1 (en) * 2017-08-11 2019-02-14 深圳前海达闼云端智能科技有限公司 Virtual display device, intelligent interaction method, and cloud server
CN108401463A (en) * 2017-08-11 2018-08-14 深圳前海达闼云端智能科技有限公司 Virtual display device, intelligent interaction method and cloud server
CN107854288A (en) * 2017-11-01 2018-03-30 暨南大学 Ocular disorders monitoring and rehabilitation training glasses based on digital intelligent virtual three-dimensional stereopsis technology
CN108514421A (en) * 2018-03-30 2018-09-11 福建幸福家园投资管理有限公司 The method for promoting mixed reality and routine health monitoring
CN109041012A (en) * 2018-08-21 2018-12-18 上海交通大学 Base station selecting method and system based on integrated communication and computing capability
CN109902387A (en) * 2019-03-01 2019-06-18 广联达科技股份有限公司 A kind of method and apparatus of cutting or isolation based on small map
CN109902387B (en) * 2019-03-01 2023-06-09 广联达科技股份有限公司 Method and device for sectioning or isolating based on small map
CN110418127A (en) * 2019-07-29 2019-11-05 南京师范大学 Virtual reality fusion device and method based on template pixel under a kind of Web environment
CN110418127B (en) * 2019-07-29 2021-05-11 南京师范大学 Operation method of pixel template-based virtual-real fusion device in Web environment
CN113655415A (en) * 2021-08-16 2021-11-16 东北大学 Augmented reality online visualization method for magnetic field distribution
CN114154038A (en) * 2021-11-02 2022-03-08 绘见科技(深圳)有限公司 Method and device for pushing virtual content information in batches, computer equipment and storage medium
CN114154038B (en) * 2021-11-02 2024-01-19 绘见科技(深圳)有限公司 Virtual content information batch pushing method and device, computer equipment and storage medium
CN116597119A (en) * 2022-12-30 2023-08-15 北京津发科技股份有限公司 Man-machine interaction acquisition method, device and system of wearable augmented reality equipment

Also Published As

Publication number Publication date
CN105824416B (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN105807931B (en) A kind of implementation method of virtual reality
CN105608746B (en) A method of reality is subjected to Virtual Realization
CN105824416A (en) Method for combining virtual reality technique with cloud service technique
JP7002684B2 (en) Systems and methods for augmented reality and virtual reality
CN105797378A (en) Game video realizing method based on virtual reality technology
CN108401461B (en) Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN105824417A (en) Method for combining people and objects through virtual reality technology
EP2579128B1 (en) Portable device, virtual reality system and method
CN104699247B (en) A kind of virtual reality interactive system and method based on machine vision
KR101229283B1 (en) Method and system for visualising virtual three-dimensional objects
US20150070274A1 (en) Methods and systems for determining 6dof location and orientation of head-mounted display and associated user movements
CN109671118A (en) A kind of more people's exchange methods of virtual reality, apparatus and system
CN105027030A (en) Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing
CN103810353A (en) Real scene mapping system and method in virtual reality
CN104781849A (en) Fast initialization for monocular visual simultaneous localization and mapping (SLAM)
CN106484115A (en) For strengthening the system and method with virtual reality
JP2019537144A (en) Thermal management system for wearable components
CN105094335A (en) Scene extracting method, object positioning method and scene extracting system
JP7546116B2 (en) Systems and methods for augmented reality - Patents.com
CN105074776A (en) In situ creation of planar natural feature targets
CN106352870B (en) A kind of localization method and device of target
US20210190538A1 (en) Location determination and mapping with 3d line junctions
CN105225270B (en) A kind of information processing method and electronic equipment
KR102199772B1 (en) Method for providing 3D modeling data
KR101905272B1 (en) Apparatus for user direction recognition based on beacon cooperated with experiential type content providing apparatus and method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200714

Address after: Room 01, 7th floor, Huaxiong building, No.5, liangcuo Road, Gulou District, Fuzhou City, Fujian Province

Patentee after: FUJIAN DUODUOYUN TECHNOLOGY Co.,Ltd.

Address before: 610000 No. 6, No. 505, D zone, Tianfu Software Park, 599 century South Road, Tianfu District, Chengdu, Sichuan

Patentee before: CHENGDU CHAINSAW INTERACTIVE TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right