CN105824416B - A method of combining virtual reality technology with cloud service technology - Google Patents

A method of combining virtual reality technology with cloud service technology

Info

Publication number
CN105824416B
CN105824416B
Authority
CN
China
Prior art keywords
scene
virtual reality
virtual
information
reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610150103.5A
Other languages
Chinese (zh)
Other versions
CN105824416A (en)
Inventor
赖斌斌
江兰波
樊星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FUJIAN DUODUOYUN TECHNOLOGY Co.,Ltd.
Original Assignee
Chengdu Chainsaw Interactive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Chainsaw Interactive Technology Co Ltd filed Critical Chengdu Chainsaw Interactive Technology Co Ltd
Priority to CN201610150103.5A
Publication of CN105824416A
Application granted
Publication of CN105824416B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method of combining virtual reality technology with cloud service technology, comprising a scene data acquisition and uploading step, a data download step, a scene mapping step, a positioning step and a motion mode mapping step. The invention associates the virtual world with the real world: the user's position is displayed in real time on a minimap inside the virtual world, and users can observe their own movements in real time. Specifically, a real-world building is presented as a three-dimensional view in minimap form and associated with positioning, which is visual and intuitive. Real-scene sensing technology and corresponding computation are used to perceive and analyze the real environment, and selected features of the real environment are mapped into the virtual scene presented to the user, thereby improving the user experience.

Description

A method of combining virtual reality technology with cloud service technology
Technical field
The present invention relates to a method of combining virtual reality technology with cloud service technology.
Background technique
Virtual reality technology is a computer simulation system that creates a virtual world for the user to experience. It uses a computer to generate a simulated environment: an interactive, three-dimensional dynamic view with entity behavior, produced by multi-source information fusion, in which the user is immersed. Virtual reality uses computer simulation to generate a three-dimensional virtual world and provides the user with simulated visual, auditory, tactile and other sensory input, so that the user feels personally present and can observe things in the three-dimensional space in real time and without restriction. Virtual reality is a synthesis of multiple technologies, including real-time three-dimensional computer graphics, wide-angle (wide field of view) stereo display, tracking of the observer's head, eyes and hands, tactile/force feedback, stereo sound, network transmission, and voice input and output.
In virtual reality, when the user moves, the computer immediately performs complex computation and returns an accurate 3D image of the world, producing a sense of presence. The technology integrates computer graphics (CG), computer simulation, artificial intelligence, sensing, display, network parallel processing and other recent developments, and is a high-end simulation system generated with the aid of computer technology.
However, existing virtual reality technology cannot be associated with the real world; users cannot connect the virtual world with the real world, so a sense of distance always arises. Moreover, virtual reality has not yet been combined with cloud services.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide a method of combining virtual reality technology with cloud service technology, which associates the virtual world with the real world while using the cloud to facilitate implementation across multiple virtual reality terminals.
The object of the present invention is achieved through the following technical solution: a method of combining virtual reality technology with cloud service technology, comprising a scene data acquisition and uploading step, a data download step, a scene mapping step, a positioning step and a motion mode mapping step;
The scene data acquisition and uploading step comprises: data of the real entity objects to be virtualized is acquired in advance, and after acquisition is completed it is uploaded to a cloud server for storage;
The data download step comprises: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal;
The scene mapping step is used to present the downloaded data as a virtual scene, together with the region around the user, on the virtual reality terminal, and includes a first scene mapping sub-step, in which virtual network elements and real entity objects are presented within a geographic information system to form a composite space, and a second scene mapping sub-step, in which the surrounding scene is mapped into the virtual scene;
The geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step includes the following sub-steps:
S111: subject the network element to GIS-ization; the network element is a virtual object that does not exist in reality;
S112: subject the composite space to three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the virtual object positions;
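By way of illustration only (this sketch is not part of the patent text), steps S111 to S113 could be organized as below: a purely virtual "network element" is anchored to map coordinates and listed alongside the 3D map asset for the terminal to render. All class, field and file names are hypothetical.

```python
# A minimal sketch, assuming a GIS-style composite space holds the 3D map
# plus virtual objects registered at geographic coordinates (S111-S113).
from dataclasses import dataclass

@dataclass
class GeoPosition:
    longitude: float
    latitude: float
    altitude_m: float   # e.g. height corresponding to a building floor

@dataclass
class NetworkElement:
    """A virtual object that does not exist in reality (e.g. a virtual NPC)."""
    name: str
    position: GeoPosition

class CompositeSpace:
    """Real-world geometry (3D electronic map) plus GIS-registered virtual objects."""
    def __init__(self, map_model_path: str):
        self.map_model_path = map_model_path        # the 3D map asset
        self.elements: list[NetworkElement] = []

    def register(self, element: NetworkElement) -> None:
        # S111: "GIS-ization" -- anchor the virtual object to map coordinates.
        self.elements.append(element)

    def renderables(self):
        # S112/S113: everything the terminal presents after 3D visualization.
        yield self.map_model_path
        yield from self.elements

space = CompositeSpace("mall_building.gltf")        # hypothetical asset name
space.register(NetworkElement("virtual NPC", GeoPosition(104.06, 30.54, 12.0)))
for item in space.renderables():
    print(item)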
The second scene mapping sub-step includes the following sub-steps:
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation module extracts real-scene features from the real-scene information, maps the real-scene features to features used for constructing the virtual scene based on predefined mapping relations, and constructs virtual reality scene information based on the features used for constructing the virtual scene;
S123: the virtual reality terminal presents the virtual reality scene information;
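Again purely as an editorial illustration, a minimal Python sketch of the predefined mapping relations of step S122, assuming the computation module has already reduced the time-series frames to symbolic feature labels; the labels, the mapping table and the helpers are assumptions, not the patent's own.

```python
# Predefined mapping relations: recognized real-scene features -> features
# used to construct the virtual scene (S122). All entries are illustrative.
PREDEFINED_MAPPING = {
    "wall": "stone_rampart",      # a real wall rendered as a fantasy rampart
    "doorway": "castle_gate",
    "pillar": "ancient_column",
}

def extract_real_features(frames: list) -> list[str]:
    """Stand-in for pattern recognition over time-series frame data (S121)."""
    return ["wall", "doorway"]    # placeholder result of the recognition step

def build_virtual_scene(real_features: list[str]) -> list[str]:
    # Map each recognized real-scene feature to its virtual counterpart.
    return [PREDEFINED_MAPPING[f] for f in real_features if f in PREDEFINED_MAPPING]

scene = build_virtual_scene(extract_real_features([]))
print(scene)   # ['stone_rampart', 'castle_gate'] -> presented by the terminal (S123)
```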
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information from the database;
S22: set the queue and filter parameters, and acquire WiFi signal data into the queue;
S23: using the acquired data queue, calculate the RSSI mean corresponding to each AP at the current position;
S24: traverse all reference points; according to whether the RSSI mean calculated in step S23 falls within a given reference point's RSSI interval for the corresponding AP, judge whether that reference point belongs to the decision set of the corresponding AP;
S25: take the intersection of the decision sets of all APs:
(1) if there is only one reference point in the intersection, output that reference point's coordinates as the algorithm's estimate, and terminate;
(2) if there is more than one reference point in the intersection, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, calculate the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, calculate the center of each decision set and take the center of these set centers as the global center; exclude the decision set whose center is farthest from the global center using the Euclidean distance, and apply the intersection operations of sub-steps (1), (2) and (3) of step S25 to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached and still no result is obtained, execute sub-step (4);
(4) if sub-step (3) has reached the last layer and the intersection is still empty, use the error distance between the current RSSI mean and each reference point's RSSI mean and calculate the estimate with a weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the positioning information into the three-dimensionally visualized composite space, and display the current positioning information in the composite space;
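For readers who want a concrete picture of S23 to S25, here is a hedged Python sketch of the decision-set intersection and weighted k-nearest-neighbor fallback described above. The data layout (per-AP intervals, per-point fingerprints and coordinates) is an assumption about what the offline phase stores, not the patent's actual implementation.

```python
# Condensed sketch of the online positioning flow (S23-S25). Assumed inputs:
#   rssi_mean    - np.array, measured RSSI mean per AP at the current position
#   intervals    - intervals[ap] = {ref_id: (lo, hi)} stored RSSI intervals
#   fingerprints - {ref_id: np.array of per-AP RSSI means}
#   coords       - {ref_id: np.array([x, y])}
import numpy as np

def wknn(current, candidates, fingerprints, coords, k=3):
    """Weighted k-nearest-neighbor estimate over candidate reference points."""
    errs = np.array([np.linalg.norm(current - fingerprints[r]) for r in candidates])
    order = np.argsort(errs)[:k]
    w = 1.0 / (errs[order] + 1e-6)          # smaller RSSI error => larger weight
    pts = np.array([coords[candidates[i]] for i in order])
    return (w[:, None] * pts).sum(axis=0) / w.sum()

def locate(rssi_mean, intervals, fingerprints, coords):
    # S24: per-AP decision set -- reference points whose stored interval for
    # that AP contains the currently measured mean.
    sets = [{r for r, (lo, hi) in intervals[ap].items() if lo <= m <= hi}
            for ap, m in enumerate(rssi_mean)]
    sets = [s for s in sets if s]           # drop APs that matched nothing
    while sets:
        common = set.intersection(*sets)    # S25: intersect the decision sets
        if len(common) == 1:                # (1) a unique point is the estimate
            return coords[next(iter(common))]
        if len(common) > 1:                 # (2) several points: weighted kNN
            return wknn(rssi_mean, sorted(common), fingerprints, coords)
        # (3) empty intersection: drop the set whose center lies farthest from
        # the global center (Euclidean distance), then retry with the rest.
        centers = [np.mean([coords[r] for r in s], axis=0) for s in sets]
        gc = np.mean(centers, axis=0)
        far = int(np.argmax([np.linalg.norm(c - gc) for c in centers]))
        sets.pop(far)
    # (4) last layer still empty: weighted kNN over all reference points,
    # following the minimum-RSSI-error principle.
    return wknn(rssi_mean, sorted(fingerprints), fingerprints, coords)
```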
The motion mode mapping step includes the following sub-steps:
S31: arrange multiple sensing packages associated with the virtual reality terminal on the person's joints;
S32: the information of each sensing package is sent to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
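As an illustrative sketch of S31 to S33 (the wire format is invented for the example and is not specified by the patent), a terminal-side parser for one joint-mounted sensing package might look like this:

```python
# Each joint-mounted package streams one packet of 3-axis accelerometer /
# gyroscope / magnetometer readings; the terminal unpacks it and keeps the
# latest reading per joint for the renderer to fold into the scene (S33).
import struct

PACKET = struct.Struct("<B9f")   # joint id + ax,ay,az, gx,gy,gz, mx,my,mz

def parse_packet(raw: bytes) -> dict:
    joint, *v = PACKET.unpack(raw)
    return {"joint": joint, "accel": v[0:3], "gyro": v[3:6], "mag": v[6:9]}

def on_packet(raw: bytes, avatar: dict) -> None:
    sample = parse_packet(raw)
    avatar[sample["joint"]] = sample       # latest sample per joint

avatar_state: dict = {}
demo = PACKET.pack(4, 0.0, 0.0, 9.81, 0.01, 0.0, 0.0, 22.0, -5.0, 40.0)
on_packet(demo, avatar_state)
print(avatar_state[4]["accel"])   # ~[0.0, 0.0, 9.81]: gravity on a still limb
```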
The virtual reality terminal is a virtual reality helmet or a mobile terminal.
The positioning step further includes an offline training step:
S201: discretize the area to be positioned, taking N positions uniformly within the area as reference points;
S202: scan WiFi signals at each reference point of step S201, recording each AP's received signal strength indication (RSSI) over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate each AP's RSSI mean, variance and min-max interval at that reference point, and save these parameters together with the corresponding AP's SSID to the database;
S204: perform the operations of steps S202 and S203 on all reference points until all reference points have been trained, thereby establishing a complete RSSI distribution map of the area to be positioned.
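The offline training loop of S201 to S204 can be pictured with the following Python sketch; scan_wifi() is a hypothetical stand-in for whatever WiFi scanning API the platform actually provides.

```python
# Build the per-AP RSSI statistics stored for one reference point (S202/S203).
import statistics
from collections import defaultdict

def train_reference_point(scan_wifi, n_scans=50):
    samples = defaultdict(list)
    for _ in range(n_scans):                  # S202: scan for a period of time
        for ssid, rssi in scan_wifi().items():
            samples[ssid].append(rssi)
    record = {}
    for ssid, values in samples.items():      # S203: mean, variance, interval
        record[ssid] = {
            "mean": statistics.fmean(values),
            "var": statistics.pvariance(values),
            "min": min(values),
            "max": max(values),
        }
    return record                              # saved to the database with SSIDs

# S201/S204: repeat over all N reference points to build the RSSI radio map:
# radio_map = {p: train_reference_point(scan_wifi) for p in reference_points}
```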
The three-dimensionally visualized composite space is a three-dimensional view of a building.
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
The real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation module extracts real-scene features from the real-scene information by performing pattern recognition analysis on the time-series frame data.
The real-scene sensing module comprises one or a combination of: a depth camera sensor, a combined depth camera and RGB camera sensor, an ultrasonic positioning sensing module, a thermal imaging positioning sensing module and an electromagnetic positioning sensing module.
The sensing package comprises one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor and a three-axis geomagnetic sensor.
When the data of a scene in the cloud server is updated, a message is pushed to the virtual reality terminals that have downloaded that scene, reminding them to update.
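To make the push behavior concrete, here is a minimal sketch of a cloud server that remembers which terminals downloaded each scene and notifies them on update; all class and method names are illustrative assumptions.

```python
# Observer-style update push: terminals that downloaded a scene are reminded
# when that scene's data changes, instead of polling the server.
from collections import defaultdict

class SceneCloud:
    def __init__(self):
        self.scenes: dict = {}                    # scene name -> version
        self.subscribers = defaultdict(set)       # scene name -> terminals

    def download(self, scene: str, terminal: "Terminal") -> int:
        self.subscribers[scene].add(terminal)
        return self.scenes.setdefault(scene, 1)

    def update_scene(self, scene: str) -> None:
        self.scenes[scene] = self.scenes.get(scene, 0) + 1
        for t in self.subscribers[scene]:         # push the reminder
            t.notify(scene, self.scenes[scene])

class Terminal:
    def __init__(self, name: str):
        self.name = name
    def notify(self, scene: str, version: int) -> None:
        print(f"{self.name}: scene '{scene}' updated to v{version}, re-download")

cloud = SceneCloud()
helmet = Terminal("helmet-1")
cloud.download("mall-event", helmet)
cloud.update_scene("mall-event")    # -> helmet-1 is reminded to update
```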
The beneficial effects of the present invention are:
The present invention associates the virtual world with the real world; the user's position is displayed in real time on a minimap in the virtual world, and the user can observe his or her own movements in real time. Meanwhile, using the cloud, scene data is acquired and stored in the cloud server in advance, and a virtual reality terminal only needs to download it.
Specifically, a real-world building is presented as a three-dimensional view in minimap form and associated with positioning, which is visual and intuitive. Real-scene sensing technology and corresponding computation are used to perceive and analyze the real environment, and selected features of the real environment are mapped into the virtual scene presented to the user, improving the user experience. Moreover, the positioning method realizes positioning and three-dimensional position display of moving targets (people, equipment), providing coordinate estimates for the location-based services of virtual reality terminals with higher precision and lower delay (the delay can be set indirectly through the scan period). In addition, the cloud server provides an update push function, improving reliability.
Detailed description of the invention
Fig. 1 is a flowchart of the method of the present invention.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawing:
As shown in Fig. 1, a method of combining virtual reality technology with cloud service technology comprises a scene data acquisition and uploading step, a data download step, a scene mapping step, a positioning step and a motion mode mapping step;
The scene data acquisition and uploading step comprises: data of the real entity objects to be virtualized is acquired in advance, and after acquisition is completed it is uploaded to a cloud server for storage;
The data download step comprises: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal;
The scene mapping step is used to present the downloaded data as a virtual scene, together with the region around the user, on the virtual reality terminal, and includes a first scene mapping sub-step, in which virtual network elements and real entity objects are presented within a geographic information system to form a composite space, and a second scene mapping sub-step, in which the surrounding scene is mapped into the virtual scene;
The geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step includes the following sub-steps:
S111: subject the network element to GIS-ization; the network element is a virtual object that does not exist in reality;
S112: subject the composite space to three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the virtual object positions;
The second scene mapping sub-step includes the following sub-steps:
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation module extracts real-scene features from the real-scene information, maps the real-scene features to features used for constructing the virtual scene based on predefined mapping relations, and constructs virtual reality scene information based on the features used for constructing the virtual scene;
S123: the virtual reality terminal presents the virtual reality scene information;
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information from the database;
S22: set the queue and filter parameters, and acquire WiFi signal data into the queue;
S23: using the acquired data queue, calculate the RSSI mean corresponding to each AP at the current position;
S24: traverse all reference points; according to whether the RSSI mean calculated in step S23 falls within a given reference point's RSSI interval for the corresponding AP, judge whether that reference point belongs to the decision set of the corresponding AP;
S25: take the intersection of the decision sets of all APs:
(1) if there is only one reference point in the intersection, output that reference point's coordinates as the algorithm's estimate, and terminate;
(2) if there is more than one reference point in the intersection, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, calculate the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, calculate the center of each decision set and take the center of these set centers as the global center; exclude the decision set whose center is farthest from the global center using the Euclidean distance, and apply the intersection operations of sub-steps (1), (2) and (3) of step S25 to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached and still no result is obtained, execute sub-step (4);
(4) if sub-step (3) has reached the last layer and the intersection is still empty, use the error distance between the current RSSI mean and each reference point's RSSI mean and calculate the estimate with a weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the positioning information into the three-dimensionally visualized composite space, and display the current positioning information in the composite space;
The motion mode mapping step includes the following sub-steps:
S31: arrange multiple sensing packages associated with the virtual reality terminal on the person's joints;
S32: the information of each sensing package is sent to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
The virtual reality terminal is a virtual reality helmet or a mobile terminal.
The positioning step further includes an offline training step:
S201: discretize the area to be positioned, taking N positions uniformly within the area as reference points;
S202: scan WiFi signals at each reference point of step S201, recording each AP's received signal strength indication (RSSI) over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate each AP's RSSI mean, variance and min-max interval at that reference point, and save these parameters together with the corresponding AP's SSID to the database;
S204: perform the operations of steps S202 and S203 on all reference points until all reference points have been trained, thereby establishing a complete RSSI distribution map of the area to be positioned.
The three-dimensionally visualized composite space is a three-dimensional view of a building.
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
The real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation module extracts real-scene features from the real-scene information by performing pattern recognition analysis on the time-series frame data.
The real-scene sensing module comprises one or a combination of: a depth camera sensor, a combined depth camera and RGB camera sensor, an ultrasonic positioning sensing module, a thermal imaging positioning sensing module and an electromagnetic positioning sensing module.
The sensing package comprises one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor and a three-axis geomagnetic sensor.
When the data of a scene in the cloud server is updated, a message is pushed to the virtual reality terminals that have downloaded that scene, reminding them to update.
This embodiment is applied to a shopping mall event: an event is held in a mall and makes use of virtual reality, and the user needs to find a particular object at a specific position through the method of the invention, for example a virtual NPC.
First, data of the real entity objects to be virtualized is acquired in advance and, after acquisition is completed, uploaded to the cloud server for storage. In the data download step, the user downloads the corresponding scene data from the cloud server through a virtual reality terminal. When the data of a scene in the cloud server is updated, a message is pushed to the virtual reality terminals that have downloaded that scene, reminding them to update.
Afterwards, the user obtains the first scene mapping, in which the downloaded data is presented as a virtual scene: the shape and floors of the entire mall and the specific position of the virtual NPC.
S111: subject the network element to GIS-ization; the network element is a virtual object that does not exist in reality; in this embodiment the network element is the virtual NPC;
S112: subject the composite space to three-dimensional visualization, i.e. obtain the shape and floors of the entire mall, possibly including part of the terrain outside the mall;
S113: the virtual reality terminal presents the three-dimensionally visualized shape and floors of the entire mall and the virtual NPC's position in the mall; in this embodiment this is realized as a minimap occupying a corner of the picture in the virtual reality terminal.
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
Then, the user obtains the second scene mapping, i.e. the virtual reality information of the surrounding environment.
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation module extracts real-scene features from the real-scene information, maps the real-scene features to features used for constructing the virtual scene based on predefined mapping relations, and constructs virtual reality scene information based on the features used for constructing the virtual scene;
S123: the virtual reality terminal presents the virtual reality scene information; in this embodiment this is realized in the form of virtual animation occupying the entire picture except the minimap portion.
Here, the real-scene information captured in step S121 is time-series frame data of images of the user's surroundings; the computation module extracts real-scene features by performing pattern recognition analysis on the time-series frame data.
Then, the user positions himself or herself.
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information from the database;
S22: set the queue and filter parameters, and acquire WiFi signal data into the queue;
S23: using the acquired data queue, calculate the RSSI mean corresponding to each AP at the current position;
S24: traverse all reference points; according to whether the RSSI mean calculated in step S23 falls within a given reference point's RSSI interval for the corresponding AP, judge whether that reference point belongs to the decision set of the corresponding AP;
S25: take the intersection of the decision sets of all APs:
(1) if there is only one reference point in the intersection, output that reference point's coordinates as the algorithm's estimate, and terminate;
(2) if there is more than one reference point in the intersection, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, calculate the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, calculate the center of each decision set and take the center of these set centers as the global center; exclude the decision set whose center is farthest from the global center using the Euclidean distance, and apply the intersection operations of sub-steps (1), (2) and (3) of step S25 to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached and still no result is obtained, execute sub-step (4);
(4) if sub-step (3) has reached the last layer and the intersection is still empty, use the error distance between the current RSSI mean and each reference point's RSSI mean and calculate the estimate with a weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the positioning information into the three-dimensionally visualized composite space, and display the current positioning information in the composite space, i.e. the user's position is displayed in real time on the minimap.
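In this embodiment the position estimate ends up on the minimap, so a small sketch of the projection from world coordinates to minimap pixels may help; the world bounds, minimap size and margin below are assumed values, not taken from the patent.

```python
# Project an estimated world-space (x, y) position into the minimap corner
# of the rendered frame (the embodiment's realization of S26).
def to_minimap(pos_xy, world_min, world_max, map_px=(200, 200), margin=(10, 10)):
    """Map a world-space (x, y) estimate to minimap pixel coordinates."""
    sx = map_px[0] / (world_max[0] - world_min[0])
    sy = map_px[1] / (world_max[1] - world_min[1])
    px = margin[0] + (pos_xy[0] - world_min[0]) * sx
    py = margin[1] + (pos_xy[1] - world_min[1]) * sy
    return int(px), int(py)

# e.g. a 120 m x 80 m mall drawn in a 200 px square at the screen corner:
print(to_minimap((60.0, 40.0), (0.0, 0.0), (120.0, 80.0)))   # -> (110, 110)
```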
Here, the database requires an offline training step:
S201: discretize the area to be positioned, taking N positions uniformly within the area as reference points;
S202: scan WiFi signals at each reference point of step S201, recording each AP's received signal strength indication (RSSI) over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate each AP's RSSI mean, variance and min-max interval at that reference point, and save these parameters together with the corresponding AP's SSID to the database;
S204: perform the operations of steps S202 and S203 on all reference points until all reference points have been trained, thereby establishing a complete RSSI distribution map of the area to be positioned.
Finally, the user's motion mode needs to be reflected in the composite space in real time:
S31: arrange multiple sensing packages associated with the virtual reality terminal on the person's joints;
S32: the information of each sensing package is sent to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
The sensing package comprises one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor and a three-axis geomagnetic sensor.
The user's movements are then reflected in the virtual reality scene information.
After all of the above is completed, the user can start moving toward the virtual NPC.
In this embodiment, the virtual reality terminal is a virtual reality helmet or a mobile terminal, chosen according to the merchant's cost considerations.
If a virtual reality helmet is used, dedicated equipment must be purchased, but the effect is better; the user can put on the helmet to search for the virtual NPC. This approach suits situations with fewer participants.
If a mobile terminal such as a mobile phone or tablet computer is used, the corresponding software must be installed; this is convenient, but the effect is worse than with a virtual reality helmet. This approach suits situations with more participants.

Claims (1)

1. A method of virtually realizing reality, characterized in that it comprises a scene data acquisition and uploading step, a data download step, a scene mapping step, a positioning step, a motion mode mapping step, a specific information triggering step, a particular object positioning step and an authorization step;
The scene data acquisition and uploading step comprises: data of the real entity objects to be virtualized is acquired in advance, and after acquisition is completed it is uploaded to a cloud server for storage;
The data download step comprises: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal;
The scene mapping step is used to present the downloaded data as a virtual scene, together with the region around the user, on the virtual reality terminal, and includes a first scene mapping sub-step, in which virtual network elements and real entity objects are presented within a geographic information system to form a composite space, and a second scene mapping sub-step, in which the surrounding scene is mapped into the virtual scene;
The geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step includes the following sub-steps:
S111: subject the network element to GIS-ization; the network element is a virtual object that does not exist in reality;
S112: subject the composite space to three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the virtual object positions;
The second scene mapping sub-step includes the following sub-steps:
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation module extracts real-scene features from the real-scene information, maps the real-scene features to features used for constructing the virtual scene based on predefined mapping relations, and constructs virtual reality scene information based on the features used for constructing the virtual scene;
S123: the virtual reality terminal presents the virtual reality scene information;
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information from the database;
S22: set the queue and filter parameters, and acquire WiFi signal data into the queue;
S23: using the acquired data queue, calculate the RSSI mean corresponding to each AP at the current position;
S24: traverse all reference points; according to whether the RSSI mean calculated in step S23 falls within a given reference point's RSSI interval for the corresponding AP, judge whether that reference point belongs to the decision set of the corresponding AP;
S25: take the intersection of the decision sets of all APs:
(1) if there is only one reference point in the intersection, output that reference point's coordinates as the algorithm's estimate, and terminate;
(2) if there is more than one reference point in the intersection, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, calculate the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, calculate the center of each decision set and take the center of these set centers as the global center; exclude the decision set whose center is farthest from the global center using the Euclidean distance, and apply the intersection operations of sub-steps (1), (2) and (3) of step S25 to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached and still no result is obtained, execute sub-step (4);
(4) if sub-step (3) has reached the last layer and the intersection is still empty, use the error distance between the current RSSI mean and each reference point's RSSI mean and calculate the estimate with a weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the positioning information into the three-dimensionally visualized composite space, and display the current positioning information in the composite space;
The motion mode mapping step includes the following sub-steps:
S31: arrange multiple sensing packages associated with the virtual reality terminal on the person's joints;
S32: the information of each sensing package is sent to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information;
The specific information triggering step comprises: when the user reaches a specific region, the virtual reality terminal triggers a particular message, wherein reaching the specific region is judged by positioning or by preset specified information contained in the picture frames presented by the virtual reality terminal;
The particular object is a physical object, and the particular object positioning step includes:
S41: arrange a positioning module on the particular object;
S42: display the particular object in the composite space according to the method of steps S21 to S26;
The authorization step comprises: after the server authorizes the virtual reality terminals, multiple virtual reality terminals appear in the same composite space, i.e. a user's virtual reality terminal displays the positioning information of multiple other users' virtual reality terminals;
The virtual reality terminal is a virtual reality helmet or a mobile terminal;
The positioning step further includes an offline training step:
S201: discretize the area to be positioned, taking N positions uniformly within the area as reference points;
S202: scan WiFi signals at each reference point of step S201, recording each AP's received signal strength indication (RSSI) over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate each AP's RSSI mean, variance and min-max interval at that reference point, and save these parameters together with the corresponding AP's SSID to the database;
S204: perform the operations of steps S202 and S203 on all reference points until all reference points have been trained, thereby establishing a complete RSSI distribution map of the area to be positioned;
The real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation module extracts real-scene features from the real-scene information by performing pattern recognition analysis on the time-series frame data;
The real-scene sensing module comprises one or a combination of: a depth camera sensor, a combined depth camera and RGB camera sensor, an ultrasonic positioning sensing module, a thermal imaging positioning sensing module and an electromagnetic positioning sensing module;
The sensing package comprises one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor and a three-axis geomagnetic sensor;
When the data of a scene in the cloud server is updated, a message is pushed to the virtual reality terminals that have downloaded that scene, reminding them to update;
The specific information triggering step includes the following sub-steps:
S41: judge whether the current game picture frame contains the specified information of a target person;
S42: if the current game picture frame contains the specified information of the target person, obtain the display position of the specified information;
S43: add a specified animation at the display position of the specified information in the current game picture frame;
The particular object positioning step further includes a sub-step S43: when the user carries the particular object and moves, this is displayed in the composite space; here, judging that a person carries the particular object includes judging whether the distance between a sensor arranged on the hand and the relative position of the particular object stays within a certain range over a period of time;
The three-axis acceleration sensor is a BMA250, the three-axis angular velocity sensor an MPU6050, and the three-axis geomagnetic sensor an HMC5883.
CN201610150103.5A 2016-03-16 2016-03-16 A method of combining virtual reality technology with cloud service technology Active CN105824416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610150103.5A CN105824416B (en) 2016-03-16 2016-03-16 A method of combining virtual reality technology with cloud service technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610150103.5A CN105824416B (en) 2016-03-16 2016-03-16 A method of combining virtual reality technology with cloud service technology

Publications (2)

Publication Number Publication Date
CN105824416A CN105824416A (en) 2016-08-03
CN105824416B true CN105824416B (en) 2019-09-17

Family

ID=56523462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610150103.5A Active CN105824416B (en) 2016-03-16 2016-03-16 A method of combining virtual reality technology with cloud service technology

Country Status (1)

Country Link
CN (1) CN105824416B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875474A (en) * 2017-02-16 2017-06-20 北京通正设施设备有限公司 A kind of virtual elevator system
CN107102728B (en) * 2017-03-28 2021-06-18 北京犀牛数字互动科技有限公司 Display method and system based on virtual reality technology
CN107168532B (en) * 2017-05-05 2020-09-11 武汉秀宝软件有限公司 Virtual synchronous display method and system based on augmented reality
CN109121143A (en) * 2017-06-23 2019-01-01 联芯科技有限公司 A kind of position mark method, terminal and computer readable storage medium
CN108401463A (en) * 2017-08-11 2018-08-14 深圳前海达闼云端智能科技有限公司 Virtual display device, intelligent interaction method and cloud server
CN107854288A (en) * 2017-11-01 2018-03-30 暨南大学 Ocular disorders monitoring and rehabilitation training glasses based on digital intelligent virtual three-dimensional stereopsis technology
CN108514421A (en) * 2018-03-30 2018-09-11 福建幸福家园投资管理有限公司 The method for promoting mixed reality and routine health monitoring
CN109041012A (en) * 2018-08-21 2018-12-18 上海交通大学 Base station selecting method and system based on integrated communication and computing capability
CN109902387B (en) * 2019-03-01 2023-06-09 广联达科技股份有限公司 Method and device for sectioning or isolating based on small map
CN110418127B (en) * 2019-07-29 2021-05-11 南京师范大学 Operation method of pixel template-based virtual-real fusion device in Web environment
CN113655415B (en) * 2021-08-16 2023-01-17 东北大学 Augmented reality online visualization method for magnetic field distribution
CN114154038B (en) * 2021-11-02 2024-01-19 绘见科技(深圳)有限公司 Virtual content information batch pushing method and device, computer equipment and storage medium
CN116597119A (en) * 2022-12-30 2023-08-15 北京津发科技股份有限公司 Man-machine interaction acquisition method, device and system of wearable augmented reality equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279697A (en) * 2010-06-09 2011-12-14 Lg电子株式会社 Mobile terminal and displaying method thereof
CN103384358A (en) * 2013-06-25 2013-11-06 云南大学 Indoor positioning method based on virtual reality and WIFI space field strength
CN103810353A (en) * 2014-03-09 2014-05-21 杨智 Real scene mapping system and method in virtual reality
CN103823818A (en) * 2012-11-19 2014-05-28 大连鑫奇辉科技有限公司 Book system on basis of virtual reality
CN104063466A (en) * 2014-06-27 2014-09-24 深圳先进技术研究院 Virtuality-reality integrated three-dimensional display method and virtuality-reality integrated three-dimensional display system
CN104796444A (en) * 2014-01-21 2015-07-22 广州海图克计算机技术有限公司 Digital household scene control management system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8660581B2 (en) * 2011-02-23 2014-02-25 Digimarc Corporation Mobile device indoor navigation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279697A (en) * 2010-06-09 2011-12-14 Lg电子株式会社 Mobile terminal and displaying method thereof
CN103823818A (en) * 2012-11-19 2014-05-28 大连鑫奇辉科技有限公司 Book system on basis of virtual reality
CN103384358A (en) * 2013-06-25 2013-11-06 云南大学 Indoor positioning method based on virtual reality and WIFI space field strength
CN104796444A (en) * 2014-01-21 2015-07-22 广州海图克计算机技术有限公司 Digital household scene control management system and method
CN103810353A (en) * 2014-03-09 2014-05-21 杨智 Real scene mapping system and method in virtual reality
CN104063466A (en) * 2014-06-27 2014-09-24 深圳先进技术研究院 Virtuality-reality integrated three-dimensional display method and virtuality-reality integrated three-dimensional display system

Also Published As

Publication number Publication date
CN105824416A (en) 2016-08-03

Similar Documents

Publication Publication Date Title
CN105807931B (en) An implementation method of virtual reality
CN105824416B (en) A method of combining virtual reality technology with cloud service technology
CN105608746B (en) A method of virtually realizing reality
AU2023200677B2 (en) System and method for augmented and virtual reality
US11262841B2 (en) Wireless wrist computing and control device and method for 3D imaging, mapping, networking and interfacing
JP5934368B2 (en) Portable device, virtual reality system and method
CN105824417B (en) human-object combination method adopting virtual reality technology
CN105797378A (en) Game video realizing method based on virtual reality technology
JP2023126474A (en) Systems and methods for augmented reality
CN105797379A (en) Game video processing method based on virtual reality technology

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200714

Address after: Room 01, 7th floor, Huaxiong building, No.5, liangcuo Road, Gulou District, Fuzhou City, Fujian Province

Patentee after: FUJIAN DUODUOYUN TECHNOLOGY Co.,Ltd.

Address before: 610000 No. 6, No. 505, D zone, Tianfu Software Park, 599 century South Road, Tianfu District, Chengdu, Sichuan

Patentee before: CHENGDU CHAINSAW INTERACTIVE TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right