CN105807931B - An implementation method of virtual reality - Google Patents

An implementation method of virtual reality

Info

Publication number
CN105807931B
CN105807931B · Application CN201610150102.0A
Authority
CN
China
Prior art keywords
scene
virtual reality
virtual
information
reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610150102.0A
Other languages
Chinese (zh)
Other versions
CN105807931A (en)
Inventor
赖斌斌
江兰波
樊星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XUZHOU SHUOBO ELECTRONIC TECHNOLOGY Co.,Ltd.
Original Assignee
Chengdu Chainsaw Interactive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Chainsaw Interactive Technology Co Ltd
Priority to CN201610150102.0A priority Critical patent/CN105807931B/en
Publication of CN105807931A publication Critical patent/CN105807931A/en
Application granted granted Critical
Publication of CN105807931B publication Critical patent/CN105807931B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method for mapping reality into virtual reality. It comprises a scene-data acquisition and uploading step, a data download step, a scene mapping step, a positioning step, a motion-mode mapping step, a specific-information triggering step, a particular-artifact positioning step, and an authorization step. The invention associates the virtual world with the real world: the user's position is displayed in real time on a mini-map of the virtual world, and the user can observe his own movements in real time. Specifically, buildings of the real world are presented as three-dimensional views in the mini-map and associated with positioning, which is intuitive and visual. Real-scene sensing and corresponding computation are used to understand and analyze the real environment, mapping features of the real environment into the virtual scene presented to the user, thereby improving the user experience.

Description

An implementation method of virtual reality
Technical field
The present invention relates to a method for mapping reality into virtual reality.
Background art
Virtual reality technology is a computer simulation system with which a virtual world can be created and experienced. It uses a computer to generate a simulated environment: an interactive, three-dimensional dynamic scene with entity behavior, based on multi-source information fusion, that immerses the user in the environment. Virtual reality uses computer simulation to generate a three-dimensional virtual world, providing the user with simulations of vision, hearing, touch, and other senses, so that the user feels personally present and can observe things in the three-dimensional space in real time and without limitation. Virtual reality is a synthesis of multiple technologies, including real-time 3D computer graphics, wide-angle (wide field of view) stereoscopic display, tracking of the observer's head, eyes, and hands, haptic/force feedback, stereo sound, network transmission, and voice input and output.
In virtual reality technology, when the user moves, the computer immediately performs complex computations and returns accurate 3D images of the world to create a sense of presence. This technology integrates the latest achievements of computer graphics (CG), computer simulation, artificial intelligence, sensing, display, and network parallel processing; it is a high-tech simulation system generated with the aid of computer technology.
However, existing virtual reality technology cannot be associated with the real world; the user cannot connect the virtual world with the real world, and therefore always has a sense of distance.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide a method for mapping reality into virtual reality, associating the virtual world with the real world.
The object of the present invention is achieved through the following technical solution: a method for mapping reality into virtual reality, comprising a scene-data acquisition and uploading step, a data download step, a scene mapping step, a positioning step, a motion-mode mapping step, a specific-information triggering step, a particular-artifact positioning step, and an authorization step;
The scene-data acquisition and uploading step comprises: the user collects data on the real physical objects to be virtualized in advance; after the collection is complete, the data are uploaded to a cloud server and saved;
The data download step comprises: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal;
The scene mapping step displays the virtual scene of the downloaded data and the area around the user in virtual reality on the virtual reality terminal. It comprises a first scene mapping sub-step, which displays virtual network elements and real physical objects within a geographic information system to form a composite space, and a second scene mapping sub-step, which maps the surrounding scene into a virtual scene;
The geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step comprises the following sub-steps:
S111: apply GIS processing to the network element, the network element being a virtual object that does not exist in reality;
S112: render the composite space with three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects;
The second scene mapping sub-step comprises the following sub-steps:
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation module extracts real-scene features from the real-scene information, maps them to features for constructing the virtual scene according to preset mapping relations, and constructs virtual-reality scene information from those features;
S123: the virtual reality terminal presents the virtual-reality scene information (a minimal sketch of the feature mapping of S122 follows this list);
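The mapping of S122 can be illustrated with a minimal sketch. The feature names and the mapping table below are invented for illustration; the patent specifies only that preset mapping relations translate real-scene features into features used to construct the virtual scene:

```python
# Minimal sketch of the second scene mapping sub-step (S121-S123).
# The class of features and the mapping table are illustrative
# assumptions, not part of the patent text.

FEATURE_MAP = {
    # preset mapping relations: real-scene feature -> virtual-scene feature
    "wall":   "stone_rampart",
    "door":   "castle_gate",
    "person": "npc_avatar",
}

def build_virtual_scene(real_features):
    """S122: map extracted real-scene features to virtual-scene features."""
    virtual_features = []
    for name, position in real_features:
        if name in FEATURE_MAP:
            virtual_features.append((FEATURE_MAP[name], position))
    return virtual_features

# S121 would supply e.g. [("wall", (1.0, 0.0)), ("door", (2.5, 0.0))];
# S123 then renders the returned virtual features on the terminal.
print(build_virtual_scene([("wall", (1.0, 0.0)), ("door", (2.5, 0.0))]))
```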
The positioning step comprises:
S21: initialize the indoor reference points and load the reference-point information from the database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: using the collected data queue, calculate the RSSI mean of each AP at the current position;
S24: traverse all reference points; according to whether the RSSI mean calculated in step S23 falls within a reference point's RSSI interval for the corresponding AP, decide whether that reference point belongs to that AP's decision set;
S25: take the intersection of the decision sets of all APs:
(1) if the intersection contains exactly one reference point, output its coordinates as the algorithm's estimate, and terminate;
(2) if the intersection contains more than one reference point, compute the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each decision set and take the center of those centers as the global center; using the Euclidean distance, exclude the decision set whose center is farthest from the global center, and apply the intersection operations of sub-steps (1), (2) and (3) to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached without a result, execute sub-step (4);
(4) if the intersection is still empty when sub-step (3) reaches the last layer, use the error distance between the current RSSI mean and the reference-point RSSI means and compute the estimate with the weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the positioning information onto the three-dimensionally visualized composite space and display the current position in the composite space (an illustrative code sketch of this online phase follows this list);
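The online phase of S21 to S26 can be sketched as follows. The data layout, helper names, and the value of k are assumptions; the patent fixes only the decision-set logic, the Euclidean exclusion rule, and the weighted k-nearest-neighbor estimate:

```python
import math
from collections import defaultdict

# Hedged sketch of the online positioning step (S21-S26).
# reference_points: {rp_id: {"coord": (x, y), "ap": {ssid: (mean, lo, hi)}}}
# current_means:    {ssid: measured RSSI mean at the current position} (S23)

def decision_sets(reference_points, current_means):
    """S24: for each AP, collect reference points whose RSSI interval
    contains the currently measured mean for that AP."""
    sets = defaultdict(set)
    for rp_id, rp in reference_points.items():
        for ssid, (_, lo, hi) in rp["ap"].items():
            if ssid in current_means and lo <= current_means[ssid] <= hi:
                sets[ssid].add(rp_id)
    return sets

def rssi_error(rp, current_means):
    """Norm of the RSSI error vector between a reference point's stored
    means and the current measurement."""
    return math.sqrt(sum((rp["ap"][s][0] - m) ** 2
                         for s, m in current_means.items() if s in rp["ap"]))

def weighted_knn(candidates, reference_points, current_means, k=3):
    """Sub-step (2): weighted k-nearest-neighbor estimate over candidates."""
    scored = sorted((rssi_error(reference_points[c], current_means), c)
                    for c in candidates)[:k]
    weights = [1.0 / (err + 1e-6) for err, _ in scored]
    total = sum(weights)
    x = sum(w * reference_points[c]["coord"][0]
            for w, (_, c) in zip(weights, scored)) / total
    y = sum(w * reference_points[c]["coord"][1]
            for w, (_, c) in zip(weights, scored)) / total
    return (x, y)

def estimate_position(reference_points, current_means, k=3):
    """S25: intersect the per-AP decision sets, shrinking the set of APs
    whenever the intersection is empty (sub-steps (1)-(4))."""
    active = dict(decision_sets(reference_points, current_means))
    while active:
        inter = set.intersection(*active.values())
        if len(inter) == 1:                          # sub-step (1)
            return reference_points[inter.pop()]["coord"]
        if len(inter) > 1:                           # sub-step (2)
            return weighted_knn(inter, reference_points, current_means, k)
        if len(active) == 1:                         # last layer, still empty
            break
        # sub-step (3): drop the decision set whose center lies farthest
        # from the global center (Euclidean distance)
        centers = {}
        for ssid, rps in active.items():
            pts = [reference_points[r]["coord"] for r in rps]
            centers[ssid] = (sum(p[0] for p in pts) / len(pts),
                             sum(p[1] for p in pts) / len(pts))
        gx = sum(c[0] for c in centers.values()) / len(centers)
        gy = sum(c[1] for c in centers.values()) / len(centers)
        farthest = max(centers, key=lambda s: math.hypot(centers[s][0] - gx,
                                                         centers[s][1] - gy))
        del active[farthest]
    # sub-step (4): fall back to weighted kNN over all reference points
    return weighted_knn(list(reference_points), reference_points,
                        current_means, k)
```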
The motion-mode mapping step comprises the following sub-steps:
S31: attach multiple sensing assemblies associated with the virtual reality terminal to the person's joints;
S32: send the information of each sensing assembly to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual-reality scene information (a packet-parsing sketch follows this list);
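A minimal sketch of S32 and S33, assuming each sensing assembly streams one JSON packet per sample; the packet fields are illustrative and not specified by the patent:

```python
import json

# Illustrative parsing of one joint-mounted sensing-assembly packet (S33).
# The field names below are assumptions made for this sketch.

def parse_sensor_packet(raw):
    """Parse one packet streamed from a sensing assembly (S32 -> S33)."""
    pkt = json.loads(raw)
    return {
        "joint": pkt["joint"],             # e.g. "left_knee"
        "accel": tuple(pkt["accel"]),      # three-axis acceleration
        "gyro":  tuple(pkt["gyro"]),       # three-axis angular rate
        "mag":   tuple(pkt["mag"]),        # three-axis geomagnetic reading
    }

sample = ('{"joint": "left_knee", "accel": [0.1, 9.8, 0.2],'
          ' "gyro": [0, 0, 1], "mag": [30, 5, -12]}')
print(parse_sensor_packet(sample)["joint"])   # -> left_knee
```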
The specific-information triggering step comprises: when the user reaches a specific region, the virtual reality terminal triggers a specific message, where arrival at the specific region is determined either by positioning or by the presence of preset specified information in an image frame presented by the virtual reality terminal;
The particular artifact is a physical object, and the particular-artifact positioning step comprises:
S41: install a positioning module in the particular artifact;
S42: display the particular artifact in the composite space according to the method of steps S21 to S26;
The authorization step comprises: after the server authorizes the virtual reality terminals, multiple virtual reality terminals appear in the same composite space, i.e. the user's virtual reality terminal displays the positioning information of the virtual reality terminals of multiple other users.
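A toy sketch of the authorization step, assuming a server-side registry keyed by terminal id; all names here are illustrative assumptions, not the patented implementation:

```python
# Sketch: once terminals are authorized into the same composite space,
# each terminal can query the positions of the others.

class CompositeSpaceServer:
    def __init__(self):
        self.authorized = {}        # terminal_id -> last known position

    def authorize(self, terminal_id):
        self.authorized[terminal_id] = None

    def update_position(self, terminal_id, position):
        if terminal_id in self.authorized:
            self.authorized[terminal_id] = position

    def peers_of(self, terminal_id):
        """Positions of all other authorized terminals in the space."""
        return {t: p for t, p in self.authorized.items()
                if t != terminal_id and p is not None}

server = CompositeSpaceServer()
server.authorize("hmd-1")
server.authorize("phone-2")
server.update_position("phone-2", (12.0, 3.5))
print(server.peers_of("hmd-1"))   # -> {'phone-2': (12.0, 3.5)}
```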
The virtual reality terminal is a virtual reality helmet or a mobile terminal.
The positioning step further includes an offline training step:
S201: discretize the area to be positioned, taking N uniformly distributed positions in the area as reference points;
S202: at each reference point of step S201, scan the WiFi signals and record the received signal strength indication (RSSI) values of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202 and calculate each AP's RSSI mean, variance, and minimum-maximum interval at that reference point, saving these parameters together with the SSID of the corresponding AP in the database;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby establishing a complete RSSI distribution map of the area to be positioned (an aggregation sketch follows this list).
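A sketch of the per-reference-point aggregation of S202 and S203, assuming the platform supplies WiFi scans as dictionaries mapping SSID to RSSI:

```python
import statistics
from collections import defaultdict

# Sketch of the offline training step (S201-S204). How scans are obtained
# is platform-specific and assumed here; one scan = {ssid: rssi}.

def train_reference_point(scans):
    """S202-S203: aggregate a period of scans at one reference point into
    per-AP mean, variance, and min-max interval."""
    samples = defaultdict(list)
    for scan in scans:
        for ssid, rssi in scan.items():
            samples[ssid].append(rssi)
    return {ssid: {"mean": statistics.mean(v),
                   "var": statistics.pvariance(v),
                   "interval": (min(v), max(v))}
            for ssid, v in samples.items()}

# S204: repeating this for all N reference points yields the complete
# RSSI distribution map of the area to be positioned.
fingerprint = train_reference_point([{"ap1": -50, "ap2": -70},
                                     {"ap1": -52, "ap2": -68}])
print(fingerprint["ap1"]["interval"])        # -> (-52, -50)
```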
The composite space after three-dimensional visualization is a three-dimensional view of the building.
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
The real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation module extracts real-scene features from the real-scene information by performing pattern-recognition analysis on the time-series frame data.
The real-scene sensing module comprises one or more of: a depth camera sensor, a combination of a depth camera sensor and an RGB camera sensor, an ultrasonic positioning sensing module, a thermal-imaging positioning sensing module, and an electromagnetic positioning sensing module.
The sensing assembly comprises one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor, and a three-axis geomagnetic sensor.
The specific-information triggering step comprises the following sub-steps:
S41: determine whether the current game image frame contains the specified information of the target person;
S42: if the current game image frame contains the specified information of the target person, obtain the display position of the specified information;
S43: add a specified animation at the display position of the specified information in the current game image frame (a recognition sketch follows this list).
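The patent does not name a recognition method for S41; the sketch below assumes OpenCV template matching as one plausible realization, with bounds checks omitted for brevity:

```python
import cv2

# Hedged sketch of S41-S43: detect the specified information in the frame
# and overlay an animation at its display position. matchTemplate is an
# assumed technique, not one named by the patent.

def find_specified_info(frame, template, threshold=0.8):
    """S41-S42: return the display position of the specified information
    in the current game frame, or None if it is absent."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None

def overlay_animation(frame, animation_frame, position):
    """S43: draw the current animation frame at the detected position
    (clipping at the frame border is omitted in this sketch)."""
    x, y = position
    h, w = animation_frame.shape[:2]
    frame[y:y + h, x:x + w] = animation_frame
    return frame
```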
The particular-artifact positioning step further includes a sub-step S43: when the user carries the particular artifact and moves, this is displayed in the composite space. The step of determining that a person carries the particular artifact comprises determining whether the distance between a sensor attached to the hand and the particular artifact stays within a certain range for a period of time.
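The carry test can be sketched as a sliding window over distance samples; the window length and distance threshold below are illustrative assumptions:

```python
from collections import deque

# Sketch: the hand-to-artifact distance must stay within a range for a
# period of time before the artifact counts as "carried".

class CarryDetector:
    def __init__(self, max_distance=0.5, window=30):
        self.max_distance = max_distance        # metres (assumed value)
        self.window = deque(maxlen=window)      # last N distance samples

    def update(self, hand_pos, artifact_pos):
        dist = ((hand_pos[0] - artifact_pos[0]) ** 2 +
                (hand_pos[1] - artifact_pos[1]) ** 2) ** 0.5
        self.window.append(dist)
        # carried only once the whole window stays within range
        return (len(self.window) == self.window.maxlen and
                all(d <= self.max_distance for d in self.window))
```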
The beneficial effects of the present invention are:
The present invention associates the virtual world with the real world: the user's position is displayed in real time on a mini-map of the virtual world, and the user can observe his own movements in real time. Meanwhile, using the cloud, scene data are collected and stored in the cloud server in advance, and the virtual reality terminal only needs to download them.
Specifically, buildings of the real world are presented as three-dimensional views in the mini-map and associated with positioning, which is intuitive and visual. Real-scene sensing and corresponding computation are used to understand and analyze the real environment, mapping features of the real environment into the virtual scene presented to the user, which improves the user experience. Moreover, the positioning method realizes the positioning and three-dimensional display of moving targets (people, equipment), providing coordinate estimates for the location-based services of the virtual reality terminal with higher precision and lower delay (the delay can be set indirectly through the scan period). In addition, the cloud server provides an update-push function, improving reliability. The invention also includes specific-information triggering: specific information is triggered when the user moves into a particular range. The invention further associates real-world objects with virtual reality for display, and supports display across multiple virtual reality terminals.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
As shown in Fig. 1, a method for mapping reality into virtual reality comprises a scene-data acquisition and uploading step, a data download step, a scene mapping step, a positioning step, a motion-mode mapping step, a specific-information triggering step, a particular-artifact positioning step, and an authorization step.
The scene-data acquisition and uploading step comprises: the user collects data on the real physical objects to be virtualized in advance; after the collection is complete, the data are uploaded to a cloud server and saved.
The data download step comprises: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal.
The scene mapping step displays the virtual scene of the downloaded data and the area around the user in virtual reality on the virtual reality terminal. It comprises a first scene mapping sub-step, which displays virtual network elements and real physical objects within a geographic information system to form a composite space, and a second scene mapping sub-step, which maps the surrounding scene into a virtual scene.
The geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step comprises the following sub-steps:
S111: apply GIS processing to the network element, the network element being a virtual object that does not exist in reality;
S112: render the composite space with three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects.
The second scene mapping sub-step comprises the following sub-steps:
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation module extracts real-scene features from the real-scene information, maps them to features for constructing the virtual scene according to preset mapping relations, and constructs virtual-reality scene information from those features;
S123: the virtual reality terminal presents the virtual-reality scene information.
The positioning step comprises:
S21: initialize the indoor reference points and load the reference-point information from the database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: using the collected data queue, calculate the RSSI mean of each AP at the current position;
S24: traverse all reference points; according to whether the RSSI mean calculated in step S23 falls within a reference point's RSSI interval for the corresponding AP, decide whether that reference point belongs to that AP's decision set;
S25: take the intersection of the decision sets of all APs:
(1) if the intersection contains exactly one reference point, output its coordinates as the algorithm's estimate, and terminate;
(2) if the intersection contains more than one reference point, compute the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each decision set and take the center of those centers as the global center; using the Euclidean distance, exclude the decision set whose center is farthest from the global center, and apply the intersection operations of sub-steps (1), (2) and (3) to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached without a result, execute sub-step (4);
(4) if the intersection is still empty when sub-step (3) reaches the last layer, use the error distance between the current RSSI mean and the reference-point RSSI means and compute the estimate with the weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the positioning information onto the three-dimensionally visualized composite space and display the current position in the composite space.
The motion-mode mapping step comprises the following sub-steps:
S31: attach multiple sensing assemblies associated with the virtual reality terminal to the person's joints;
S32: send the information of each sensing assembly to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual-reality scene information.
The specific-information triggering step comprises: when the user reaches a specific region, the virtual reality terminal triggers a specific message, where arrival at the specific region is determined either by positioning or by the presence of preset specified information in an image frame presented by the virtual reality terminal.
The particular artifact is a physical object, and the particular-artifact positioning step comprises:
S41: install a positioning module in the particular artifact;
S42: display the particular artifact in the composite space according to the method of steps S21 to S26.
The authorization step comprises: after the server authorizes the virtual reality terminals, multiple virtual reality terminals appear in the same composite space, i.e. the user's virtual reality terminal displays the positioning information of the virtual reality terminals of multiple other users.
The virtual reality terminal is a virtual reality helmet or a mobile terminal.
The positioning step further includes an offline training step:
S201: discretize the area to be positioned, taking N uniformly distributed positions in the area as reference points;
S202: at each reference point of step S201, scan the WiFi signals and record the received signal strength indication (RSSI) values of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202 and calculate each AP's RSSI mean, variance, and minimum-maximum interval at that reference point, saving these parameters together with the SSID of the corresponding AP in the database;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby establishing a complete RSSI distribution map of the area to be positioned.
The composite space after three-dimensional visualization is a three-dimensional view of the building.
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
The real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation module extracts real-scene features from the real-scene information by performing pattern-recognition analysis on the time-series frame data.
The real-scene sensing module comprises one or more of: a depth camera sensor, a combination of a depth camera sensor and an RGB camera sensor, an ultrasonic positioning sensing module, a thermal-imaging positioning sensing module, and an electromagnetic positioning sensing module.
The sensing assembly comprises one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor, and a three-axis geomagnetic sensor.
When the data of a scene in the cloud server are updated, a message is pushed to the virtual reality terminals that have downloaded that scene, reminding them to update (a minimal sketch of this push behaviour follows).
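A minimal sketch of the update push, assuming a notify callback stands in for the real transport to the terminal (e.g. a push channel); the class and method names are illustrative:

```python
# Sketch: when a scene's data change, the cloud server notifies every
# terminal that has downloaded that scene.

class CloudServer:
    def __init__(self):
        self.downloads = {}     # scene_id -> set of terminal ids

    def record_download(self, scene_id, terminal_id):
        self.downloads.setdefault(scene_id, set()).add(terminal_id)

    def update_scene(self, scene_id, notify):
        """Push an update reminder to all terminals holding scene_id."""
        for terminal_id in self.downloads.get(scene_id, ()):
            notify(terminal_id,
                   f"scene {scene_id} updated, please re-download")

cloud = CloudServer()
cloud.record_download("mall-1", "hmd-1")
cloud.update_scene("mall-1", lambda t, msg: print(t, msg))
```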
The specific-information triggering step comprises the following sub-steps:
S41: determine whether the current game image frame contains the specified information of the target person;
S42: if the current game image frame contains the specified information of the target person, obtain the display position of the specified information;
S43: add a specified animation at the display position of the specified information in the current game image frame.
This embodiment is applied to a shopping-mall event: a mall holds an event that uses virtual reality, and the user must find a particular artifact at a specific position by the method of the invention, for example a virtual NPC.
In the first step, the user collects data on the real physical objects to be virtualized in advance; after the collection is complete, the data are uploaded to the cloud server and saved. The user then downloads the corresponding scene data from the cloud server through the virtual reality terminal. When the data of a scene in the cloud server are updated, a message is pushed to the virtual reality terminals that have downloaded that scene, reminding them to update.
In the second step, the user obtains the first scene mapping, i.e. the shape and floors of the entire mall and the specific position of the virtual NPC.
S111: apply GIS processing to the network element, the network element being a virtual object that does not exist in reality; in this embodiment the network element is the virtual NPC;
S112: render the composite space with three-dimensional visualization, i.e. obtain the shape and floors of the entire mall, optionally including part of the terrain outside the mall;
S113: the virtual reality terminal presents the three-dimensionally visualized shape and floors of the entire mall and the position of the virtual NPC in the mall; in this embodiment this is realized as a mini-map that occupies a corner of the picture in the virtual reality terminal (an overlay sketch follows this list).
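A sketch of the corner mini-map overlay of S113, assuming the terminal picture and the mini-map are image arrays; the scale and margin values are illustrative assumptions:

```python
import cv2
import numpy as np

# Sketch: scale the three-dimensional mall view down and draw it into
# the top-right corner of the terminal picture.

def draw_minimap(frame, minimap, scale=0.25, margin=10):
    """Overlay a scaled mini-map in the top-right corner of the frame."""
    h = max(1, int(frame.shape[0] * scale))
    w = max(1, int(frame.shape[1] * scale))
    small = cv2.resize(minimap, (w, h))
    frame[margin:margin + h, -w - margin:-margin] = small
    return frame

frame = np.zeros((720, 1280, 3), np.uint8)      # main virtual-scene picture
minimap = np.full((400, 400, 3), 255, np.uint8)  # rendered mall view
draw_minimap(frame, minimap)
```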
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
In the third step, the user obtains the second scene mapping, i.e. the virtual-reality information of the surrounding environment.
S121: capture real-scene information of the user's surroundings through the real-scene sensing module;
S122: the computation module extracts real-scene features from the real-scene information, maps them to features for constructing the virtual scene according to preset mapping relations, and constructs virtual-reality scene information from those features;
S123: the virtual reality terminal presents the virtual-reality scene information; in this embodiment this is realized as virtual animation occupying all of the picture except the mini-map portion.
The real-scene information captured in step S121 is time-series frame data of images of the user's surroundings; the computation module extracts real-scene features by performing pattern-recognition analysis on the time-series frame data.
In the fourth step, the user positions himself.
The positioning step comprises:
S21: initialize the indoor reference points and load the reference-point information from the database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: using the collected data queue, calculate the RSSI mean of each AP at the current position;
S24: traverse all reference points; according to whether the RSSI mean calculated in step S23 falls within a reference point's RSSI interval for the corresponding AP, decide whether that reference point belongs to that AP's decision set;
S25: take the intersection of the decision sets of all APs:
(1) if the intersection contains exactly one reference point, output its coordinates as the algorithm's estimate, and terminate;
(2) if the intersection contains more than one reference point, compute the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each decision set and take the center of those centers as the global center; using the Euclidean distance, exclude the decision set whose center is farthest from the global center, and apply the intersection operations of sub-steps (1), (2) and (3) to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached without a result, execute sub-step (4);
(4) if the intersection is still empty when sub-step (3) reaches the last layer, use the error distance between the current RSSI mean and the reference-point RSSI means and compute the estimate with the weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the positioning information onto the three-dimensionally visualized composite space and display the current position in the composite space, i.e. the user's own position is displayed in real time on the mini-map.
The database used here requires an offline training step:
S201: discretize the area to be positioned, taking N uniformly distributed positions in the area as reference points;
S202: at each reference point of step S201, scan the WiFi signals and record the received signal strength indication (RSSI) values of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202 and calculate each AP's RSSI mean, variance, and minimum-maximum interval at that reference point, saving these parameters together with the SSID of the corresponding AP in the database;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby establishing a complete RSSI distribution map of the area to be positioned.
In the fifth step, the user's motion mode is reflected in the composite space in real time:
S31: attach multiple sensing assemblies associated with the virtual reality terminal to the person's joints;
S32: send the information of each sensing assembly to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual-reality scene information.
The sensing assembly comprises one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor, and a three-axis geomagnetic sensor.
The user's movements are now reflected in the virtual-reality scene information.
The sixth step further includes a step for positioning physical objects.
The particular artifact is a physical object, and the particular-artifact positioning step comprises:
S41: install a positioning module in the particular artifact;
S42: display the particular artifact in the composite space according to the method of steps S21 to S26.
After all the above steps are completed, the user can start moving toward the virtual NPC.
When the user comes close to the virtual NPC, an animation is played (a sketch of this proximity trigger follows).
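A sketch of the proximity trigger, with an assumed trigger radius; the patent specifies only that the animation plays when the user is close to the virtual NPC:

```python
import math

# Sketch: play the NPC animation once when the user's estimated position
# comes within a radius of the virtual NPC. The radius is an assumption.

def near_npc(user_pos, npc_pos, radius=2.0):
    return math.hypot(user_pos[0] - npc_pos[0],
                      user_pos[1] - npc_pos[1]) <= radius

played = False
for user_pos in [(10.0, 4.0), (5.2, 3.1), (4.6, 3.0)]:   # position updates
    if not played and near_npc(user_pos, (4.5, 3.0)):
        print("play NPC animation")                        # trigger once
        played = True
```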
When the user reaches a specific region, the virtual reality terminal triggers a specific message, where arrival at the specific region is determined either by positioning or by the presence of preset specified information in an image frame presented by the virtual reality terminal.
The specific-information triggering step comprises the following sub-steps:
S41: determine whether the current game image frame contains the specified information of the target person;
S42: if the current game image frame contains the specified information of the target person, obtain the display position of the specified information;
S43: add a specified animation at the display position of the specified information in the current game image frame.
Before use, the user can also submit an authorization request to the server; after the server grants it, the user's virtual reality terminal displays the positioning information of the virtual reality terminals of the other authorized users.
The authorization step comprises: after the server authorizes the virtual reality terminals, multiple virtual reality terminals appear in the same composite space, i.e. the user's virtual reality terminal displays the positioning information of the virtual reality terminals of multiple other users.
In this embodiment, the virtual reality terminal is a virtual reality helmet or a mobile terminal, chosen according to the merchant's cost considerations.
A virtual reality helmet requires purchasing dedicated equipment but gives a better effect; the user puts on the helmet to search for the virtual NPC. This method suits situations with fewer participants.
A mobile terminal such as a mobile phone or a tablet computer requires installing the corresponding software; it is convenient, but the effect is worse than with a virtual reality helmet. This method suits situations with more participants.

Claims (1)

1. A method for mapping reality into virtual reality, characterized in that it comprises a scene-data acquisition and uploading step, a data download step, a scene mapping step, a positioning step, a motion-mode mapping step, a specific-information triggering step, a particular-artifact positioning step, and an authorization step;
the scene-data acquisition and uploading step comprises: the user collects data on the real physical objects to be virtualized in advance; after the collection is complete, the data are uploaded to a cloud server and saved;
the data download step comprises: the user downloads the corresponding scene data from the cloud server through a virtual reality terminal;
the scene mapping step displays the virtual scene of the downloaded data and the area around the user in virtual reality on the virtual reality terminal, and comprises a first scene mapping sub-step, which displays virtual network elements and real physical objects within a geographic information system to form a composite space, and a second scene mapping sub-step, which maps the surrounding scene into a virtual scene;
the geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step comprises the following sub-steps:
S111: apply GIS processing to the network element, the network element being a virtual object that does not exist in reality;
S112: render the composite space with three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects;
the second scene mapping sub-step comprises the following sub-steps:
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation module extracts real-scene features from the real-scene information, maps them to features for constructing the virtual scene according to preset mapping relations, and constructs virtual-reality scene information from those features;
S123: the virtual reality terminal presents the virtual-reality scene information;
the positioning step comprises:
S21: initialize the indoor reference points and load the reference-point information from the database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: using the collected data queue, calculate the RSSI mean of each AP at the current position;
S24: traverse all reference points; according to whether the RSSI mean calculated in step S23 falls within a reference point's RSSI interval for the corresponding AP, decide whether that reference point belongs to that AP's decision set;
S25: take the intersection of the decision sets of all APs:
(1) if the intersection contains exactly one reference point, output its coordinates as the algorithm's estimate, and terminate;
(2) if the intersection contains more than one reference point, compute the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each decision set and take the center of those centers as the global center; using the Euclidean distance, exclude the decision set whose center is farthest from the global center, and apply the intersection operations of sub-steps (1), (2) and (3) to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached without a result, execute sub-step (4);
(4) if the intersection is still empty when sub-step (3) reaches the last layer, use the error distance between the current RSSI mean and the reference-point RSSI means and compute the estimate with the weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the positioning information onto the three-dimensionally visualized composite space and display the current position in the composite space;
the motion-mode mapping step comprises the following sub-steps:
S31: attach multiple sensing assemblies associated with the virtual reality terminal to the person's joints;
S32: send the information of each sensing assembly to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual-reality scene information;
the specific-information triggering step comprises: when the user reaches a specific region, the virtual reality terminal triggers a specific message, where arrival at the specific region is determined either by positioning or by the presence of preset specified information in an image frame presented by the virtual reality terminal;
the particular artifact is a physical object, and the particular-artifact positioning step comprises:
S41: install a positioning module in the particular artifact;
S42: display the particular artifact in the composite space according to the method of steps S21 to S26;
the authorization step comprises: after the server authorizes the virtual reality terminals, multiple virtual reality terminals appear in the same composite space, i.e. the user's virtual reality terminal displays the positioning information of the virtual reality terminals of multiple other users;
the virtual reality terminal is a virtual reality helmet or a mobile terminal;
the positioning step further includes an offline training step:
S201: discretize the area to be positioned, taking N uniformly distributed positions in the area as reference points;
S202: at each reference point of step S201, scan the WiFi signals and record the received signal strength indication (RSSI) values of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202 and calculate each AP's RSSI mean, variance, and minimum-maximum interval at that reference point, saving these parameters together with the SSID of the corresponding AP in the database;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby establishing a complete RSSI distribution map of the area to be positioned;
the real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation module extracts real-scene features from the real-scene information by performing pattern-recognition analysis on the time-series frame data;
the real-scene sensing module comprises one or more of: a depth camera sensor, a combination of a depth camera sensor and an RGB camera sensor, an ultrasonic positioning sensing module, a thermal-imaging positioning sensing module, and an electromagnetic positioning sensing module;
the sensing assembly comprises one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor, and a three-axis geomagnetic sensor;
when the data of a scene in the cloud server are updated, a message is pushed to the virtual reality terminals that have downloaded that scene, reminding them to update;
the specific-information triggering step comprises the following sub-steps:
S41: determine whether the current game image frame contains the specified information of the target person;
S42: if the current game image frame contains the specified information of the target person, obtain the display position of the specified information;
S43: add a specified animation at the display position of the specified information in the current game image frame;
the particular-artifact positioning step further includes a sub-step S43: when the user carries the particular artifact and moves, this is displayed in the composite space; the step of determining that a person carries the particular artifact comprises determining whether the distance between a sensor attached to the hand and the particular artifact stays within a certain range for a period of time;
the three-axis acceleration sensor is a BMA250, the three-axis angular velocity sensor an MPU6050, and the three-axis geomagnetic sensor an HMC5883.
CN201610150102.0A 2016-03-16 2016-03-16 An implementation method of virtual reality Active CN105807931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610150102.0A CN105807931B (en) 2016-03-16 2016-03-16 An implementation method of virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610150102.0A CN105807931B (en) 2016-03-16 2016-03-16 An implementation method of virtual reality

Publications (2)

Publication Number Publication Date
CN105807931A CN105807931A (en) 2016-07-27
CN105807931B true CN105807931B (en) 2019-09-17

Family

ID=56468528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610150102.0A Active CN105807931B (en) 2016-03-16 2016-03-16 A kind of implementation method of virtual reality

Country Status (1)

Country Link
CN (1) CN105807931B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101748401B1 (en) * 2016-08-22 2017-06-16 강두환 Method for controlling virtual reality attraction and system thereof
CN106648046A (en) * 2016-09-14 2017-05-10 同济大学 Virtual reality technology-based real environment mapping system
CN106649508A (en) * 2016-10-12 2017-05-10 北京小米移动软件有限公司 Page display method and apparatus, and electronic device
CN106569044B (en) * 2016-11-02 2019-05-03 西安电子科技大学 Electromagnetic spectrum situation observation method based on immersed system of virtual reality
CN106598229B (en) * 2016-11-11 2020-02-18 歌尔科技有限公司 Virtual reality scene generation method and device and virtual reality system
FR3062489B1 (en) * 2017-02-01 2020-12-25 Peugeot Citroen Automobiles Sa ANALYSIS DEVICE FOR DETERMINING A DETECTION PERIOD CONTRIBUTING TO A LATENCY TIME WITHIN AN IMMERSIVE SYSTEM OF VIRTUAL REALITY
CN109426333B (en) * 2017-08-23 2022-11-04 腾讯科技(深圳)有限公司 Information interaction method and device based on virtual space scene
CN109419604A (en) * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 Lower limb rehabilitation training method and system based on virtual reality
CN108107580A (en) * 2017-12-20 2018-06-01 浙江煮艺文化科技有限公司 Methods of exhibiting and system is presented in a kind of virtual reality scenario
CN108711189A (en) * 2018-03-29 2018-10-26 联想(北京)有限公司 The processing method and its system of virtual reality
CN109831659B (en) * 2018-11-29 2020-05-08 北京邮电大学 VR video caching method and system
CN110147770A (en) * 2019-05-23 2019-08-20 北京七鑫易维信息技术有限公司 A kind of gaze data restoring method and system
CN110349270B (en) * 2019-07-02 2023-07-28 上海迪沪景观设计有限公司 Virtual sand table presenting method based on real space positioning
CN111142967B (en) * 2019-12-26 2021-07-27 腾讯科技(深圳)有限公司 Augmented reality display method and device, electronic equipment and storage medium
CN111988375B (en) * 2020-08-04 2023-10-27 瑞立视多媒体科技(北京)有限公司 Terminal positioning method, device, equipment and storage medium
CN112330819B (en) * 2020-11-04 2024-02-06 腾讯科技(深圳)有限公司 Interaction method and device based on virtual article and storage medium
US20240169582A1 (en) * 2021-03-08 2024-05-23 Hangzhou Taro Positioning Technology Co., Ltd. Scenario triggering and interaction based on target positioning and identification
CN115645925B (en) * 2022-10-26 2023-05-30 杭州有九文化传媒有限公司 Game data processing method and system based on augmented reality
CN116597119A (en) * 2022-12-30 2023-08-15 北京津发科技股份有限公司 Man-machine interaction acquisition method, device and system of wearable augmented reality equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103137042A (en) * 2013-02-13 2013-06-05 张翔 Guide information trigger method based on scenery-viewing area in global positioning system (GPS) intelligent guide system
CN103985175A (en) * 2014-05-21 2014-08-13 清华大学 Control method and system of home entrance guard
CN104793749A (en) * 2015-04-30 2015-07-22 小米科技有限责任公司 Intelligent glasses and control method and device thereof
CN104834449A (en) * 2015-05-28 2015-08-12 广东欧珀移动通信有限公司 Mobile terminal icon managing method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101685145B1 (en) * 2010-06-09 2016-12-09 엘지전자 주식회사 Mobile terminal and control method thereof
US8660581B2 (en) * 2011-02-23 2014-02-25 Digimarc Corporation Mobile device indoor navigation
CN103823818A (en) * 2012-11-19 2014-05-28 大连鑫奇辉科技有限公司 Book system on basis of virtual reality
CN103384358A (en) * 2013-06-25 2013-11-06 云南大学 Indoor positioning method based on virtual reality and WIFI space field strength
CN104796444A (en) * 2014-01-21 2015-07-22 广州海图克计算机技术有限公司 Digital household scene control management system and method
CN103810353A (en) * 2014-03-09 2014-05-21 杨智 Real scene mapping system and method in virtual reality
CN104063466B (en) * 2014-06-27 2017-11-07 深圳先进技术研究院 The 3 D displaying method and system of virtual reality integration
CN104616333A (en) * 2014-12-24 2015-05-13 深圳市腾讯计算机系统有限公司 Game video processing method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103137042A (en) * 2013-02-13 2013-06-05 张翔 Guide information trigger method based on scenery-viewing area in global positioning system (GPS) intelligent guide system
CN103985175A (en) * 2014-05-21 2014-08-13 清华大学 Control method and system of home entrance guard
CN104793749A (en) * 2015-04-30 2015-07-22 小米科技有限责任公司 Intelligent glasses and control method and device thereof
CN104834449A (en) * 2015-05-28 2015-08-12 广东欧珀移动通信有限公司 Mobile terminal icon managing method and device

Also Published As

Publication number Publication date
CN105807931A (en) 2016-07-27

Similar Documents

Publication Publication Date Title
CN105807931B (en) An implementation method of virtual reality
CN105824416B (en) A method of by virtual reality technology in conjunction with cloud service technology
CN105608746B (en) A method of reality is subjected to Virtual Realization
AU2023200677B2 (en) System and method for augmented and virtual reality
JP5934368B2 (en) Portable device, virtual reality system and method
US9690376B2 (en) Wireless wrist computing and control device and method for 3D imaging, mapping, networking and interfacing
CN105824417B (en) human-object combination method adopting virtual reality technology
CN105797378A (en) Game video realizing method based on virtual reality technology
JP2023126474A (en) Systems and methods for augmented reality
CN105797379A (en) Game video processing method based on virtual reality technology
JP2019512173A (en) Method and apparatus for displaying multimedia information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200701

Address after: 221000 Xuzhou High-tech Industrial Development Zone, Xuzhou City, Jiangsu Province Second Industrial Park Yinshan Road East and Lijiang Road South Safety Science and Technology Industrial Park

Patentee after: XUZHOU SHUOBO ELECTRONIC TECHNOLOGY Co.,Ltd.

Address before: 610000 No. 6, No. 505, D zone, Tianfu Software Park, 599 century South Road, Tianfu District, Chengdu, Sichuan

Patentee before: CHENGDU CHAINSAW INTERACTIVE TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An Implementation Method of Virtual Reality

Effective date of registration: 20230223

Granted publication date: 20190917

Pledgee: Xuzhou Huaichang Investment Co.,Ltd.

Pledgor: XUZHOU SHUOBO ELECTRONIC TECHNOLOGY CO.,LTD.

Registration number: Y2023320000096

PE01 Entry into force of the registration of the contract for pledge of patent right