Summary of the invention
It is an object of the invention to overcome the deficiencies of the prior art and to provide a method for virtualizing reality, in which the virtual world is associated with the true world of reality.
The object of the present invention is achieved through the following technical solution: a method for virtualizing reality, comprising a scene-data acquisition and uploading step, a data downloading step, a scene mapping step, a positioning step, a motion-mode mapping step, a specific-information triggering step, and an authorization step;
The scene-data acquisition and uploading step comprises: the user performs data acquisition in advance on the real entity objects to be virtualized; after acquisition is complete, the data are uploaded to a cloud server and saved;
The data downloading step comprises: the user downloads the corresponding scene data from the cloud server through a virtual-reality terminal;
The scene mapping step is used to display in virtual reality, on the user's virtual-reality terminal, the virtual scene built from the downloaded data together with the region surrounding the user; it comprises a first scene mapping sub-step for displaying virtual network elements and real entity objects in a geographic information system to form a composite space, and a second scene mapping sub-step for mapping the surrounding scene into the virtual scene;
The geographic information system comprises an electronic three-dimensional map, and the first scene mapping sub-step comprises the following sub-steps:
S111: convert the network elements into GIS form, the network elements being virtual objects that do not exist in reality;
S112: render the composite space as a three-dimensional visualization;
S113: the virtual-reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects;
The second scene mapping sub-step comprises the following sub-steps:
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation and processing module extracts real-scene features from the real-scene information, maps the real-scene features to features for constructing the virtual scene according to predefined mapping relations, and constructs virtual-reality scene information based on the features for constructing the virtual scene;
S123: the virtual-reality terminal presents the virtual-reality scene information;
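The feature mapping of step S122 can be sketched as follows. This is a minimal illustration only: the patent does not specify the feature names, the recognition method, or the mapping relations, so the dictionary entries and feature kinds here are hypothetical stand-ins for the predefined mapping relations.

```python
# Hypothetical predefined mapping relations (step S122): each recognized
# real-scene feature kind is translated into a virtual-scene counterpart.
# All names here are illustrative assumptions, not taken from the patent.
PREDEFINED_MAPPING = {
    "wall": "castle_wall",
    "door": "castle_gate",
    "person": "npc_avatar",
}

def build_virtual_scene(real_features):
    """Map each extracted real-scene feature to its virtual-scene feature,
    keeping the position so the virtual object overlays the real one."""
    scene = []
    for feat in real_features:
        virtual_kind = PREDEFINED_MAPPING.get(feat["kind"])
        if virtual_kind is not None:
            scene.append({"kind": virtual_kind, "position": feat["position"]})
    return scene

detected = [{"kind": "wall", "position": (0, 0)},
            {"kind": "tree", "position": (5, 2)}]  # "tree" has no mapping
print(build_virtual_scene(detected))  # only the mapped "wall" survives
```

Features without a predefined mapping are simply dropped, matching the patent's wording that only "some features" of the real environment are carried into the virtual scene.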
The positioning step comprises:
S21: initialize the indoor reference points and load the reference-point information from a database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: using the collected data queue, compute the mean RSSI of each AP at the current location;
S24: traverse all reference points; according to whether the mean RSSI computed in step S23 falls within the RSSI interval of a given reference point for the corresponding AP, decide whether that reference point belongs to the decision set of that AP;
S25: take the intersection of the decision sets of all APs:
(1) if the intersection contains exactly one reference point, output that reference point's coordinates as the algorithm's estimate, and terminate;
(2) if the intersection contains more than one reference point, compute the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each decision set and take the center of these set centers as the global center; exclude the decision set whose center is farthest (by Euclidean distance) from the global center, and apply the intersection operations of sub-steps (1), (2), and (3) of step S25 to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached and still no result is obtained, execute sub-step (4);
(4) if sub-step (3) reaches the last layer and the intersection is still empty, use the error distance between the current mean RSSI and the reference points' mean RSSI, and compute the estimate with the weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the location information into the three-dimensionally visualized composite space, and display the current position in the composite space;
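The decision-set construction and weighted k-nearest-neighbor estimation of steps S24 and S25 can be sketched as follows. This is a simplified illustration under an assumed fingerprint format: the layer-wise exclusion of sub-step (3) is collapsed into a direct fall-through to the weighted k-NN of sub-step (4), so it is a sketch of the idea, not the patent's full procedure.

```python
import math

def decision_sets(current_mean, fingerprints):
    """S24: for each AP, collect the reference points whose recorded RSSI
    interval contains the currently measured mean RSSI of that AP.
    fingerprints: {ref_point: {ap: (mean, lo, hi)}} -- an assumed layout."""
    sets = {}
    for ap, rssi in current_mean.items():
        sets[ap] = {rp for rp, aps in fingerprints.items()
                    if ap in aps and aps[ap][1] <= rssi <= aps[ap][2]}
    return sets

def weighted_knn(current_mean, candidates, fingerprints, coords, k=3):
    """S25(2)/(4): rank candidates by RSSI error distance and average
    the coordinates of the k best, weighted by the inverse error."""
    errs = []
    for rp in candidates:
        err = math.sqrt(sum((current_mean[ap] - fingerprints[rp][ap][0]) ** 2
                            for ap in current_mean if ap in fingerprints[rp]))
        errs.append((err, rp))
    errs.sort()
    best = errs[:k]
    wsum = sum(1.0 / (e + 1e-9) for e, _ in best)
    x = sum(coords[rp][0] / (e + 1e-9) for e, rp in best) / wsum
    y = sum(coords[rp][1] / (e + 1e-9) for e, rp in best) / wsum
    return (x, y)

def estimate(current_mean, fingerprints, coords, k=3):
    """S25: intersect the per-AP decision sets; if the intersection is
    empty, fall through to weighted k-NN over all reference points
    (standing in for the exclusion layers of sub-step (3))."""
    sets = list(decision_sets(current_mean, fingerprints).values())
    inter = set.intersection(*sets) if sets else set()
    if len(inter) == 1:
        return coords[next(iter(inter))]                               # S25(1)
    if len(inter) > 1:
        return weighted_knn(current_mean, inter, fingerprints, coords, k)      # S25(2)
    return weighted_knn(current_mean, fingerprints.keys(), fingerprints, coords, k)  # S25(4)

fps = {"A": {"ap1": (-40.0, -45.0, -35.0)}, "B": {"ap1": (-60.0, -65.0, -55.0)}}
coords = {"A": (0.0, 0.0), "B": (10.0, 0.0)}
print(estimate({"ap1": -41.0}, fps, coords))  # (0.0, 0.0)
```

With a measurement of -41 dBm, only reference point A's interval matches, so the intersection has one element and sub-step (1) returns its coordinates directly.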
The motion-mode mapping step comprises the following sub-steps:
S31: arrange, at the joints of the user's body, multiple sensing packages associated with the virtual-reality terminal;
S32: the information of each sensing package is sent to the virtual-reality terminal in real time;
S33: the virtual-reality terminal parses the information after receiving it and presents it in the virtual-reality scene information;
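Steps S32 and S33 imply a small wire format between the sensing packages and the terminal. The patent defines none, so the packet layout below — one joint identifier followed by nine little-endian floats for the three-axis acceleration, angular-rate, and geomagnetic readings — is purely an illustrative assumption.

```python
import struct

# Hypothetical packet layout for one sensing package (steps S32/S33):
# 1 byte joint id + 9 little-endian 32-bit floats (acc, gyro, mag).
PACKET_FMT = "<B9f"

def pack_sample(joint_id, acc, gyro, mag):
    """What a sensing package might send in real time (S32)."""
    return struct.pack(PACKET_FMT, joint_id, *acc, *gyro, *mag)

def parse_sample(data):
    """S33: the virtual-reality terminal parses one received packet."""
    vals = struct.unpack(PACKET_FMT, data)
    return {"joint": vals[0],
            "acc": vals[1:4], "gyro": vals[4:7], "mag": vals[7:10]}

pkt = pack_sample(2, (0.0, 0.0, 9.8), (0.1, 0.0, 0.0), (30.0, 0.0, -10.0))
print(parse_sample(pkt)["joint"])  # 2
```

The parsed per-joint samples would then drive the avatar pose that is presented in the virtual-reality scene information.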
The specific-information triggering step comprises: when the user reaches a specific region, the virtual-reality terminal triggers a specific message, wherein arrival at the specific region is judged either by positioning or by the appearance of preset specified information in a picture frame captured by the virtual-reality terminal;
The particular artifact is a physical object, and the particular-artifact positioning step comprises:
S41: arrange a locating module in the particular artifact;
S42: display the particular artifact in the composite space according to the method of steps S21 to S26;
The authorization step comprises: after the server authorizes the virtual-reality terminals, multiple virtual-reality terminals appear in the same composite space, i.e. a user's virtual-reality terminal displays the positioning information of the virtual-reality terminals of multiple other users.
The virtual-reality terminal is a virtual-reality helmet or a mobile terminal.
The positioning step further comprises an offline training step:
S201: discretize the area to be positioned, uniformly taking N positions in the area as reference points;
S202: at each reference point of step S201, scan the WiFi signal and record the received signal strength indication (RSSI) of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, compute each AP's RSSI mean, variance, and minimum-maximum interval at that reference point, and save these parameters, together with the SSID of the corresponding AP, in the database;
S204: perform the operations of steps S202 and S203 for all reference points until all reference points have been trained, thereby establishing the complete RSSI distribution map of the area to be positioned.
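The offline training of steps S202 to S204 can be sketched as follows; the scan representation (a list of per-SSID RSSI dictionaries per reference point) is an assumed in-memory stand-in for the database the patent describes.

```python
import statistics

def train_reference_point(scans):
    """S202-S203: summarize a time series of RSSI scans at one reference
    point into per-AP mean, variance, and min-max interval.
    scans: list of {ssid: rssi} dicts recorded over a continuous period."""
    per_ap = {}
    for scan in scans:
        for ssid, rssi in scan.items():
            per_ap.setdefault(ssid, []).append(rssi)
    return {ssid: {"mean": statistics.mean(v),
                   "var": statistics.pvariance(v),
                   "min": min(v), "max": max(v)}
            for ssid, v in per_ap.items()}

def build_distribution_map(reference_scans):
    """S204: repeat for every reference point to obtain the complete
    RSSI distribution map of the area to be positioned."""
    return {rp: train_reference_point(scans)
            for rp, scans in reference_scans.items()}

fp = train_reference_point([{"ap1": -40}, {"ap1": -42}, {"ap1": -44}])
print(fp["ap1"]["mean"])  # -42
```

The stored min-max interval per AP is exactly what step S24 of the online phase tests the measured mean RSSI against.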
The composite space after three-dimensional visualization is a three-dimensional view of a building.
The viewing angle at which the virtual-reality terminal presents the three-dimensionally visualized composite space is adjustable.
The real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation and processing module extracts real-scene features by performing pattern-recognition analysis on the time-series frame data.
The real-scene sensing module comprises one or more of: a depth camera sensor, a combined unit of a depth camera sensor and an RGB camera sensor, an ultrasonic positioning sensing module, a thermal-imaging positioning sensing module, and an electromagnetic positioning sensing module.
The sensing package comprises one or more of a three-axis acceleration sensor, a three-axis angular-rate sensor, and a three-axis geomagnetic sensor.
The specific-information triggering step comprises the following sub-steps:
S41: judge whether the current game picture frame contains the specified information of the target person;
S42: if the current game picture frame contains the specified information of the target person, obtain the display position of the specified information;
S43: add a specified animation at the display position of the specified information in the current game picture frame.
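The triggering sub-steps above can be sketched as follows. The detector is a placeholder: the patent does not specify the recognition method, so `find_specified_info` simply looks the target up in an already-prepared detection result rather than running real image recognition, and the animation name is a hypothetical example.

```python
def find_specified_info(frame_detections, target):
    """S41/S42: return the display position of the target's specified
    information in the current game frame, or None if absent.
    frame_detections: assumed output of an upstream recognizer."""
    for det in frame_detections:
        if det["label"] == target:
            return det["position"]
    return None

def trigger_animation(frame_detections, target, overlays):
    """S43: add a specified animation at the display position."""
    pos = find_specified_info(frame_detections, target)
    if pos is not None:
        overlays.append({"animation": "npc_greeting", "position": pos})
    return overlays

overlays = trigger_animation([{"label": "npc_1", "position": (120, 80)}],
                             "npc_1", [])
print(overlays)  # [{'animation': 'npc_greeting', 'position': (120, 80)}]
```

When the specified information is absent from the frame, the overlay list is returned unchanged and nothing is triggered.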
The particular-artifact positioning step further comprises a sub-step S43: when the user carries the particular artifact and moves, this is displayed in the composite space; wherein the step of judging that a person is carrying the particular artifact comprises judging whether the relative distance between a sensor arranged on the hand and the particular artifact remains within a certain range over a period of time.
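The carrying judgment just described can be sketched as follows; the 0.5 m threshold and the 10-sample window are illustrative assumptions, since the patent only says "within a certain range" and "within a period of time".

```python
def is_carried(distances, max_dist=0.5, window=10):
    """Judge that the user is carrying the particular artifact: the
    distance between the hand sensor and the artifact's locating module
    must stay within max_dist for the last `window` samples.
    Threshold and window length are assumed values, not from the patent."""
    if len(distances) < window:
        return False
    return all(d <= max_dist for d in distances[-window:])

print(is_carried([0.3] * 10))         # True: held close for the full window
print(is_carried([0.3] * 9 + [2.0]))  # False: artifact moved away
```

Once `is_carried` holds, the artifact's position can simply track the user's position in the composite space instead of relying on its own locating module.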
The beneficial effects of the present invention are:
The present invention associates the virtualized world with the real world: a mini-map located in the virtualized world is displayed in real time, and the user can also observe his own body movements in real time. Meanwhile, by using the cloud, the scene data are acquired in advance and stored in a cloud server, so the virtual-reality terminal only needs to download them.
Specifically, a three-dimensional view of a real-world building is displayed in the form of a mini-map and associated with positioning, which is intuitive and visual; real-scene sensing technology and corresponding computation and processing technology are applied to understand and analyze the real environment, and some features of the real environment are mapped into the virtual scene presented to the user, thereby improving the user experience; moreover, the positioning method realizes the positioning and three-dimensional position display of moving targets (people, devices), providing coordinate estimates for the location-based services of the virtual-reality terminal with higher precision and lower delay (the delay can be set indirectly through the scan period); in addition, the cloud server provides an update-push function, improving reliability; the invention also includes specific-information triggering, whereby specific information can be triggered when the user moves into a particular range; meanwhile, the present invention also provides functions for associating real-world objects with virtual reality for display, and for displaying multiple virtual-reality terminals.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
As shown in Figure 1, a method for virtualizing reality comprises a scene-data acquisition and uploading step, a data downloading step, a scene mapping step, a positioning step, a motion-mode mapping step, a specific-information triggering step, a particular-artifact positioning step, and an authorization step;
The scene-data acquisition and uploading step comprises: the user performs data acquisition in advance on the real entity objects to be virtualized; after acquisition is complete, the data are uploaded to a cloud server and saved;
The data downloading step comprises: the user downloads the corresponding scene data from the cloud server through a virtual-reality terminal;
The scene mapping step is used to display in virtual reality, on the user's virtual-reality terminal, the virtual scene built from the downloaded data together with the region surrounding the user; it comprises a first scene mapping sub-step for displaying virtual network elements and real entity objects in a geographic information system to form a composite space, and a second scene mapping sub-step for mapping the surrounding scene into the virtual scene;
The geographic information system comprises an electronic three-dimensional map, and the first scene mapping sub-step comprises the following sub-steps:
S111: convert the network elements into GIS form, the network elements being virtual objects that do not exist in reality;
S112: render the composite space as a three-dimensional visualization;
S113: the virtual-reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects;
The second scene mapping sub-step comprises the following sub-steps:
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation and processing module extracts real-scene features from the real-scene information, maps the real-scene features to features for constructing the virtual scene according to predefined mapping relations, and constructs virtual-reality scene information based on the features for constructing the virtual scene;
S123: the virtual-reality terminal presents the virtual-reality scene information;
The positioning step comprises:
S21: initialize the indoor reference points and load the reference-point information from a database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: using the collected data queue, compute the mean RSSI of each AP at the current location;
S24: traverse all reference points; according to whether the mean RSSI computed in step S23 falls within the RSSI interval of a given reference point for the corresponding AP, decide whether that reference point belongs to the decision set of that AP;
S25: take the intersection of the decision sets of all APs:
(1) if the intersection contains exactly one reference point, output that reference point's coordinates as the algorithm's estimate, and terminate;
(2) if the intersection contains more than one reference point, compute the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each decision set and take the center of these set centers as the global center; exclude the decision set whose center is farthest (by Euclidean distance) from the global center, and apply the intersection operations of sub-steps (1), (2), and (3) of step S25 to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached and still no result is obtained, execute sub-step (4);
(4) if sub-step (3) reaches the last layer and the intersection is still empty, use the error distance between the current mean RSSI and the reference points' mean RSSI, and compute the estimate with the weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the location information into the three-dimensionally visualized composite space, and display the current position in the composite space;
The motion-mode mapping step comprises the following sub-steps:
S31: arrange, at the joints of the user's body, multiple sensing packages associated with the virtual-reality terminal;
S32: the information of each sensing package is sent to the virtual-reality terminal in real time;
S33: the virtual-reality terminal parses the information after receiving it and presents it in the virtual-reality scene information;
The specific-information triggering step comprises: when the user reaches a specific region, the virtual-reality terminal triggers a specific message, wherein arrival at the specific region is judged either by positioning or by the appearance of preset specified information in a picture frame captured by the virtual-reality terminal;
The particular artifact is a physical object, and the particular-artifact positioning step comprises:
S41: arrange a locating module in the particular artifact;
S42: display the particular artifact in the composite space according to the method of steps S21 to S26;
The authorization step comprises: after the server authorizes the virtual-reality terminals, multiple virtual-reality terminals appear in the same composite space, i.e. a user's virtual-reality terminal displays the positioning information of the virtual-reality terminals of multiple other users.
The virtual-reality terminal is a virtual-reality helmet or a mobile terminal.
The positioning step further comprises an offline training step:
S201: discretize the area to be positioned, uniformly taking N positions in the area as reference points;
S202: at each reference point of step S201, scan the WiFi signal and record the received signal strength indication (RSSI) of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, compute each AP's RSSI mean, variance, and minimum-maximum interval at that reference point, and save these parameters, together with the SSID of the corresponding AP, in the database;
S204: perform the operations of steps S202 and S203 for all reference points until all reference points have been trained, thereby establishing the complete RSSI distribution map of the area to be positioned.
The composite space after three-dimensional visualization is a three-dimensional view of a building.
The viewing angle at which the virtual-reality terminal presents the three-dimensionally visualized composite space is adjustable.
The real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation and processing module extracts real-scene features by performing pattern-recognition analysis on the time-series frame data.
The real-scene sensing module comprises one or more of: a depth camera sensor, a combined unit of a depth camera sensor and an RGB camera sensor, an ultrasonic positioning sensing module, a thermal-imaging positioning sensing module, and an electromagnetic positioning sensing module.
The sensing package comprises one or more of a three-axis acceleration sensor, a three-axis angular-rate sensor, and a three-axis geomagnetic sensor.
When the data of a scene in the cloud server are updated, a message is pushed to the virtual-reality terminals that have downloaded that scene, reminding them to update.
The specific-information triggering step comprises the following sub-steps:
S41: judge whether the current game picture frame contains the specified information of the target person;
S42: if the current game picture frame contains the specified information of the target person, obtain the display position of the specified information;
S43: add a specified animation at the display position of the specified information in the current game picture frame.
The present embodiment is applied to a shopping-mall event: an event making use of virtual reality is held in a mall, and the user needs to find, by the method of the invention, a particular artifact at a specific position, for example a virtual NPC.
In the first step, the user performs data acquisition in advance on the real entity objects to be virtualized; after acquisition is complete, the data are uploaded to a cloud server and saved. The data downloading step comprises: the user downloads the corresponding scene data from the cloud server through a virtual-reality terminal. When the data of a scene in the cloud server are updated, a message is pushed to the virtual-reality terminals that have downloaded that scene, reminding them to update.
In the second step, the user obtains the first scene mapping, i.e. the shape and floors of the entire mall and the specific positions of the virtual NPCs.
S111: convert the network elements into GIS form, the network elements being virtual objects that do not exist in reality; in this embodiment the network elements are virtual NPCs;
S112: render the composite space as a three-dimensional visualization, i.e. obtain the shape and floors of the entire mall, optionally also including part of the terrain outside the mall;
S113: the virtual-reality terminal presents the three-dimensionally visualized shape and floors of the entire mall and the positions of the virtual NPCs in the mall; in this embodiment this is realized in the form of a mini-map occupying a corner of the picture in the virtual-reality terminal.
The viewing angle at which the virtual-reality terminal presents the three-dimensionally visualized composite space is adjustable.
In the third step, the user obtains the second scene mapping, i.e. the virtual-reality information of the surroundings.
S121: capture real-scene information of the user's surroundings through a real-scene sensing module;
S122: a computation and processing module extracts real-scene features from the real-scene information, maps the real-scene features to features for constructing the virtual scene according to predefined mapping relations, and constructs virtual-reality scene information based on the features for constructing the virtual scene;
S123: the virtual-reality terminal presents the virtual-reality scene information; in this embodiment this is realized in the form of virtual animation, occupying all of the picture except the mini-map portion.
Wherein, the real-scene information of the user's surroundings captured in step S121 is time-series frame data of images of the user's surroundings; the computation and processing module extracts real-scene features by performing pattern-recognition analysis on the time-series frame data.
In the fourth step, the user positions himself.
The positioning step comprises:
S21: initialize the indoor reference points and load the reference-point information from a database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: using the collected data queue, compute the mean RSSI of each AP at the current location;
S24: traverse all reference points; according to whether the mean RSSI computed in step S23 falls within the RSSI interval of a given reference point for the corresponding AP, decide whether that reference point belongs to the decision set of that AP;
S25: take the intersection of the decision sets of all APs:
(1) if the intersection contains exactly one reference point, output that reference point's coordinates as the algorithm's estimate, and terminate;
(2) if the intersection contains more than one reference point, compute the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, compute the center of each decision set and take the center of these set centers as the global center; exclude the decision set whose center is farthest (by Euclidean distance) from the global center, and apply the intersection operations of sub-steps (1), (2), and (3) of step S25 to the remaining decision sets until an estimate is obtained, then terminate; if the last layer is reached and still no result is obtained, execute sub-step (4);
(4) if sub-step (3) reaches the last layer and the intersection is still empty, use the error distance between the current mean RSSI and the reference points' mean RSSI, and compute the estimate with the weighted k-nearest-neighbor algorithm according to the minimum-RSSI-error principle;
S26: map the location information into the three-dimensionally visualized composite space, and display the current position in the composite space, i.e. the user's own position is displayed in real time in the mini-map.
Wherein, the database requires an offline training step:
S201: discretize the area to be positioned, uniformly taking N positions in the area as reference points;
S202: at each reference point of step S201, scan the WiFi signal and record the received signal strength indication (RSSI) of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, compute each AP's RSSI mean, variance, and minimum-maximum interval at that reference point, and save these parameters, together with the SSID of the corresponding AP, in the database;
S204: perform the operations of steps S202 and S203 for all reference points until all reference points have been trained, thereby establishing the complete RSSI distribution map of the area to be positioned.
In the fifth step, the user's motion mode needs to be embodied in the composite space in real time:
S31: arrange, at the joints of the user's body, multiple sensing packages associated with the virtual-reality terminal;
S32: the information of each sensing package is sent to the virtual-reality terminal in real time;
S33: the virtual-reality terminal parses the information after receiving it and presents it in the virtual-reality scene information.
The sensing package comprises one or more of a three-axis acceleration sensor, a three-axis angular-rate sensor, and a three-axis geomagnetic sensor.
The user's movements are now embodied in the virtual-reality scene information.
The sixth step further includes a step of positioning a physical object.
The particular artifact is a physical object, and the particular-artifact positioning step comprises:
S41: arrange a locating module in the particular artifact;
S42: display the particular artifact in the composite space according to the method of steps S21 to S26.
After all of the above are completed, the user can start to move toward the virtual NPC.
Wherein, when the user comes close to the virtual NPC, an animation is played.
When the user reaches a specific region, the virtual-reality terminal triggers a particular message, wherein arrival at the specific region is judged either by positioning or by the appearance of preset specified information in a picture frame captured by the virtual-reality terminal.
The specific-information triggering step comprises the following sub-steps:
S41: judge whether the current game picture frame contains the specified information of the target person;
S42: if the current game picture frame contains the specified information of the target person, obtain the display position of the specified information;
S43: add a specified animation at the display position of the specified information in the current game picture frame.
The user may also submit an authorization request to the server before use; after the server grants authorization, the user's virtual-reality terminal displays the positioning information of the virtual-reality terminals of multiple other authorized users.
The authorization step comprises: after the server authorizes the virtual-reality terminals, multiple virtual-reality terminals appear in the same composite space, i.e. a user's virtual-reality terminal displays the positioning information of the virtual-reality terminals of multiple other users.
In this embodiment, the virtual-reality terminal is a virtual-reality helmet or a mobile terminal; the specific choice depends on the merchant's cost considerations.
If a virtual-reality helmet is used, dedicated equipment must be purchased, but the effect is better: the user can put on the virtual-reality helmet to search for the virtual NPCs. This method is suitable for cases with fewer participants.
If a mobile terminal is used, such as a mobile phone or tablet computer, the corresponding software needs to be installed; this is convenient, but the effect is inferior to that of the virtual-reality helmet. This method is suitable for cases with more participants.