CN105797379A - Game video processing method based on virtual reality technology - Google Patents

Game video processing method based on virtual reality technology

Info

Publication number
CN105797379A
CN105797379A (application CN201610150148.2A)
Authority
CN
China
Prior art keywords
virtual reality
scene
information
reference point
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610150148.2A
Other languages
Chinese (zh)
Inventor
赖斌斌
江兰波
樊星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Chainsaw Interactive Technology Co Ltd
Original Assignee
Chengdu Chainsaw Interactive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Chainsaw Interactive Technology Co Ltd filed Critical Chengdu Chainsaw Interactive Technology Co Ltd
Priority to CN201610150148.2A priority Critical patent/CN105797379A/en
Publication of CN105797379A publication Critical patent/CN105797379A/en
Pending legal-status Critical Current

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 — Controlling game characters or game objects based on the game progress
    • A63F13/56 — Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/60 — Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F2300/00 — Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 — Methods for processing data by generating or executing the game program
    • A63F2300/6009 — Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • A63F2300/66 — Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/80 — Features of games specially adapted for executing a specific type of game
    • A63F2300/8082 — Virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a game video processing method based on virtual reality technology. The method comprises a scene mapping step, a positioning step, a motion mode mapping step and a specific-information trigger step. The virtual world is associated with the real world, and the user's position in the virtual world is displayed in real time on a mini-map; at the same time, users can observe their own actions in real time. Specifically, buildings of the real world are displayed as three-dimensional views in a mini-map linked to positioning, which makes the display vivid and intuitive; real-scene sensing technology and corresponding computation are used to understand and analyze the real environment and to map selected features of the real environment into the virtual scene presented to the user, thereby improving the user experience.

Description

Game video processing method based on virtual reality technology
Technical field
The present invention relates to a game video processing method based on virtual reality technology.
Background technology
Virtual reality technology is a computer simulation system that creates a virtual world and lets users experience it: a computer generates a simulated environment — an interactive three-dimensional dynamic scene built from multi-source information fusion together with simulation of entity behavior — in which the user is immersed. In other words, virtual reality uses computer simulation to produce a three-dimensional virtual world that supplies the user with simulated visual, auditory, tactile and other sensory input, so that the user feels personally present and can observe objects in the three-dimensional space freely and in real time. Virtual reality is a synthesis of multiple technologies, including real-time three-dimensional computer graphics, wide-angle (wide field of view) stereoscopic display, tracking of the observer's head, eyes and hands, haptic/force feedback, stereo sound, network transmission, and speech input/output.
In virtual reality, when the user moves, the computer immediately performs complex computation and returns an accurate 3D image of the world to produce a sense of presence. The technology integrates later developments of computer graphics (CG), computer simulation, artificial intelligence, sensing, display and parallel network processing, and is a high-tech simulation system generated with the aid of computer technology.
However, existing virtual reality technology cannot be associated with the real world: users cannot connect the virtual world with the real one, which always produces a sense of distance.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and to provide a game video processing method based on virtual reality technology that associates the virtual world with the real world.
The object of the invention is achieved through the following technical solution: a game video processing method based on virtual reality technology, comprising a scene mapping step, a positioning step, a motion mode mapping step and a specific-information trigger step. The scene mapping step presents the virtual scene and the region around the user in virtual reality on a virtual reality terminal, and includes a first scene mapping sub-step, in which virtual network elements are displayed in a geographic information system (GIS) to form a composite space, and a second scene mapping sub-step, in which real entity objects are mapped into the virtual scene surrounding the user.
The geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step includes the following sub-steps:
S111: perform GIS conversion of the network elements, a network element being a virtual object that does not exist in reality;
S112: render the composite space in three-dimensional visualization;
S113: the virtual reality terminal presents the three-dimensionally visualized composite space and the positions of the virtual objects.
The second scene mapping sub-step includes the following sub-steps:
S121: capture real-scene information of the user's surroundings with a real-scene sensing module;
S122: a computing module extracts real-scene features from the real-scene information, maps them, on the basis of preset mapping relations, to features for building the virtual scene, and constructs virtual reality scene information from those features;
S123: the virtual reality terminal presents the virtual reality scene information.
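The patent does not specify how the preset mapping relations of step S122 are represented, so the following is only an illustrative sketch: a lookup table from detected real-scene feature labels to virtual-scene elements, with the pattern-recognition stage stubbed out. All names (`FEATURE_MAP`, `extract_features`, `build_virtual_scene`) are assumptions, not part of the patent.

```python
# Preset mapping relations (assumed): real-scene feature -> virtual-scene element.
FEATURE_MAP = {
    "wall":   "stone_rampart",
    "door":   "castle_gate",
    "pillar": "ancient_column",
}

def extract_features(frames):
    """Stand-in for the pattern-recognition analysis of time-series frames (S121).
    Here each 'frame' is already a list of detected feature labels."""
    seen = []
    for frame in frames:
        for label in frame:
            if label not in seen:
                seen.append(label)
    return seen

def build_virtual_scene(real_features):
    """Map real-scene features to virtual-scene elements (S122); unmapped
    features (e.g. people) are simply not rendered in this sketch."""
    return [FEATURE_MAP[f] for f in real_features if f in FEATURE_MAP]

frames = [["wall", "door"], ["wall", "pillar", "person"]]
print(build_virtual_scene(extract_features(frames)))
# ['stone_rampart', 'castle_gate', 'ancient_column']
```

In a real system the frame analysis would come from the depth/RGB sensing module described below; the table merely makes the "preset mapping relations" concrete.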
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information into a database;
S22: set the queue and filter parameters, and collect WiFi signal data into the queue;
S23: using the collected data queue, calculate the mean RSSI of each AP at the current position;
S24: traverse all reference points and, according to whether the mean RSSI calculated in step S23 falls within the RSSI interval of a given reference point for the corresponding AP, decide whether that reference point belongs to the decision set of that AP;
S25: compute the intersection of the decision sets of all APs:
(1) if the intersection contains exactly one reference point, output the coordinates of that reference point as the estimate of the algorithm, and terminate;
(2) if the intersection contains more than one reference point, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, compute the estimate with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is empty, calculate the center of each decision set, take the center of these centers as the global center, use the Euclidean distance to exclude the decision set whose center is farthest from the global center, and apply the intersection operation of sub-steps (1), (2) and (3) of step S25 to the remaining decision sets until an estimate is obtained, then terminate; if no result is obtained after the last layer, perform sub-step (4);
(4) if the intersection is still empty after the last layer of sub-step (3), use the error distance between the current mean RSSI and the mean RSSI of the reference points and, according to the minimum-RSSI-error principle, compute the estimate with the weighted k-nearest-neighbor algorithm;
S26: map the position information into the three-dimensionally visualized composite space and display the current position in the composite space.
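The on-line phase S23–S25 can be sketched as follows, under simplifying assumptions: the fingerprint database maps each reference point to a per-AP RSSI interval and mean plus coordinates, and the empty-intersection fallback of sub-steps (3)–(4) is collapsed into a single minimum-error search over all points. The data shapes and names (`DB`, `decision_set`, `estimate`) are illustrative, not from the patent.

```python
import math

# Assumed fingerprint database:
# reference point -> ({AP: (rssi_min, rssi_max, rssi_mean)}, (x, y) coordinates)
DB = {
    "rp1": ({"ap1": (-60, -50, -55), "ap2": (-75, -65, -70)}, (0.0, 0.0)),
    "rp2": ({"ap1": (-58, -48, -53), "ap2": (-80, -70, -75)}, (5.0, 0.0)),
    "rp3": ({"ap1": (-90, -80, -85), "ap2": (-60, -50, -55)}, (0.0, 5.0)),
}

def decision_set(ap, mean_rssi):
    """S24: reference points whose RSSI interval for `ap` contains the mean."""
    return {rp for rp, (aps, _) in DB.items()
            if ap in aps and aps[ap][0] <= mean_rssi <= aps[ap][1]}

def estimate(current, k=2):
    """S25: intersect per-AP decision sets, then estimate with weighted kNN."""
    sets = [decision_set(ap, rssi) for ap, rssi in current.items()]
    cand = set.intersection(*sets) if sets else set()
    if len(cand) == 1:                 # sub-step (1): unique match
        return DB[cand.pop()][1]
    if not cand:                       # sub-steps (3)/(4), collapsed fallback
        cand = set(DB)
    def err(rp):                       # RSSI error distance to a reference point
        aps = DB[rp][0]
        return math.sqrt(sum((aps[ap][2] - rssi) ** 2
                             for ap, rssi in current.items() if ap in aps))
    best = sorted(cand, key=err)[:k]   # sub-step (2): k smallest-error points
    w = [1.0 / (err(rp) + 1e-9) for rp in best]
    x = sum(wi * DB[rp][1][0] for wi, rp in zip(w, best)) / sum(w)
    y = sum(wi * DB[rp][1][1] for wi, rp in zip(w, best)) / sum(w)
    return (x, y)

print(estimate({"ap1": -54.0, "ap2": -71.0}))
```

With the sample readings above, both rp1 and rp2 survive the intersection, and the weighted average lands between them, biased toward the lower-error rp1.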
The motion mode mapping step includes the following sub-steps:
S31: place a plurality of sensing assemblies, associated with the virtual reality terminal, at the joints of the person;
S32: each sensing assembly sends its information to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
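The patent leaves the packet format of steps S31–S33 open, so the following is a minimal sketch under assumed data shapes: each joint-mounted assembly reports its three-axis acceleration, angular rate and geomagnetic readings, and the terminal assembles a per-joint pose dictionary, with the latest reading winning. All field and joint names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SensorPacket:
    joint: str     # e.g. "left_elbow" (assumed naming)
    accel: tuple   # three-axis acceleration (m/s^2)
    gyro: tuple    # three-axis angular rate (deg/s)
    mag: tuple     # three-axis geomagnetic reading (uT)

def parse_packets(packets):
    """Terminal-side parsing (S33): keep the most recent reading per joint."""
    pose = {}
    for p in packets:
        pose[p.joint] = {"accel": p.accel, "gyro": p.gyro, "mag": p.mag}
    return pose

packets = [
    SensorPacket("left_elbow", (0.0, 9.8, 0.0), (0.0, 0.0, 5.0), (20.0, 0.0, 40.0)),
    SensorPacket("right_knee", (0.1, 9.7, 0.0), (1.0, 0.0, 0.0), (21.0, 0.0, 39.0)),
]
print(sorted(parse_packets(packets)))  # ['left_elbow', 'right_knee']
```

A production system would fuse the three sensor types into joint orientations (e.g. with a complementary or Kalman filter) before rendering; this sketch only shows the collection and parsing structure the steps describe.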
The specific-information trigger step includes: when the user reaches a specific region, the virtual reality terminal triggers a specific message, where reaching the specific region is judged either by position determination or by the presence of preset specified information in an image frame presented by the virtual reality terminal.
The virtual reality terminal is a virtual reality helmet or a mobile terminal.
The positioning step also includes an off-line training phase:
S201: discretize the area to be positioned and take N uniformly distributed positions in it as reference points;
S202: scan the WiFi signal at each reference point of step S201 and record the received signal strength indicator (RSSI) of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate the RSSI mean, variance and min-max interval of each AP at this reference point, and save these parameters in the database together with the SSID of the corresponding AP;
S204: perform steps S202 and S203 for all reference points until every reference point has been trained, thereby establishing a complete RSSI distribution map of the area to be positioned.
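The off-line training phase S201–S204 amounts to building the fingerprint database. A minimal sketch, assuming the scan results are already grouped per reference point and SSID (the patent does not fix a storage format, so the dictionary layout below is an assumption):

```python
import statistics

def train_fingerprints(scans):
    """Build the RSSI distribution map (S203 applied to every point, S204):
    for each reference point and AP, store the mean, variance and
    min-max interval of the recorded RSSI samples."""
    db = {}
    for rp, per_ap in scans.items():
        db[rp] = {}
        for ssid, samples in per_ap.items():
            db[rp][ssid] = {
                "mean": statistics.fmean(samples),
                "var": statistics.pvariance(samples),
                "interval": (min(samples), max(samples)),
            }
    return db

# One reference point, two APs, a few RSSI samples each (illustrative data).
scans = {"rp1": {"ap1": [-55, -53, -57], "ap2": [-70, -72, -68]}}
db = train_fingerprints(scans)
print(db["rp1"]["ap1"]["interval"])  # (-57, -53)
```

The stored per-AP intervals are exactly what the on-line step S24 tests against, and the stored means feed the RSSI error vectors of step S25.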
The three-dimensionally visualized composite space is a three-dimensional view of a building.
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
Capturing the real-scene information of the user's surroundings in step S121 means capturing time-series frame data of images of the user's surroundings; the computing module extracts real-scene features from the real-scene information by performing pattern recognition analysis on the time-series frame data.
The real-scene sensing module includes one or more of: a depth camera sensor, a combined unit of a depth camera sensor and an RGB image sensor, an ultrasonic positioning sensing module, a thermal imaging positioning sensing module, and an electromagnetic positioning sensing module.
The sensing assembly includes one or more of a three-axis acceleration sensor, a three-axis angular rate sensor and a three-axis geomagnetic sensor.
The specific-information trigger step includes the following sub-steps:
S41: judge whether the current game image frame contains the specified information of a target object;
S42: if the current game image frame contains the specified information of the target object, obtain its display position;
S43: add the specified animation based on the display position of the specified information in the current game image frame.
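Sub-steps S41–S43 can be sketched as a single check-locate-attach routine. The patent does not define how "specified information" is detected in a frame, so here a frame is modeled, purely as an assumption, as a dictionary from detected labels to screen positions:

```python
def trigger_animation(frame, target_label, animations):
    """S41: check whether the frame contains the specified information;
    S42: obtain its display position;
    S43: attach the specified animation at that position."""
    if target_label not in frame:                               # S41
        return None
    x, y = frame[target_label]                                  # S42
    animations.append({"name": "npc_greeting", "pos": (x, y)})  # S43
    return (x, y)

animations = []
frame = {"virtual_npc": (320, 180), "tree": (100, 400)}
pos = trigger_animation(frame, "virtual_npc", animations)
print(pos, len(animations))  # (320, 180) 1
```

The animation name `npc_greeting` is illustrative; in the mall embodiment below this would be the animation played when the user finds the virtual NPC.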
The beneficial effects of the invention are as follows:
The invention associates the virtual world with the real world and displays the user's position in real time on a mini-map of the virtual world; at the same time, the user can observe his own actions in real time.
Specifically, buildings of the real world are displayed as three-dimensional views in a mini-map linked to positioning, which is vivid and intuitive. Real-scene sensing technology and corresponding computation are used to understand and analyze the real environment and to map selected features of the real environment into the virtual scene presented to the user, thereby improving the user experience. Moreover, the positioning scheme locates moving targets (people, equipment) and displays their three-dimensional position, providing coordinate estimates for location-based applications of the virtual reality terminal with high precision and low latency (the latency can be set indirectly through the scan period). In addition, the invention includes a specific-information trigger: specific information can be triggered when the user moves into a particular range.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention.
Detailed description of the invention
The technical solution of the present invention is described in further detail below with reference to the accompanying drawing.
As shown in Fig. 1, a game video processing method based on virtual reality technology comprises a scene mapping step, a positioning step, a motion mode mapping step and a specific-information trigger step. The scene mapping step presents the virtual scene and the region around the user in virtual reality on a virtual reality terminal, and includes a first scene mapping sub-step, in which virtual network elements are displayed in a geographic information system to form a composite space, and a second scene mapping sub-step, in which real entity objects are mapped into the virtual scene surrounding the user.
The present embodiment is applied to a shopping mall campaign: an event is held in a mall, virtual reality is used, and the user is required to find a specific object at a specific location — for example, a virtual NPC — by the method of the invention.
First, the user obtains the first scene mapping, i.e. the shape and floors of the whole mall and the specific position of the virtual NPC.
S111: perform GIS conversion of the network elements, a network element being a virtual object that does not exist in reality; in this embodiment the network element is the virtual NPC;
S112: render the composite space in three-dimensional visualization, i.e. obtain the shape and floors of the whole mall, possibly including some of the terrain outside the mall;
S113: the virtual reality terminal presents the three-dimensionally visualized shape and floors of the whole mall together with the position of the virtual NPC in the mall; in this embodiment this is realized as a mini-map (i.e. it occupies one corner of the picture in the virtual reality terminal).
The viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
Then the user obtains the second scene mapping, i.e. the virtual reality information of the surroundings.
S121: capture real-scene information of the user's surroundings with the real-scene sensing module;
S122: the computing module extracts real-scene features from the real-scene information, maps them, on the basis of preset mapping relations, to features for building the virtual scene, and constructs virtual reality scene information from those features;
S123: the virtual reality terminal presents the virtual reality scene information; in this embodiment, everything in the picture except the mini-map portion is rendered in the form of virtual animation.
Here, capturing the real-scene information of the user's surroundings in step S121 means capturing time-series frame data of images of the user's surroundings, and the computing module extracts real-scene features by performing pattern recognition analysis on the time-series frame data.
Next, the user positions himself by performing the positioning steps S21 to S26 described above: WiFi signal data are collected, the mean RSSI of each AP is compared with the reference-point intervals in the database, and the position estimate is computed by intersection of the decision sets and, where needed, the weighted k-nearest-neighbor algorithm. The estimated position is mapped into the three-dimensionally visualized composite space, so the user sees his own location displayed in real time on the mini-map.
The database used here is built beforehand in the off-line training phase (steps S201 to S204) described above, which establishes the complete RSSI distribution map of the area to be positioned.
In addition, the user's motion must be reflected in the composite space in real time:
S31: place a plurality of sensing assemblies, associated with the virtual reality terminal, at the joints of the person;
S32: each sensing assembly sends its information to the virtual reality terminal in real time;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information.
The sensing assembly includes one or more of a three-axis acceleration sensor, a three-axis angular rate sensor and a three-axis geomagnetic sensor.
The user's actions are now reflected in the virtual reality scene information.
Once all of the above is complete, the user can start moving toward the virtual NPC.
Finally, when the user comes close to the location of the virtual NPC, an animation is played.
When the user reaches a specific region, the virtual reality terminal triggers a specific message, where reaching the specific region is judged either by position determination or by the presence of preset specified information in an image frame presented by the virtual reality terminal.
The specific-information trigger step includes the following sub-steps:
S41: judge whether the current game image frame contains the specified information of the target object;
S42: if the current game image frame contains the specified information of the target object, obtain its display position;
S43: add the specified animation based on the display position of the specified information in the current game image frame.
In this embodiment, the virtual reality terminal is a virtual reality helmet or a mobile terminal; the choice depends on the merchant's cost considerations.
If a virtual reality helmet is used, dedicated equipment must be purchased, but the effect is better: the user puts on the helmet to search for the virtual NPC. This approach is suitable when the number of participants is small.
If a mobile terminal such as a mobile phone or tablet computer is used, corresponding software must be installed; this is convenient and quick, but the effect is inferior to that of the helmet. This approach is suitable when the number of participants is large.

Claims (9)

1. A game video processing method based on virtual reality technology, characterized in that it includes a scene mapping step, a positioning step, a motion mode mapping step and a specific-information triggering step; the scene mapping step is used to present the virtual scene and the region around the user as virtual reality in the virtual reality terminal, and includes a first scene mapping sub-step for displaying virtual network elements in a geographic information system (GIS) to form a composite space, and a second scene mapping sub-step for mapping the real entity objects of the surroundings into the virtual scene;
The geographic information system includes an electronic three-dimensional map, and the first scene mapping sub-step includes the following sub-steps:
S111: GIS-enable the network elements, where a network element is a virtual object that does not exist in reality;
S112: perform three-dimensional visualization of the composite space;
S113: present the three-dimensionally visualized composite space and the positions of the virtual objects in the virtual reality terminal;
The second scene mapping sub-step includes the following sub-steps:
S121: capture the reality scene information of the user's surroundings through a reality scene sensing module;
S122: a computing module extracts reality scene features from the reality scene information, maps them, based on preset mapping relations, to features for building the virtual scene, and constructs virtual reality scene information from those features;
S123: present the virtual reality scene information in the virtual reality terminal;
The positioning step includes:
S21: initialize the indoor reference points and load the reference point information into a database;
S22: set the queue and filter parameters, and collect WIFI signal data into the queue;
S23: using the collected data queue, calculate the RSSI average corresponding to each AP at the current location;
S24: traverse all reference points and, according to whether the RSSI average calculated in step S23 for the corresponding AP falls within the RSSI interval of a given reference point, judge whether that reference point belongs to the judgement set of the corresponding AP;
S25: take the intersection of the judgement sets of all APs:
(1) if the intersection contains only one reference point, output the coordinates of this reference point as the estimate of the algorithm, and terminate;
(2) if the intersection contains more than one reference point, calculate the RSSI error vector, sort the reference points in the intersection by error, select the k points with the smallest error, calculate the estimated result with a weighted k-nearest-neighbor algorithm, and terminate;
(3) if the intersection is an empty set, calculate the center of each judgement set and take the center of these centers as the global center, use the Euclidean distance to exclude the judgement set whose center is farthest from the global center, and apply the operations of sub-steps (1), (2) and (3) of step S25 to the remaining judgement sets until an estimated result is obtained, then terminate; if the last layer is reached and still no result is obtained, perform sub-step (4);
(4) if sub-step (3) has reached the last layer and the intersection is still an empty set, use the error distances between the current RSSI averages and the reference point RSSI averages and, according to the minimum-RSSI-error principle, calculate the estimated result with the weighted k-nearest-neighbor algorithm;
S26: map the location information onto the three-dimensionally visualized composite space, and display the current location information in the composite space;
The motion mode mapping step includes the following sub-steps:
S31: arrange, at the joints of the human body, multiple sensing assemblies associated with the virtual reality terminal;
S32: each sensing assembly sends its information in real time to the virtual reality terminal;
S33: the virtual reality terminal parses the received information and presents it in the virtual reality scene information;
The specific-information triggering step includes: when the user arrives at a specific region, the virtual reality terminal triggers a particular message, where arrival at the specific region is judged either by location determination or by the presence of preset designated information in a picture frame captured by the virtual reality terminal.
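As a non-claim illustration, the online positioning flow of steps S23 to S25 can be sketched as below. The reference-point database, AP names and interval test are assumptions for the example; the claim fixes the algorithm (judgement sets, intersection, weighted k-nearest-neighbor fallback) but not the data structures.

```python
# Hedged sketch of S23-S25: per-AP RSSI averages, judgement sets,
# intersection, and a weighted k-NN fallback when no unique point remains.
import math
from statistics import mean

# Offline database (assumed layout): reference point -> AP -> (mean, interval)
REF_DB = {
    (0.0, 0.0): {"ap1": (-40.0, (-45.0, -35.0)), "ap2": (-60.0, (-65.0, -55.0))},
    (5.0, 0.0): {"ap1": (-55.0, (-60.0, -50.0)), "ap2": (-50.0, (-55.0, -45.0))},
    (0.0, 5.0): {"ap1": (-65.0, (-70.0, -60.0)), "ap2": (-42.0, (-47.0, -37.0))},
}


def judge_sets(cur_avg):
    """S24: for each AP, the reference points whose RSSI interval for that AP
    contains the currently measured average."""
    sets = {}
    for ap, avg in cur_avg.items():
        sets[ap] = {rp for rp, aps in REF_DB.items()
                    if ap in aps and aps[ap][1][0] <= avg <= aps[ap][1][1]}
    return sets


def weighted_knn(cur_avg, candidates, k=2):
    """S25(2)/(4): average the k lowest-error reference points, weighted by
    the inverse of their RSSI error."""
    def err(rp):
        return math.sqrt(sum((cur_avg[ap] - REF_DB[rp][ap][0]) ** 2
                             for ap in cur_avg if ap in REF_DB[rp]))
    best = sorted(candidates, key=err)[:k]
    weights = [1.0 / (err(rp) + 1e-9) for rp in best]
    total = sum(weights)
    x = sum(w * rp[0] for w, rp in zip(weights, best)) / total
    y = sum(w * rp[1] for w, rp in zip(weights, best)) / total
    return (x, y)


def estimate(samples):
    """S23 + S25: average the queued samples per AP, intersect the judgement
    sets, and fall back to weighted k-NN over all points if the set is empty
    (the claim's recursive set-pruning in S25(3) is elided here)."""
    cur_avg = {ap: mean(vals) for ap, vals in samples.items()}
    sets = judge_sets(cur_avg)
    common = set.intersection(*sets.values()) if sets else set()
    if len(common) == 1:                        # S25(1): unique point
        return next(iter(common))
    if len(common) > 1:                         # S25(2): weighted k-NN
        return weighted_knn(cur_avg, common)
    return weighted_knn(cur_avg, set(REF_DB))   # S25(4): error-based fallback
```

Note that sub-step S25(3), which prunes the farthest judgement set by Euclidean distance and re-intersects, is omitted here for brevity; this sketch jumps straight from an empty intersection to the S25(4) fallback.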
2. The game video processing method based on virtual reality technology according to claim 1, characterized in that the virtual reality terminal is a virtual reality helmet or a mobile terminal.
3. The game video processing method based on virtual reality technology according to claim 1, characterized in that the positioning step further includes an off-line training phase:
S201: discretize the area to be positioned, taking N positions uniformly in the area as reference points;
S202: scan the WIFI signal at each reference point described in step S201, recording the received signal strength indicator (RSSI) values of each AP over a continuous period of time;
S203: process the RSSI vectors obtained in step S202, calculate for each AP the RSSI mean, variance and min-max interval at this reference point, and save these parameters in the database together with the SSID of the corresponding AP;
S204: perform the operations of steps S202 and S203 for all reference points until every reference point has been trained, thereby establishing the complete RSSI distribution map of the area to be positioned.
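The off-line phase of claim 3 reduces repeated scans to per-AP statistics. A minimal sketch, assuming each scan is a dict of SSID to RSSI (the patent does not fix a storage format, and the function names are illustrative):

```python
# Hedged sketch of S202-S204: turn repeated RSSI scans at each reference
# point into the stored fingerprint (mean, variance, min-max interval per AP).
from statistics import mean, pvariance


def train_reference_point(scans):
    """S203: per-AP mean, variance and interval from one point's scans."""
    by_ap = {}
    for scan in scans:                      # each scan: {SSID: RSSI}
        for ssid, rssi in scan.items():
            by_ap.setdefault(ssid, []).append(rssi)
    return {ssid: {"mean": mean(values),
                   "var": pvariance(values),
                   "interval": (min(values), max(values))}
            for ssid, values in by_ap.items()}


def build_radio_map(scans_per_point):
    """S204: repeat for every reference point to obtain the complete
    RSSI distribution map of the area to be positioned."""
    return {rp: train_reference_point(scans)
            for rp, scans in scans_per_point.items()}
```

The resulting map is exactly the per-reference-point database that the online step S24 consults when forming judgement sets.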
4. The game video processing method based on virtual reality technology according to claim 1, characterized in that the three-dimensionally visualized composite space is a three-dimensional view of a building.
5. The game video processing method based on virtual reality technology according to claim 1, characterized in that the viewing angle at which the virtual reality terminal presents the three-dimensionally visualized composite space is adjustable.
6. The game video processing method based on virtual reality technology according to claim 1, characterized in that capturing the reality scene information of the user's surroundings in step S121 means capturing time-series frame data of images of the user's surroundings; and the computing module extracting reality scene features from the reality scene information means performing pattern recognition analysis on the time-series frame data to extract the reality scene features.
7. The game video processing method based on virtual reality technology according to claim 1, characterized in that the reality scene sensing module includes one or more of: a depth camera sensor, a combined unit of a depth camera sensor and an RGB image sensor, an ultrasonic positioning sensing module, a thermal imaging positioning sensing module, and an electromagnetic positioning sensing module.
8. The game video processing method based on virtual reality technology according to claim 1, characterized in that the sensing assembly includes one or more of a three-axis acceleration sensor, a three-axis angular velocity sensor, and a three-axis geomagnetic sensor.
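For illustration only (outside the claim language): the three sensor triads of claim 8 amount to nine scalar channels per joint sample, which step S32 streams to the terminal. The 9-float wire format below is an assumption, not something the patent defines:

```python
# Hypothetical packet layout for one sensing-assembly sample of claim 8:
# 3-axis acceleration, 3-axis angular velocity, 3-axis geomagnetic field.
import struct

PACKET_FMT = "<9f"  # little-endian: ax, ay, az, gx, gy, gz, mx, my, mz


def pack_sample(accel, gyro, mag):
    """S32 side: serialize one joint sample for real-time transmission."""
    return struct.pack(PACKET_FMT, *accel, *gyro, *mag)


def unpack_sample(payload):
    """S33 side: the terminal parses the received information."""
    vals = struct.unpack(PACKET_FMT, payload)
    return {"accel": vals[0:3], "gyro": vals[3:6], "mag": vals[6:9]}
```

Each packet is 36 bytes; the terminal would fuse these per-joint samples into the pose it presents in the virtual reality scene (step S33).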
9. The game video processing method based on virtual reality technology according to claim 1, characterized in that the specific-information triggering step includes the following sub-steps:
S41: judge whether the current game picture frame contains the designated information of a target person;
S42: if the current game picture frame contains the designated information of the target person, obtain the display position of the designated information;
S43: based on the display position of the designated information in the current game picture frame, add a designated animation.
CN201610150148.2A 2016-03-16 2016-03-16 Game video processing method based on virtual reality technology Pending CN105797379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610150148.2A CN105797379A (en) 2016-03-16 2016-03-16 Game video processing method based on virtual reality technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610150148.2A CN105797379A (en) 2016-03-16 2016-03-16 Game video processing method based on virtual reality technology

Publications (1)

Publication Number Publication Date
CN105797379A true CN105797379A (en) 2016-07-27

Family

ID=56468571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610150148.2A Pending CN105797379A (en) 2016-03-16 2016-03-16 Game video processing method based on virtual reality technology

Country Status (1)

Country Link
CN (1) CN105797379A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106693365A (en) * 2017-02-06 2017-05-24 福州市马尾区朱雀网络信息技术有限公司 Method and device for rapidly transferring game object
CN110719532A (en) * 2018-02-23 2020-01-21 索尼互动娱乐欧洲有限公司 Apparatus and method for mapping virtual environment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102681661A (en) * 2011-01-31 2012-09-19 微软公司 Using a three-dimensional environment model in gameplay
CN103384358A (en) * 2013-06-25 2013-11-06 云南大学 Indoor positioning method based on virtual reality and WIFI space field strength
CN103885788A (en) * 2014-04-14 2014-06-25 焦点科技股份有限公司 Dynamic WEB 3D virtual reality scene construction method and system based on model componentization
CN104035760A (en) * 2014-03-04 2014-09-10 苏州天魂网络科技有限公司 System capable of realizing immersive virtual reality over mobile platforms
CN104063466A (en) * 2014-06-27 2014-09-24 深圳先进技术研究院 Virtuality-reality integrated three-dimensional display method and virtuality-reality integrated three-dimensional display system
CN104536763A (en) * 2015-01-08 2015-04-22 炫彩互动网络科技有限公司 Method for implementing online game simulating reality


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106693365A (en) * 2017-02-06 2017-05-24 福州市马尾区朱雀网络信息技术有限公司 Method and device for rapidly transferring game object
CN106693365B (en) * 2017-02-06 2018-08-21 福州市马尾区朱雀网络信息技术有限公司 A kind of quick transfer approach of game object and device
CN110719532A (en) * 2018-02-23 2020-01-21 索尼互动娱乐欧洲有限公司 Apparatus and method for mapping virtual environment
CN110719532B (en) * 2018-02-23 2023-10-31 索尼互动娱乐欧洲有限公司 Apparatus and method for mapping virtual environment

Similar Documents

Publication Publication Date Title
CN105608746B (en) A method of reality is subjected to Virtual Realization
CN105807931B (en) A kind of implementation method of virtual reality
CN105824416B (en) A method of by virtual reality technology in conjunction with cloud service technology
CN105797378A (en) Game video realizing method based on virtual reality technology
EP2579128B1 (en) Portable device, virtual reality system and method
CN105824417B (en) human-object combination method adopting virtual reality technology
CN104699247B (en) A kind of virtual reality interactive system and method based on machine vision
US9690376B2 (en) Wireless wrist computing and control device and method for 3D imaging, mapping, networking and interfacing
US9600067B2 (en) System and method for generating a mixed reality environment
CN105094335B (en) Situation extracting method, object positioning method and its system
US20150070274A1 (en) Methods and systems for determining 6dof location and orientation of head-mounted display and associated user movements
CN109671118A (en) A kind of more people's exchange methods of virtual reality, apparatus and system
WO2015180497A1 (en) Motion collection and feedback method and system based on stereoscopic vision
CN103635891A (en) Massive simultaneous remote digital presence world
CN109358754B (en) Mixed reality head-mounted display system
CN110969905A (en) Remote teaching interaction and teaching aid interaction system for mixed reality and interaction method thereof
US11156830B2 (en) Co-located pose estimation in a shared artificial reality environment
JP2023126474A (en) Systems and methods for augmented reality
CN105797379A (en) Game video processing method based on virtual reality technology
CN109445596A (en) A kind of integral type mixed reality wears display system
JP6695997B2 (en) Information processing equipment
KR101905272B1 (en) Apparatus for user direction recognition based on beacon cooperated with experiential type content providing apparatus and method thereof
KR101060998B1 (en) User Location Based Networking Virtual Space Simulator System
JP6739539B2 (en) Information processing equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160727

RJ01 Rejection of invention patent application after publication