WO2019037515A1 - Information interaction method based on a virtual space scene, computer device, and computer-readable storage medium - Google Patents

Information interaction method based on a virtual space scene, computer device, and computer-readable storage medium

Info

Publication number
WO2019037515A1
WO2019037515A1 · PCT/CN2018/090437 · CN2018090437W
Authority
WO
WIPO (PCT)
Prior art keywords
information
real
current terminal
obtaining
location
Prior art date
Application number
PCT/CN2018/090437
Other languages
English (en)
French (fr)
Inventor
郭金辉
李斌
邓智文
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2019037515A1
Priority to US16/780,742 (US11195332B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00 Aspects of interface with display user

Definitions

  • the present application relates to information interaction technologies, and in particular, to an information interaction method based on a virtual space scenario, a computer device, and a computer readable storage medium.
  • VR Virtual Reality
  • AR Augmented Reality
  • VR technology uses computer simulation to generate a three-dimensional virtual world, providing the user with simulated visual, auditory, tactile, and other sensory input, so that the user feels immersed in the situation.
  • The scenes and characters seen in VR are all virtual; the user's consciousness is transported into a virtual world.
  • AR is a technique that calculates the position and angle of a camera image in real time and adds corresponding virtual images, putting the virtual world onto the screen so that it can interact with the real world.
  • an information interaction method based on a virtual space scenario, a computer device, and a computer storage medium are provided.
  • An information interaction method based on a virtual space scenario is implemented on a computer device, where the computer device includes a memory and a processor, and the method includes:
  • a computer device comprising a memory and a processor, the memory storing computer readable instructions, the computer readable instructions being executed by the processor such that the processor performs the following steps:
  • a non-transitory computer readable storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • FIG. 1 is a schematic diagram of interaction between hardware entities in an embodiment of the present application.
  • FIG. 2 is a flowchart of an information interaction method based on a virtual space scenario in an embodiment of the present application.
  • FIG. 3 is a diagram showing an example of the real world in an embodiment of the present application.
  • FIG. 4 is a diagram showing an example of a virtual space obtained by simulating the real world in an embodiment of the present application.
  • FIG. 5 is a diagram showing an example of obtaining a virtual space from the real world by perspective, mapping, or projection in an embodiment of the present application.
  • FIGS. 6-7 are schematic diagrams of group chat in a virtual space in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of information interaction in a scenario to which Embodiment 1 of the present application is applied.
  • FIG. 9 is a schematic structural diagram of a device according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of hardware entities of a server according to an embodiment of the present application.
  • The terms first, second, etc. are used herein to describe various elements (or thresholds or applications or instructions or operations), but these elements (or thresholds or applications or instructions or operations) should not be limited by these terms. The terms are only used to distinguish one element (or threshold or application or instruction or operation) from another.
  • For example, a first operation may be referred to as a second operation, and a second operation may also be referred to as a first operation; the first operation and the second operation are both operations, but they are not the same operation.
  • The steps in the embodiments of the present application are not necessarily processed in the order described.
  • The steps may be selectively reordered according to requirements, and steps may be deleted from or added to an embodiment.
  • The described order of steps is only one optional combination and does not represent the only combination of steps in the embodiments of the present application.
  • The order of the steps in the embodiments is not to be construed as limiting the present application.
  • the intelligent terminal (such as a mobile terminal) of the embodiment of the present application can be implemented in various forms.
  • The mobile terminal described in the embodiments of the present application may include, for example, a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet (PAD), a portable multimedia player (PMP), a navigation device, and the like, as well as fixed terminals such as digital TVs and desktop computers.
  • In the following description, it is assumed that the terminal is a mobile terminal; however, those skilled in the art will appreciate that configurations according to the embodiments of the present application can also be applied to fixed-type terminals, apart from elements that exist specifically for mobile purposes.
  • FIG. 1 is a schematic diagram of hardware entities of each party performing information interaction in the embodiment of the present application.
  • FIG. 1 includes: a terminal device 1 and a server 2.
  • The terminal device 1 is composed of the terminal devices 11-14.
  • The terminal devices include a mobile phone, a desktop computer, a PC, an all-in-one machine, and the like. In the constructed virtual scenario, the mobile phone is the most practical.
  • The second world is an application scenario of the present application, in which friends can be added through WeChat or the like, and strangers can be added through the "exploration module".
  • According to the function of the exploration module, the virtual space drawn from real geographical location data is used for information interaction between users in the second world.
  • The terminal device reports its real-time location to the server; the server stores the real geographical location, draws a map in the constructed three-dimensional space according to that location, simulates the real environment around the terminal's current position, and obtains a virtual space for information interaction, so that end users can interact in the virtual space through sessions, games, and the like.
  • The data related to the virtual space can then be pushed to the terminal device for display.
  • The method includes: 1) acquiring real geographical location data — the first step is "real": map data for the range around the user is obtained according to the user's current location (the current latitude and longitude obtained by the mobile phone's GPS positioning); the map data includes basic data (such as building information) and more detailed auxiliary data (roads/streets, rivers, bus stops, etc.); 2) drawing a map in the virtual space based on the real geographical location data — the second step is "virtual". Steps 1) and 2) together achieve the effect of combining the virtual and the real.
  • In step 2), the map is drawn from reusable preset models and adjusted afterwards (for example, adjusting the height/width of a building model, the length/width of a road model, the dimensions of a bus stop, etc.).
  • The real map data can be dynamically loaded and the map drawn in the virtual space for the user (represented by the first end user); other users (represented by the second end user) are drawn on the map according to their respective locations.
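The two-step "real → virtual" flow described above could be sketched roughly as follows. This is a minimal illustration, not the patent's implementation: `fetch_map_data` is a stand-in for a real map API and returns canned data here.

```python
# Hypothetical sketch: step 1 pulls real map data around the user's GPS fix;
# step 2 instantiates reusable preset models from it to build the scene.

def fetch_map_data(lat, lon, radius_m=500):
    """Step 1 ("real"): base data (buildings) plus auxiliary data (roads etc.).
    A real implementation would call a map service; canned data stands in."""
    return {
        "base": [{"type": "building", "lat": lat + 0.001, "lon": lon,
                  "width": 20, "height": 60}],
        "aux":  [{"type": "road", "lat": lat, "lon": lon - 0.001, "length": 300}],
    }

def build_virtual_space(lat, lon):
    """Step 2 ("virtual"): one scene element per map element, each starting
    from a reusable preset model and then resized to match the real data."""
    data = fetch_map_data(lat, lon)
    scene = []
    for element in data["base"] + data["aux"]:
        scene.append({"preset": element["type"], **element})
    return scene

scene = build_virtual_space(22.54, 113.95)
```

In this sketch the "preset" key plays the role of the reusable model: every building shares one preset and only its per-instance attributes (position, width, height) differ.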
  • The system can also randomly assign the user a location in another province, and needs to consider load balancing when assigning a room to the user.
  • The processing logic 10 executed on the server side is as shown in FIG. 1.
  • The processing logic 10 includes: S1, obtaining the map data for the range of the current terminal (the first terminal and/or the second terminal) based on location information of the first terminal and/or the second terminal (current real-time location information, and/or target location information obtained by randomly switching the terminal to another province for load-balancing reasons);
  • S2, parsing the obtained map data (in the several specific implementations described below, the map data is referred to as the first, second, third, and fourth map data; "first" to "fourth" do not indicate the chronological order of acquisition but merely distinguish the implementations, and the map data obtained may be the same or different), drawing the map in the constructed three-dimensional space, and simulating the real environment of the terminal's current location in that three-dimensional space;
  • S3, collecting operations triggered by the plurality of terminals.
  • FIG. 1 is only a system architecture example of the embodiment of the present application.
  • The embodiment of the present application is not limited to the system architecture described in FIG. 1; the various method embodiments of the present application are proposed based on that architecture.
  • The method includes: acquiring location information of the current terminal (101); obtaining the map data for the range of the current terminal according to that location information (102); drawing a map in the constructed three-dimensional space according to the map data to obtain a drawing result (103); simulating, in the three-dimensional space and according to the drawing result, the real environment of the geographical location of the current terminal, thereby obtaining a virtual space for information interaction (104); and collecting operations triggered by at least two terminals in the virtual space and controlling the information interaction processing of the at least two terminals according to the generated operation instruction (105).
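A highly simplified walk-through of steps 101-105 could look like the following. Every structure here (the terminal dicts, the 500 m radius, the canned model list) is an illustrative assumption, not the patent's actual data model.

```python
# Toy sketch of steps 101-105 of the flowchart; all structures illustrative.

def run_pipeline(terminals):
    # 101: acquire location information of each terminal
    locations = {t["id"]: t["gps"] for t in terminals}
    # 102: obtain map data for the range of each terminal
    map_data = {tid: {"center": loc, "radius_m": 500}
                for tid, loc in locations.items()}
    # 103: draw a map in the constructed three-dimensional space
    drawing = {tid: {"models": ["building", "road"], "map": md}
               for tid, md in map_data.items()}
    # 104: simulate the real environment -> one shared virtual space
    virtual_space = {"terminals": locations, "drawing": drawing}
    # 105: collect triggered operations and control the interaction
    ops = [{"from": t["id"], "op": "chat"} for t in terminals]
    return virtual_space, ops

space, ops = run_pipeline([{"id": "A1", "gps": (22.54, 113.95)},
                           {"id": "A2", "gps": (22.55, 113.96)}])
```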
  • A virtual space is constructed for the user as a simulation of the real environment, so that through the virtual space the user can carry out information interaction such as instant messaging, real-time conversation, and games, and obtain the result of that interaction as if interacting in the real environment.
  • A map drawn from real geographical location data is added to the virtual three-dimensional space. Besides simulating the geographic location of the current terminal (represented by the first terminal), the geographic locations of multiple other terminals (represented by the second terminal) are simulated as well, obtaining a virtual space for information interaction between at least two terminals. The virtual space is a simulation of the real environment; that is, the embodiment of the present application synchronizes the virtual space with real-world geographic information, so that multiple end users have the experience of interacting with each other in a real environment.
  • The virtual space may be a virtual community composed of a plurality of end users, based on panoramic real images and geographic information, enabling users to interact with the environment in the virtual community and with each other.
  • Information interaction such as instant messaging, real-time conversation, and games can be realized.
  • The virtual community can include virtual ground, on which various three-dimensional entities can be added, such as virtual buildings, virtual roads/streets, virtual bridges, and virtual bus stops; anything in the real world can be presented in the virtual community.
  • Because this virtual community is different from the real world (the first world), it can be called the second world.
  • FIG. 3 shows a schematic representation of buildings, rivers, roads, trees, etc. in the first world.
  • FIG. 4 is a schematic diagram of a virtual space obtained by simulating, in the second world, the buildings, rivers, roads, trees, etc. that exist in the first world. Latitude and longitude are represented by horizontal and vertical dashed lines (for illustration only). For the end users A1-A5 shown, each user's position in the virtual space, and their relative distances to nearby 3D building objects, roads/blocks, rivers, trees, etc., are the same as that user's position and relative distances in the first world.
  • Figure 5 is a schematic diagram showing the relationship between the first world and the second world.
  • The second world is simulated from the real map data of the first world; it can be obtained from the real map data through a mapping, perspective, or projection relationship with the first world, together with preset model files, model instances, and the like.
  • In the information interaction state, a plurality of end users can hold a one-to-one conversation, or a group chat among multiple users, as shown in FIGS. 6-7.
  • Voice chat, text chat, emoticons, gesture/body images, and the like can be exchanged and displayed in the display area 1 of each terminal.
  • The second world is a virtual space (or virtual community) built using VR technology that can be used for socializing and sharing among multiple end users. It generates each user's own virtual 3D image based on the user's photo, and offers rich social gameplay, including real-time chat with a single person or a group.
  • The end users performing real-time chat may form a first information group made up of friends acquired from a social application such as WeChat/QQ, and a group for real-time chat is constructed according to the first information group.
  • The end users performing real-time chat may also form a second information group made up of strangers who have been added, and a group for real-time chat is constructed according to the second information group. Adding strangers lets users get to know new friends. For example, based on the location information of the current terminal (such as the first terminal), the map data for the range of the current terminal is obtained, and other terminals (such as the second terminal) within that range that meet the geographic location requirement — i.e. those closer to the current terminal — are added to the second information group. Further, it may be determined whether the current terminal (such as the first terminal) and the other terminals (such as the second terminal) share the same feature; for example, both may generate their own virtual 3D image in the second world based on the end user's photo.
  • Strangers are added through the exploration channel. The exploration channel can be divided according to the different provinces of the country, and the explored room is randomly created using the real-time geographic location, so as to obtain the map information of the current terminal's location, including building information, roads, rivers, trees, bus stops, etc.
  • Corresponding model files are preset according to the map information types, including models of buildings, roads, rivers, trees, etc., and the models are drawn at the size and position identified from the map data. Afterwards, as the character corresponding to the current terminal moves, the new map data generated by the position change can be loaded in real time, so the exploration map can be expanded indefinitely.
  • The user also gets to know more real friends based on location information in this infinitely extensible virtual space scene, and enters a session state in the virtual scene. Not only are one-to-one conversations supported, but multiple users can also enter a given group at the same time (such as the first information group, the second information group, or a third information group formed by the intersection or union of the first and second information groups) to enter a group chat state.
  • The latitude and longitude of the current terminal are obtained from GPS; this latitude and longitude identifies the location information of the current terminal, and the data radiating out to a predetermined range centred on that latitude and longitude is taken as the map data for the range of the current terminal. The map data for the range of the current terminal is thus obtained according to the location information of the current terminal.
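The "data radiating out to a predetermined range" around a GPS fix can be approximated by a latitude/longitude bounding box. The sketch below uses the standard small-distance approximation (roughly 111,320 m per degree of latitude); the function name and 500 m radius are illustrative choices, not taken from the patent.

```python
import math

def map_range(lat, lon, radius_m=500.0):
    """Approximate the bounding box radiated radius_m around a GPS fix.
    Degrees of longitude shrink with cos(latitude); fine for small radii."""
    dlat = radius_m / 111_320.0
    dlon = radius_m / (111_320.0 * math.cos(math.radians(lat)))
    return (lat - dlat, lat + dlat, lon - dlon, lon + dlon)

lat_min, lat_max, lon_min, lon_max = map_range(22.54, 113.95)
```

A map service would then be queried for all elements whose coordinates fall inside this box.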
  • The scheme for drawing a map in the constructed three-dimensional space according to the map data, to obtain the drawing result, includes at least one of the following combinations:
  • The first model file is obtained according to the first location information (for example, the model file corresponding to a Prefab model instance; e.g., the length, width, and height in the building information can generate a 3D building, and the length, width, and height can be adjusted afterwards).
  • The first model instance is created according to the first model file (such as a Prefab model instance; Prefab model instances correspond to different map data types — for example, the Prefab model instances for base-class data and auxiliary-class data are different).
  • The first model instance and the second model instance are drawn in the three-dimensional space according to, respectively, the latitude and longitude corresponding to the first location information and to the second location information, together with basic attributes (such as length, width, height) and auxiliary identification information (such as an identification near a building).
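The Prefab-style pattern described here — one preset per map-data type, many per-instance transforms — could be sketched as below. "Prefab" follows the Unity naming used in the text; the Python classes are illustrative stand-ins, not Unity API.

```python
# Sketch of Prefab-style reuse: one preset per map-data type, many instances,
# each carrying its own position, basic attributes, and auxiliary label.

class Prefab:
    def __init__(self, kind):
        self.kind = kind  # e.g. "building" (base class) or "road" (auxiliary)

    def instantiate(self, lat, lon, length, width, height, label=None):
        """Each instance shares the preset but has its own position
        (from latitude/longitude), basic attributes (l/w/h), and an
        optional auxiliary identification label."""
        return {"kind": self.kind, "pos": (lat, lon),
                "size": (length, width, height), "label": label}

building_prefab = Prefab("building")
road_prefab = Prefab("road")

tower = building_prefab.instantiate(22.54, 113.95, 40, 30, 120, label="Tower A")
street = road_prefab.instantiate(22.541, 113.951, 300, 12, 0)
```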
  • In a second scheme, the maps to be drawn are divided into different regions (such as provinces, municipalities, autonomous prefectures, districts, counties, etc.), and the real-time position changes of the current terminal within the current region are monitored.
  • The map is updated and expanded according to the data for the change from the first to the second real-time position. The advantage is that updating only the local data is more efficient than the first scheme; the calculation accuracy, however, may carry an error.
  • When the real-time location of the current terminal moves from the first real-time location to the second real-time location, the second map data generated by the position change is pulled and loaded in real time according to the location change parameter produced by that movement. The second map data is parsed to obtain base-class data and auxiliary-class data: the base-class data represents third location information including building information, and the auxiliary-class data represents fourth location information including roads, streets, rivers, and bus stops. A third model file is obtained according to the third location information, and a third model instance is established according to the third model file; a fourth model file is acquired according to the fourth location information, and a fourth model instance is established according to the fourth model file.
  • The third model instance and the fourth model instance are then drawn in the three-dimensional space according to, respectively, the latitude and longitude corresponding to the third and fourth location information, together with basic attributes and auxiliary identification information.
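The incremental idea — load only the map data exposed by the position change, rather than re-fetching the full range — is commonly implemented with map tiles. The sketch below is one possible reading under invented assumptions (the 0.005° tile size and one-tile radius are not from the patent):

```python
# Tile-based sketch of incremental loading: on movement, compute which tiles
# are newly needed and which can be dropped. Tile size is an assumption.

def tiles_in_range(lat, lon, radius_tiles=1, tile_size=0.005):
    """Set of (row, col) tile indices covering the range around a position."""
    r, c = int(lat / tile_size), int(lon / tile_size)
    return {(r + dr, c + dc)
            for dr in range(-radius_tiles, radius_tiles + 1)
            for dc in range(-radius_tiles, radius_tiles + 1)}

def update_map(loaded, new_pos):
    """Tiles to load and to drop after moving to new_pos."""
    wanted = tiles_in_range(*new_pos)
    return wanted - loaded, loaded - wanted

loaded = tiles_in_range(22.5412, 113.9513)          # first real-time location
to_load, to_drop = update_map(loaded, (22.5468, 113.9513))  # after moving north
```

Only `to_load` triggers a fetch, which is what makes the local update cheaper than re-drawing the whole range.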
  • The third map data is obtained as follows: the latitude and longitude of the current terminal at the second real-time position are obtained from GPS, and the data radiating out to the predetermined range from that latitude and longitude is determined as the map data for the range of the current terminal; this map data is the third map data.
  • The third map data is parsed to obtain base-class data and auxiliary-class data: the base-class data represents fifth location information including building information, and the auxiliary-class data represents sixth location information including roads, streets, rivers, and bus stops. A fifth model file is obtained according to the fifth location information, and a fifth model instance is established according to the fifth model file; a sixth model file is acquired according to the sixth location information, and a sixth model instance is established according to the sixth model file.
  • The fifth model instance and the sixth model instance are then drawn in the three-dimensional space according to, respectively, the latitude and longitude corresponding to the fifth and sixth location information, together with basic attributes and auxiliary identification information.
  • In a third scheme, the maps to be drawn are divided into different regions (such as provinces, municipalities, autonomous prefectures, districts, counties, etc.), and the current terminal is switched in real time to a specific area in another province.
  • The current terminal is randomly allocated a target location in the designated area. For example, users in the same province are randomly assigned to a room with a maximum of 50 people, where a room is an area corresponding to a latitude and longitude interval.
  • The device of the embodiment of the present application randomly assigns the user a target location, and the data radiating out to the predetermined range from the latitude and longitude corresponding to the target location is determined as the map data for the range of the current terminal; this map data is the fourth map data.
  • The fourth map data is parsed to obtain base-class data and auxiliary-class data: the base-class data represents seventh location information including building information, and the auxiliary-class data represents eighth location information including roads, streets, rivers, and bus stops. A seventh model file is acquired according to the seventh location information, and a seventh model instance is established according to the seventh model file; an eighth model file is acquired according to the eighth location information, and an eighth model instance is established according to the eighth model file. The seventh and eighth model instances are then drawn in the three-dimensional space according to, respectively, the latitude and longitude corresponding to the seventh and eighth location information, together with basic attributes and auxiliary identification information.
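The room assignment with the 50-person ceiling could be sketched as follows. The room structure, province key, and the specific latitude/longitude intervals are invented for illustration; only the per-province grouping and the 50-person cap come from the text.

```python
import random

ROOM_CAPACITY = 50  # the 50-person ceiling mentioned in the text

def assign_room(rooms, user_id, province):
    """Assign a user to a random non-full room in their province,
    opening a new room when all existing ones are full."""
    candidates = [r for r in rooms
                  if r["province"] == province
                  and len(r["members"]) < ROOM_CAPACITY]
    if candidates:
        room = random.choice(candidates)
    else:
        room = {"province": province, "members": [],
                # each room maps to a latitude/longitude interval of the map
                "lat_interval": (22.50, 22.55),
                "lon_interval": (113.90, 113.95)}
        rooms.append(room)
    room["members"].append(user_id)
    return room

rooms = []
for uid in range(120):               # 120 users arriving in one province
    assign_room(rooms, uid, "Guangdong")
```

With 120 users and a 50-person cap, this yields three rooms (50 + 50 + 20 members).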
  • In the above schemes, a plurality of model files corresponding to different location information are involved, together with a plurality of model instances respectively corresponding to those model files.
  • The data of the different map types comprises base-class data and auxiliary-class data.
  • Corresponding models are preset, including models of buildings, roads, rivers, parks, trees, etc.; model instances are built from the preset model files (such as Prefab — Prefab is a resource-reference file of Unity, by which the same object can be created from a preset body).
  • The corresponding model instances are selected according to the different map-type data (base-class data and auxiliary-class data). The basic attributes — length, width, and height — of the model instances may differ, but the same model instance can be reused: for example, a 3D object representing a building can be reused multiple times, a 3D object representing a commercial building can be reused multiple times, a 3D object representing a road/block can be reused multiple times, and so on. That is, the same object can be created from a preset body and supports repeated instantiation to save overhead.
  • A model instance may then be drawn on the map according to the latitude and longitude coordinates, basic attributes, and auxiliary identification information of the map data.
  • In the second world, strangers can be added through the exploration channel. The exploration channel can be divided according to the different provinces of the country, and the explored room is randomly created using the real-time geographic location, so as to obtain the map information of the current terminal's location, including building information, roads, rivers, trees, bus stops, etc.
  • Corresponding model files are preset according to the map information types, including models of buildings, roads, rivers, trees, etc., and the models are drawn at the size and position given by the map data.
  • As the character moves, the new map data generated by the position change can be loaded in real time, so the exploration map can be expanded indefinitely.
  • The user also gets to know more real friends based on location information in this infinitely extensible virtual space scene, and enters a session state in the virtual scene.
  • a map is drawn, and a virtual space is simulated.
  • A 3D graphics program interface may be used to create the three-dimensional environment of the virtual space, in which a scene display screen is drawn. Specifically, model files are preset according to the map information types, including models of buildings, roads, rivers, trees, etc., and the models are drawn according to the size and position in the map data, appearing as buildings, shops, roads, and so on. To better simulate and display the map data in three-dimensional space, there must be visible differences between one building and another; therefore, collected panoramic real-image data can be transmitted from the server to the 3D display platform in the terminal and texture-mapped onto the scene display screen, obtaining the virtual space based on that scene display.
  • The real geographical location is combined with the virtual preset models to generate the virtual space.
  • The real-time geographic location may be used in the second world's exploration module to obtain the map information of the current location, including building information, roads, rivers, trees, bus stops, etc.
  • Corresponding models are preset according to the types of information obtained, including models of buildings, roads, rivers, parks, trees, etc.
  • The model files are created as Prefabs; the model Prefab is selected according to the map data type and instantiated, and the model is drawn according to the latitude and longitude coordinates, basic attributes, and auxiliary identification information of the map data.
  • The method includes: 1) acquiring the latitude and longitude of the current user in real time through a location management and synchronization component, synchronizing it to the server, acquiring the location coordinates of other users, and drawing them on the map; 2) acquiring, through a map API component and according to the latitude and longitude of the current user's location, the map data at the current region zoom factor, and drawing the map from the map data returned by the map API interface; 3) drawing the map through preset Prefab model instances: corresponding model files are designed in advance for the elements drawn on the map, and using Prefab to instantiate the same resource lets the models be reused, which can greatly reduce performance consumption; 4) character collision detection, group-member area collision, and friend-profile and chat-scene management, giving users different options for establishing interactions. In this process of combining the virtual and the real, all users can be divided into different provinces according to geographical location, and users in the same province are randomly allocated to rooms with a ceiling of 50 people. The map data allows the exploration map to be expanded indefinitely; the user gets to know more real friends according to location in the scene, enters a conversation state in the virtual scene, and multiple users can enter a group chat state at the same time.
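The "character collision detection / group-member area collision" in step 4) could, in its simplest form, be a radius check between avatar positions, which then offers an interaction option. All names, coordinates, and the 10-unit radius below are invented for illustration.

```python
# Minimal sketch of avatar-area collision: entering another user's radius
# makes that user a candidate for interaction (chat, profile view, etc.).

def close_enough(pos_a, pos_b, radius=10.0):
    """Simple circle-overlap test between two avatar positions (map units)."""
    dx, dy = pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

def interaction_candidates(me, others, radius=10.0):
    """Users whose area the current user has 'collided' with."""
    return [uid for uid, pos in others.items()
            if close_enough(me, pos, radius)]

nearby = interaction_candidates((0.0, 0.0),
                                {"A1": (3.0, 4.0), "A2": (30.0, 40.0)})
```

Here A1 is 5 units away (inside the radius) and A2 is 50 units away, so only A1 becomes an interaction candidate.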
  • A virtual space can be generated by VR and AR technology, combining the real world with the virtual space, and new users can be recognized more effectively based on location. Because users are divided by province within the country, explored rooms are created randomly from real-time locations, and the new map data generated by each location change is loaded in real time as the character moves, the exploration map can be expanded infinitely. Users get to know more real friends based on their location in the scene, reducing the cost of meeting new friends and making it very easy to create social relationships. Clicking on a user shows that user's personal information and image; users can also explore groups, participate in group chats, and find friends they are interested in.
  • VR technology uses computer simulation to generate a three-dimensional virtual world, providing users with simulations of visual, auditory, tactile, and other senses, allowing users to observe objects in the three-dimensional space in a timely and unrestricted manner.
  • the computer can perform complex calculations immediately and return accurate 3D world images, producing a sense of presence, even though the scenes and characters seen are all simulated.
  • AR augmented reality technology
  • AR is a new technology that "seamlessly" integrates real-world information and virtual-world information. Entity information that is difficult to experience within a certain time and space of the real world (visual information, sound, taste, touch, and so on) is simulated and then superimposed through computer and other technologies; the virtual information is applied to the real world and perceived by the human senses, thus achieving a sensory experience beyond reality.
  • the real environment and virtual objects are superimposed in real time on the same picture or in the same space. Real-world information is displayed together with virtual information; the two kinds of information complement and superimpose on each other.
  • in visual augmented reality, users can use a helmet-mounted display so that computer graphics are combined with, and superimposed on, the real world they see around them.
  • an operation triggered by at least two terminals in the virtual space may be collected, and when the operation conforms to the collision detection policy and/or the group member area collision policy, information interaction processing is triggered. A first operation instruction is generated according to the collision detection policy; the first operation is used to control at least two terminals to perform a one-to-one interaction mode, and in response to the first operation instruction, the terminals enter a one-to-one user session state. A second operation instruction is generated according to the group member area collision policy; the second operation is used to control at least two terminals to perform a one-to-many interaction mode, and in response to the second operation instruction, the terminals enter a one-to-many group chat state.
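  • The two collision policies above can be illustrated with a small dispatch sketch. The event names, instruction fields, and state labels are hypothetical; the application does not specify an instruction format:

```python
# Hypothetical dispatch of collected operations to interaction modes:
# a character collision yields a first instruction (one-to-one session),
# a group-member-area collision yields a second instruction (group chat).

def make_instruction(operation):
    if operation["kind"] == "character_collision":
        return {"op": "first", "mode": "one_to_one",
                "terminals": operation["terminals"]}
    if operation["kind"] == "group_area_collision":
        return {"op": "second", "mode": "one_to_many",
                "terminals": operation["terminals"]}
    return None  # matches neither policy: no interaction is triggered

def apply_instruction(instr):
    if instr is None:
        return "idle"
    return "session" if instr["mode"] == "one_to_one" else "group_chat"

state = apply_instruction(make_instruction(
    {"kind": "character_collision", "terminals": ["A", "B"]}))
```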
  • the session state is entered by means of the virtual scene. Not only are one-to-one conversations supported, but many-to-many sessions are also supported: multiple users can simultaneously enter a given group (such as the first information group, the second information group, or a third information group formed by the intersection or union of the first and second information groups) to enter a group chat state.
  • specific information interaction may be triggered by character collision detection, group member area collision, friend profiles, and chat scene management, so as to facilitate different interactions between users.
  • the virtual space can be sent to at least two terminals for display.
  • This virtual space is set with a default perspective.
  • the default viewing angle may be the viewing angle at which a 3D entity (such as a virtual object) is viewed in normal mode after entering the VR mode.
  • the virtual object can be static or moving. When the virtual object is static, browsing with the default angle of view is no problem, but when the virtual object is in motion, the default perspective falls far short of the browsing requirements.
  • the terminal can adjust the viewing angle according to the current browsing requirement; that is, any terminal can send a viewing-angle control instruction to a receiving party, such as a server, a control processor in the server, or a hardware entity in a server cluster for processing the control instruction. The receiving party receives the view control instruction, generates a corresponding virtual space according to the view control instruction, and sends the corresponding virtual space to the corresponding terminal.
  • the virtual object can be invisible, can jump, and so on. To accurately capture a specific operation and have the terminal and other terminals perform response processing such as information interaction, the terminal may send a display control instruction to the receiver; the receiver generates a corresponding virtual space according to the display control instruction and then sends the corresponding virtual space to the other terminals.
  • the display control instruction controls the real-time display of the terminal in the virtual space, and the real-time display includes a combination of one or more of an avatar display, an action display, a text identification, and a stealth display, all of which can be controlled in real time.
  • a control processor in a server, a server itself, or a hardware entity in a server cluster for processing the control instruction may also collect interaction operations between any terminal and a 3D entity (such as a virtual object) in the virtual space.
  • for example, when the virtual object is a cartoon character in a cartoon-character parkour game scene, the information interaction between the terminal and the virtual object can be controlled according to the generated operation instructions (up, down, left, right movement, jump, and so on), so that the cartoon character performs the corresponding up, down, left, right movement, jump, and other operations in the preset game scene.
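  • The instruction-driven character control above can be sketched as follows. The instruction names, the grid coordinate system, and the airborne flag are hypothetical illustrations; the application does not specify a coordinate system:

```python
# Hypothetical mapping from operation instructions to cartoon-character
# movement in a preset game scene. Position is (x, y, airborne).

MOVES = {
    "up":    (0, 1),
    "down":  (0, -1),
    "left":  (-1, 0),
    "right": (1, 0),
}

def step(position, instruction):
    if instruction == "jump":
        x, y, _ = position
        return (x, y, True)   # jumping sets an airborne flag
    dx, dy = MOVES.get(instruction, (0, 0))
    x, y, _ = position
    return (x + dx, y + dy, False)

pos = (0, 0, False)
for instr in ["right", "right", "up", "jump"]:
    pos = step(pos, instr)
```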
  • when any process is interrupted in real time, the process of restoring the real-time interrupt is re-initiated; the processes include the map drawing process, the generation of the virtual space, and the information interaction processing of the at least two terminals. When the condition of the real-time interrupt satisfies the notification policy, a notification is sent to the terminals participating in the process.
  • the attribute information of the current terminal and the attribute information of other terminals in the virtual space may be acquired.
  • a virtual space adapted to different attribute information is generated according to the acquired different attribute information.
  • different definitions can be set for different types of terminals.
  • the receiving party may be a control processor in a server, a server itself, or a hardware entity in a server cluster for processing the control instruction.
  • the virtual space sent to different terminals may therefore vary in the amount of data or in the adaptation of the screen.
  • FIG. 2 is a schematic flowchart of a method according to an embodiment of the present application. It should be understood that although the steps in the flowchart of FIG. 2 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other sequences. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
  • the information interaction device based on the virtual space scenario in the embodiment of the present application, as shown in FIG. 9, the device includes: a terminal 41 and a server 42. All or part of the processing logic may be executed at server 42, as shown by processing logic 10 in FIG.
  • the server includes: an obtaining unit 421, configured to acquire location information of the current terminal; a map data determining unit 422, configured to obtain map data of the range in which the current terminal is located according to the location information of the current terminal; a map drawing unit 423, configured to draw a map in the constructed three-dimensional space according to the map data to obtain a drawing result; a simulation unit 424, configured to simulate a real environment of the geographical location of the current terminal in the three-dimensional space according to the drawing result, and obtain a virtual space for information interaction; and a control unit 425, configured to collect operations triggered by the at least two terminals in the virtual space, and control information interaction processing of the at least two terminals according to the generated operation instructions.
  • the map data determining unit is further configured to: obtain latitude and longitude information of the current terminal according to the global positioning system (GPS), and identify the location information of the current terminal by the latitude and longitude information; and determine the data radiating to a predetermined range centered on the latitude and longitude information as the map data of the range in which the current terminal is located.
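  • The "predetermined range centered on the latitude and longitude" can be illustrated with a simple bounding-window filter. This is only a sketch: the radius value and the degree-based window (rather than a true geodesic distance) are assumptions:

```python
def map_data_in_range(center, elements, radius_deg=0.01):
    """Keep map elements whose coordinates fall within a square window
    of +/- radius_deg around the terminal's (lat, lng) position."""
    lat0, lng0 = center
    return [e for e in elements
            if abs(e["lat"] - lat0) <= radius_deg
            and abs(e["lng"] - lng0) <= radius_deg]

elements = [
    {"name": "near_building", "lat": 22.541, "lng": 114.061},
    {"name": "far_building", "lat": 22.900, "lng": 114.500},
]
nearby = map_data_in_range((22.540, 114.060), elements)
```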
  • the map drawing unit is further configured to: divide the map to be drawn into different regions according to different provinces in the constructed three-dimensional space; acquire the real-time location of the current terminal in the current region, and pull the first map data according to the real-time location identified by the latitude and longitude information; parse the first map data to obtain basic class data and auxiliary class data, where the basic class data is used to represent first location information including building information, and the auxiliary class data is used to represent second location information including roads, streets, rivers, and bus stops; acquire a first model file according to the first location information, and establish a first model instance according to the first model file; acquire a second model file according to the second location information, and establish a second model instance according to the second model file; and draw the map in the three-dimensional space using the first model instance and the second model instance, according to the latitude and longitude information corresponding to the first location information, the latitude and longitude information corresponding to the second location information, and the basic attributes and auxiliary identification information.
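  • The parsing step above, which splits pulled map data into basic class data (first location information, e.g. buildings) and auxiliary class data (second location information, e.g. roads, streets, rivers, bus stops), could look like this; the field names and type labels are hypothetical:

```python
# Hypothetical split of raw map elements into basic class data
# (buildings) and auxiliary class data (roads, streets, rivers, stops).

AUXILIARY_TYPES = {"road", "street", "river", "bus_stop"}

def parse_map_data(raw):
    """Return (basic, auxiliary) lists from a flat list of map elements."""
    basic = [e for e in raw if e["type"] == "building"]
    auxiliary = [e for e in raw if e["type"] in AUXILIARY_TYPES]
    return basic, auxiliary

raw = [
    {"type": "building", "lat": 22.54, "lng": 114.06},
    {"type": "road", "lat": 22.55, "lng": 114.05},
    {"type": "bus_stop", "lat": 22.56, "lng": 114.04},
]
basic, auxiliary = parse_map_data(raw)
```

Each list would then drive the selection of the corresponding model file (first or second) before instantiation.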
  • the device further includes a monitoring unit, configured to: monitor the real-time location change of the current terminal in the current region; and notify the map drawing unit when it is monitored that the real-time location of the current terminal has moved from the first real-time location to the second real-time location. The map drawing unit is further configured to load in real time, according to the location change parameter generated by moving from the first real-time location to the second real-time location, the second map data generated by the current terminal based on the current real-time location change, and pull the second map data.
  • the device further includes a monitoring unit, configured to: monitor the real-time location change of the current terminal in the current region; and notify the map drawing unit when it is monitored that the real-time location of the current terminal has moved from the first real-time location to the second real-time location. The map drawing unit is further configured to generate third map data according to the second real-time location, and pull the third map data.
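  • The monitoring behaviour above (detecting a move from a first to a second real-time location, then notifying the drawing side to pull fresh map data) can be sketched as follows; the change threshold and callback mechanism are assumptions for illustration:

```python
class LocationMonitor:
    """Hypothetical monitor: invokes a callback when the terminal's
    real-time location changes by more than a small threshold."""

    def __init__(self, on_change, threshold=0.001):
        self.last = None
        self.on_change = on_change
        self.threshold = threshold

    def update(self, lat, lng):
        if self.last is not None:
            d_lat = abs(lat - self.last[0])
            d_lng = abs(lng - self.last[1])
            if max(d_lat, d_lng) > self.threshold:
                # notify the map drawing side of the location change
                self.on_change(self.last, (lat, lng))
        self.last = (lat, lng)

pulled = []
monitor = LocationMonitor(lambda old, new: pulled.append(new))
monitor.update(22.540, 114.060)   # first real-time location
monitor.update(22.540, 114.060)   # no change: nothing pulled
monitor.update(22.545, 114.060)   # second real-time location: pull new data
```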
  • the device further includes a position random switching unit, configured to: divide the map to be drawn into different regions according to different provinces in the constructed three-dimensional space; switch the current terminal from its real-time location in the current region to a designated region of another province; and, according to the upper limit on terminal users in the same province, randomly allocate a target position for the current terminal in the designated region.
  • the control unit is further configured to: collect an operation triggered by the at least two terminals in the virtual space, and trigger information interaction processing when the operation conforms to the collision detection and/or group member area collision policy. A first operation instruction is generated according to the collision detection policy; the first operation is used to control at least two terminals to perform a one-to-one interaction mode, and in response to the first operation instruction, the terminals enter a one-to-one user session state. A second operation instruction is generated according to the group member area collision policy; the second operation is used to control at least two terminals to perform a one-to-many interaction mode, and in response to the second operation instruction, the terminals enter a one-to-many group chat state.
  • the control unit is further configured to: collect an operation triggered by the at least two terminals in the virtual space, and trigger information interaction processing when the operation conforms to the collision detection and/or group member area collision policy; and generate a first operation instruction according to the collision detection policy, where the first operation is used to control at least two terminals to perform a one-to-one interaction mode, and in response to the first operation instruction, the terminals enter a one-to-one user session state.
  • the control unit is further configured to: collect an operation triggered by the at least two terminals in the virtual space, and trigger information interaction processing when the operation conforms to the collision detection and/or group member area collision policy; and generate a second operation instruction according to the group member area collision policy, where the second operation is used to control at least two terminals to perform a one-to-many interaction mode, and in response to the second operation instruction, the terminals enter a one-to-many group chat state.
  • the device further includes: a first sending unit, configured to send the virtual space to the at least two terminals for display, where the virtual space is set with a default viewing angle.
  • the first receiving unit is configured to receive a view control command sent by any terminal.
  • a second sending unit configured to generate a corresponding virtual space according to the view control instruction, and send the corresponding virtual space to the terminal.
  • the device further includes: a second receiving unit, configured to receive a display control instruction sent by any terminal; and a third sending unit, configured to generate a corresponding virtual space according to the display control instruction, and send the corresponding virtual space to the other terminals, wherein the display control instruction controls the real-time display of the terminal in the virtual space, and the real-time display includes a combination of one or more of an avatar display, an action display, a text identification, and a stealth display that can be controlled in real time.
  • the device further includes: an information control unit, configured to collect an interaction operation between any terminal and a virtual object in the virtual space, and control the information interaction between the terminal and the virtual object according to the generated operation instruction.
  • the device further includes: a process monitoring unit, configured to re-initiate the process of restoring the real-time interrupt when any process is interrupted in real time; the processes include the map drawing process, the generation of the virtual space, and the information interaction processing of the at least two terminals.
  • the device further includes: a notification unit, configured to: when the condition of the real-time interruption meets the notification policy, send a notification to the terminal of the participating process.
  • the device further includes: a first information acquiring unit, configured to acquire attribute information of the current terminal.
  • the second information acquiring unit is configured to acquire attribute information of other terminals in the virtual space.
  • a space generating unit configured to generate a virtual space adapted to different attribute information according to the acquired different attribute information.
  • a computer apparatus comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following steps: obtaining location information of the current terminal; obtaining the map data of the range in which the current terminal is located according to the location information of the current terminal; drawing the map in the constructed three-dimensional space according to the map data to obtain a drawing result; simulating, according to the drawing result, the real environment of the geographic location of the current terminal in the three-dimensional space to obtain a virtual space for information interaction; and collecting operations triggered by at least two terminals in the virtual space, and controlling information interaction processing of the at least two terminals according to the generated operation instructions.
  • when the computer readable instructions are executed by the processor and the processor performs the step of obtaining the map data of the range in which the current terminal is located according to the location information of the current terminal, the following steps are performed: acquiring the latitude and longitude information of the current terminal according to the positioning system, and identifying the location information of the current terminal by the latitude and longitude information; and determining the data radiating to a predetermined range centered on the latitude and longitude information as the map data of the range in which the current terminal is located.
  • when the computer readable instructions are executed by the processor and the processor performs the step of drawing the map in the constructed three-dimensional space according to the map data to obtain a drawing result, the following steps are performed: dividing the map to be drawn into different regions in the constructed three-dimensional space; obtaining the real-time location of the current terminal in the current region, and pulling the first map data according to the real-time location identified by the latitude and longitude information; parsing the first map data to obtain basic class data and auxiliary class data, where the basic class data includes first location information and the auxiliary class data includes second location information; obtaining a first model file according to the first location information, and establishing a first model instance according to the first model file; obtaining a second model file according to the second location information, and establishing a second model instance according to the second model file; and drawing the map in the three-dimensional space according to the latitude and longitude information corresponding to the first location information, the latitude and longitude information corresponding to the second location information, and the basic attributes and auxiliary identification information.
  • the computer readable instructions cause the processor to further perform the steps of: monitoring the real-time location change of the current terminal in the current region; and, upon monitoring that the real-time location of the current terminal has moved from the first real-time location to the second real-time location, loading in real time, according to the location change parameter generated by moving from the first real-time location to the second real-time location, the second map data generated by the current terminal based on the current real-time location change.
  • the computer readable instructions cause the processor to further perform the steps of: monitoring the real-time location change of the current terminal in the current region; and, upon monitoring that the real-time location of the current terminal has moved from the first real-time location to the second real-time location, generating third map data according to the second real-time location, and pulling the third map data.
  • the computer readable instructions cause the processor to further perform the steps of: dividing different regions in the constructed three-dimensional space; switching the current terminal from its real-time location in the current region to another designated region; and, according to the upper limit on terminal users in the same region, randomly allocating a target location for the current terminal in the designated region.
  • when the computer readable instructions are executed by the processor and the processor performs the step of collecting operations triggered by at least two terminals in the virtual space and controlling information interaction processing of the at least two terminals according to the generated operation instructions, the following steps are performed: collecting an operation triggered by at least two terminals in the virtual space, and triggering information interaction processing when the operation conforms to the collision detection and/or group member area collision policy; and generating a first operation instruction according to the collision detection policy, where the first operation is used to control at least two terminals to perform a one-to-one interaction mode, and in response to the first operation instruction, the terminals enter a one-to-one user session state.
  • when the computer readable instructions are executed by the processor and the processor performs the step of collecting operations triggered by at least two terminals in the virtual space and controlling information interaction processing of the at least two terminals according to the generated operation instructions, the following steps are performed: collecting operations triggered by at least two terminals in the virtual space, and triggering information interaction processing when the operation conforms to the collision detection and/or group member area collision policy; and generating a second operation instruction according to the group member area collision policy, where the second operation is used to control at least two terminals to perform a one-to-many interaction mode, and in response to the second operation instruction, the terminals enter a one-to-many group chat state.
  • the computer readable instructions cause the processor to further perform the steps of: transmitting the virtual space to at least two terminals for display, the virtual space being set with a default viewing angle; receiving a view control command sent by any of the terminals; and generating a corresponding virtual space according to the view control instruction and sending it to the terminal.
  • the computer readable instructions cause the processor to further perform the steps of: receiving a display control instruction sent by any terminal; and generating a corresponding virtual space according to the display control instruction and sending it to the other terminals, wherein the display control instruction controls the real-time display of the terminal in the virtual space, and the real-time display includes a combination of one or more of an avatar display, an action display, a text identification, and a stealth display that can be controlled in real time.
  • the computer readable instructions cause the processor to perform the following steps: collecting an interaction operation between any terminal and a virtual object in the virtual space, and controlling information interaction processing between the terminal and the virtual object according to the generated operation instruction.
  • the computer readable instructions cause the processor to perform the steps of: re-initiating the process of restoring the real-time interrupt when any process is interrupted in real time; the processes include the map drawing process, the generation of the virtual space, and the information interaction processing of the at least two terminals.
  • the computer readable instructions cause the processor to further perform the step of transmitting a notification to the terminals participating in the process when the condition of the real-time interrupt satisfies the notification policy.
  • the computer readable instructions cause the processor to perform the following steps: acquiring attribute information of the current terminal; acquiring attribute information of other terminals in the virtual space; and generating a virtual space adapted to different attribute information according to the acquired different attribute information.
  • a non-transitory computer readable storage medium storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: obtaining location information of the current terminal; obtaining the map data of the range in which the current terminal is located according to the location information of the current terminal; drawing the map in the constructed three-dimensional space according to the map data to obtain a drawing result; simulating, according to the drawing result, the real environment of the geographic location of the current terminal in the three-dimensional space to obtain a virtual space for information interaction; and collecting operations triggered by at least two terminals in the virtual space, and controlling information interaction processing of the at least two terminals according to the generated operation instructions.
  • when the computer readable instructions are executed by the processor and the processor performs the step of obtaining the map data of the range in which the current terminal is located according to the location information of the current terminal, the following steps are performed: acquiring the latitude and longitude information of the current terminal according to the positioning system, and identifying the location information of the current terminal by the latitude and longitude information; and determining the data radiating to a predetermined range centered on the latitude and longitude information as the map data of the range in which the current terminal is located.
  • when the computer readable instructions are executed by the processor and the processor performs the step of drawing the map in the constructed three-dimensional space according to the map data to obtain a drawing result, the following steps are performed: dividing the map to be drawn into different regions in the constructed three-dimensional space; obtaining the real-time location of the current terminal in the current region, and pulling the first map data according to the real-time location identified by the latitude and longitude information; parsing the first map data to obtain basic class data and auxiliary class data, where the basic class data includes first location information and the auxiliary class data includes second location information; obtaining a first model file according to the first location information, and establishing a first model instance according to the first model file; obtaining a second model file according to the second location information, and establishing a second model instance according to the second model file; and drawing the map in the three-dimensional space according to the latitude and longitude information corresponding to the first location information, the latitude and longitude information corresponding to the second location information, and the basic attributes and auxiliary identification information.
  • the computer readable instructions cause the processor to further perform the steps of: monitoring the real-time location change of the current terminal in the current region; and, upon monitoring that the real-time location of the current terminal has moved from the first real-time location to the second real-time location, loading in real time, according to the location change parameter generated by moving from the first real-time location to the second real-time location, the second map data generated by the current terminal based on the current real-time location change.
  • the computer readable instructions cause the processor to further perform the steps of: monitoring the real-time location change of the current terminal in the current region; and, upon monitoring that the real-time location of the current terminal has moved from the first real-time location to the second real-time location, generating third map data according to the second real-time location, and pulling the third map data.
  • the computer readable instructions cause the processor to further perform the steps of: dividing different regions in the constructed three-dimensional space; switching the current terminal from its real-time location in the current region to another designated region; and, according to the upper limit on terminal users in the same region, randomly allocating a target location for the current terminal in the designated region.
  • the processor when the computer readable instructions are executed by the processor, causing the processor to perform an operation of acquiring at least two terminals triggered in a virtual space, and controlling the step of information interaction processing of the at least two terminals according to the generated operation instructions Performing the following steps: collecting an operation triggered by at least two terminals in a virtual space, triggering information interaction processing when the operation conforms to the collision detection and/or the group member area collision policy; and generating a first operation instruction according to the collision detection policy, first The operation is used to control at least two terminals to perform a one-to-one interaction mode, and respond to the first operation instruction to enter a one-to-one user session state between the terminals.
  • when the computer readable instructions are executed by the processor, they cause the processor, in performing the step of collecting operations triggered by at least two terminals in the virtual space and controlling the information interaction processing of the at least two terminals according to the generated operation instructions, to perform the following steps: collecting operations triggered by at least two terminals in the virtual space, and triggering information interaction processing when the operations conform to the collision detection and/or group member area collision policy; and generating a second operation instruction according to the group member area collision policy, the second operation instruction being used to control the at least two terminals to enter a one-to-many interaction mode, and, in response to the second operation instruction, entering a one-to-many group chat state between the terminals.
  • the computer readable instructions cause the processor to further perform the steps of: transmitting the virtual space to at least two terminals for display, the virtual space being set with a default view angle; receiving a view angle control instruction sent by any of the terminals; and generating a corresponding virtual space according to the view angle control instruction and sending it to that terminal.
  • the computer readable instructions cause the processor to further perform the steps of: receiving a display control instruction sent by any terminal; and generating a corresponding virtual space according to the display control instruction and transmitting it to the other terminals, wherein the display control instruction controls the real-time display of the terminal in the virtual space, and the real-time display includes one or more combinations of avatar display, action display, text identification, and stealth display that can be controlled in real time.
  • the computer readable instructions cause the processor to perform the following steps: collecting an interaction operation of any terminal with the virtual object in the virtual space, and controlling information interaction processing between the terminal and the virtual object according to the generated operation instruction.
  • the computer readable instructions cause the processor to perform the following steps: when a real-time interrupt occurs in any process, re-initiating the process to restore it from the real-time interrupt; the processes include the map drawing process, the generation of the virtual space, and the information interaction processing of the at least two terminals.
  • the computer readable instructions cause the processor to further perform the step of transmitting a notification to the terminals participating in the process when the condition of the real-time interrupt satisfies the notification policy.
  • the computer readable instructions cause the processor to perform the following steps: acquiring attribute information of the current terminal; acquiring attribute information of other terminals in the virtual space; and generating, according to the acquired different attribute information, virtual spaces adapted to the different attribute information.
  • the server, as a hardware entity, includes the processor 51, the computer storage medium 52, and at least one external communication interface 53; the processor 51, the computer storage medium 52, and the external communication interface 53 are all connected by a bus 54.
  • the memory comprises a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device stores an operating system and can also store computer readable instructions that, when executed by the processor, cause the processor to implement a method of information interaction based on the virtual space scene.
  • the internal memory also stores computer readable instructions that, when executed by the processor, cause the processor to perform a method of information interaction based on the virtual space scene.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may separately serve as one unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • all or part of the steps of the foregoing method embodiments may be implemented by program-instruction-related hardware; the foregoing program may be stored in a computer readable storage medium and, when executed, performs the steps of the foregoing method embodiments; the foregoing storage medium includes media that can store program code, such as a mobile storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the above-described integrated unit of the present application may be stored in a computer readable storage medium if it is implemented in the form of a software functional unit and sold or used as a stand-alone product.
  • the technical solutions of the embodiments of the present application may, in essence or in the part contributing to the prior art, be embodied in the form of a software product stored in a storage medium and including a plurality of instructions.
  • a computer device (which may be a personal computer, server, or network device, etc.) is caused to perform all or part of the methods described in various embodiments of the present application.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disk.

Abstract

一种基于虚拟空间场景的信息交互方法,包括:获取当前终端的位置信息;根据所述当前终端的位置信息,得到当前终端所在范围的地图数据;根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果;根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。

Description

一种基于虚拟空间场景的信息交互方法、计算机设备及计算机可读存储介质
相关申请的交叉引用
本申请要求于2017年08月23日提交中国专利局,申请号为201710730847.9、发明名称为“一种基于虚拟空间场景的信息交互方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及信息交互技术,尤其涉及一种基于虚拟空间场景的信息交互方法、计算机设备及计算机可读存储介质。
背景技术
随着科学技术的不断发展,电子技术也得到了飞速的发展,电子产品的种类也越来越多,人们也享受到了科技发展带来的各种便利。现在人们可以通过各种类型的电子设备或终端,以及安装在终端上的各种功能的应用享受随着科技发展带来的舒适生活。
而虚拟实境(VR,Virtual Reality)和增强现实(AR,Augmented Reality)技术的发展,除了用户所在的真实空间,还可以为用户构建一个虚拟空间,并且在该虚拟空间中为用户提供各种服务。就VR技术而言,它是利用电脑模拟产生一个三度空间的虚拟世界,提供使用者关于视觉、听觉、触觉等感官的模拟,让使用者身历其境。在VR中看到的场景和人物全是假的,是把人的意识代入一个虚拟的世界。就AR而言,它是一种实时地计算摄影机影像的位置及角度并加上相应图像的技术,可以在屏幕上把虚拟世界套在现实世界并进行互动。
如何为用户构建一个虚拟空间,该虚拟空间是真实环境的模拟,以便通过该虚拟空间来实现诸如即时通讯、实时会话、游戏等信息交互时,达到让用户彼此身在真实环境进行信息交互的处理结果,是要解决的技术问题。然而,相关技术中尚不存在针对此问题的有效解决方案。
发明内容
根据本申请的各种实施例提供了一种基于虚拟空间场景的信息交互方法、计算机设备及计算机存储介质。
一种基于虚拟空间场景的信息交互方法,执行于计算机设备,所述计算机设备包括存储器和处理器,所述方法包括:
获取当前终端的位置信息;
根据所述当前终端的位置信息,得到当前终端所在范围的地图数据;
根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果;
根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;及
采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
一种计算机设备,包括存储器和处理器,所述存储器中储存有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行以下步骤:
获取当前终端的位置信息;
根据所述当前终端的位置信息,得到当前终端所在范围的地图数据;
根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果;
根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;及
采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
一种非易失性的计算机可读存储介质,存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
获取当前终端的位置信息;
根据所述当前终端的位置信息,得到当前终端所在范围的地图数据;
根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果;
根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;及
采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。
图1为本申请实施例中各硬件实体间交互的示意图;
图2为本申请实施例中基于虚拟空间场景的信息交互方法流程图;
图3为本申请实施例中真实世界的示例图;
图4为本申请实施例中模拟真实世界得到的虚拟空间示例图;
图5为本申请实施例中由真实世界透视、映射或投影得到虚拟空间的示例图;
图6-7为本申请实施例中在虚拟空间进行群聊的示意图;
图8为应用本申请实施例一场景中信息交互的示意图;
图9为本申请实施例一装置组成示意图;
图10为本申请实施例服务器的硬件实体示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
现在将参考附图描述实现本申请各个实施例的移动终端。在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本申请实施例的说明,其本身并没有特定的意义。因此,"模块"与"部件"可以混合地使用。
在下面的详细说明中,陈述了众多的具体细节,以便彻底理解本申请。不过,对于本领域的普通技术人员来说,显然可在没有这些具体细节的情况下实践本申请。在其他情况下,没有详细说明公开的公知方法、过程、组件、电路和网络,以避免不必要地使实施例的各个方面模糊不清。
另外,本文中尽管多次采用术语“第一”、“第二”等来描述各种元件(或各种阈值或各种应用或各种指令或各种操作)等,不过这些元件(或阈值或应用或指令或操作)不应受这些术语的限制。这些术语只是用于区分一个元件(或阈值或应用或指令或操作)和另一个元件(或阈值或应用或指令或操作)。例如,第一操作可以被称为第二操作,第二操作也可以被称为第一操作,而不脱离本申请的范围,第一操作和第二操作都是操作,只是二者并不是相同的操作而已。
本申请实施例中的步骤并不一定是按照所描述的步骤顺序进行处理,可以按照需求有选择的将步骤打乱重排,或者删除实施例中的步骤,或者增加实施例中的步骤,本申请实施例中的步骤描述只是可选的顺序组合,并不代表本申请实施例的所有步骤顺序组合,实施例中的步骤顺序不能认为是对本申请的限制。
本申请实施例中的术语“和/或”指的是包括相关联的列举项目中的一个或多个的任何和全部的可能组合。还要说明的是:当用在本说明书中时,“包括/包含”指定所陈述的特征、整数、步骤、操作、元件和/或组件的存在,但是不排除一个或多个其他特征、整数、步骤、操作、元件和/或组件和/或它们的组群的存在或添加。
本申请实施例的智能终端(如移动终端)可以以各种形式来实施。例如,本申请实施例中描述的移动终端可以包括诸如移动电话、智能电话、笔记本电脑、数字广播接收器、个人数字助理(PDA,Personal Digital Assistant)、 平板电脑(PAD)、便携式多媒体播放器(PMP,Portable Media Player)、导航装置等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。下面,假设终端是移动终端。然而,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本申请的实施方式的构造也能够应用于固定类型的终端。
图1为本申请实施例中进行信息交互的各方硬件实体的示意图,图1中包括:终端设备1和服务器2。其中,终端设备1由终端设备11-14构成,终端设备包括手机、台式机、PC机、一体机等类型,在构建的虚拟场景中,手机更具备实用性。采用本申请实施例,可以达到将真实地理位置与虚拟世界进行虚拟现实结合的目的。第二世界(虚拟世界)是本申请的一个应用场景,在该场景中,可以通过微信等拉取好友,也可以通过“探索模块”来拉取陌生人。在第二世界中可以根据“探索频道”,具体是根据探索模块的功能,将真实的地理位置数据所绘制得到的虚拟空间,用于上述第二世界中的用户间信息交互。终端设备上报各自的实时位置给服务器,服务器存储真实的地理位置,根据真实的地理位置在构建的三维空间中绘制地图,模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间,使得各终端用户在该虚拟空间中进行诸如会话、游戏等信息交互。该虚拟空间相关的数据可以推送给终端设备,在终端设备上进行显示。具体的,包括:1)获取真实地理位置数据,即:第一步是“实”,可以根据用户当前位置(手机GPS定位得到的当前经纬度信息)来获取用户所在范围的地图数据。地图数据包括基础数据(如建筑信息)和更为细化的辅助数据(道路/街道、河流、公交站点等)。2)根据真实地理位置数据来绘制虚拟空间中的地图。即:第二步是“虚”,从而,根据第1)步和第2)步达到“虚实结合效果”。在第2)步中,在绘制地图时,根据可复用的预置模型对绘制的地图进行绘制和后期修改(如对建筑模型的高度/宽度、道路模型的长度/宽度等、公交站点的长度/宽度等进行绘制和调整)。
在信息交互中,根据用户(以第一终端用户表示)自身位置的移动可以动态加载真实的地图数据,并进行虚拟空间中的地图绘制;以及其他用户(以第二终端用户表示)根据各自的位置进行地图绘制。
在信息交互中,用户(以第一终端用户表示)和其他用户(以第二终端用户表示)可以进行信息交互,如通过碰撞触发二者的交互,或者进入一个群会话中等等。
除了上述针对用户所处省份当前位置的地图绘制,系统还可以随机给用户配置其它省份的位置;出于负载均衡的考虑,为用户随机分配一个房间。
在服务器侧执行的处理逻辑10如图1所示,处理逻辑10包括:S1、根据第一终端和/或第二终端的位置信息(当前实时位置信息和/或基于负载均衡的考虑将终端随机切换到其它省份时给定的一个目标位置信息)得到当前终端(第一终端和/或第二终端)所在范围的地图数据;S2、解析得到的地图数据(地图数据的获取方式有多种具体实现,后续以第一地图数据、第二地图数据、第三地图数据、第四地图数据等分别描述,“第一~第四”并不是代表地图数据获取的时间先后顺序,仅仅是为了区别通过不同获取方式得到的地图数据,可以相同,也可以不相同),根据解析得到的地图数据在构建的三维空间中绘制地图,在三维空间中模拟当前终端所处地理位置的真实环境,得到虚拟空间;S3、采集多个终端在该虚拟空间触发的操作,根据生成的操作指令控制终端间的信息交互处理(一对一交互的模式和/或一对多交互的模式)。
上述图1的例子只是实现本申请实施例的一个系统架构实例,本申请实施例并不限于上述图1所述的系统结构,基于上述图1所述的系统架构,提出本申请方法各个实施例。
本申请实施例的基于虚拟空间场景的信息交互方法,如图2所示,所述方法包括:获取当前终端的位置信息(101),根据所述当前终端的位置信息,得到当前终端所在范围的地图数据(102),根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果(103),根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间(104),及采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理(105)。本申请实施例中,为用户构建一个虚拟空间,该虚拟空间是真实环境的模拟,以便通过该虚拟空间来实现诸如即时通讯、实时会话、游戏等信息交互时,达到让用户彼此身在真实环境进行信息交互的处理结果。
具体的,本实施例的一个实际应用中,通过将真实的地理位置数据与虚拟的三维空间结合,在该三维空间中加入根据该真实的地理位置数据所绘制的地图,可以在虚拟的三维空间中模拟出当前终端所处地理位置的真实环境,除了对当前终端(如采用第一终端来表示)的地理位置进行模拟,还可以对多个其他终端(如采用第二终端来表示)的地理位置进行模拟,得到作为至少两个终端进行信息交互的虚拟空间,该虚拟空间是真实环境的模拟,也就是说,本申请实施例是将虚拟空间与现实世界的地理信息进行数据同步,以便让多个终端用户在交互时有彼此身在真实环境进行信息交互的体验。该虚拟空间可以是一个由多个终端用户构成的虚拟社区,该虚拟社区是基于全景实景图像和地理信息的虚拟社区,使用户在该虚拟社区中与环境交互、以及当前用户与其他用户交互可以实现诸如即时通讯、实时会话、游戏等信息交互。在该虚拟社区中可以包括虚拟地面,在虚拟地面上可以添加各种三维实体,如虚拟的楼房、虚拟的道路/街道、虚拟的桥梁、虚拟的公交车站等等,任何在真实世界中存在的一切事物,都可以在该虚拟社区中呈现。这个虚拟社区,由于有别于真实存在的世界(第一世界),可以称之为第二世界。如图3所示为在第一世界中真实存在的建筑、河流、道路、树木等的示意图。如图4所示为在第二世界中对第一世界真实存在的建筑、河流、道路、树木等进行模拟得到的虚拟空间的示意图,在该虚拟空间中以横向和纵向的虚线表示经纬度(仅仅为示意),包括终端用户A1-A5,终端用户A1-A5在该虚拟空间中的位置,及与附近建筑3D对象、道路/街区、河流、树木等的相对距离,都与各个终端用户在第一世界中的位置及相对距离是一样的。如图5所示为第一世界与第二世界的关系示意图,第二世界是根据第一世界的真实地图数据模拟得到的,可以通过第一世界映射、透视或投影关系等,根据真实地图数据、预置模型文件、模型实例等得到第二世界。在该第二世界中,多个终端用户间可以一对一会话,或者如图6-7所示的信息交互状态中多人之间群聊,群聊中,可以语音聊天,文字聊天,还可以发表情符号或肢体图片等,显示于各自终端的显示区域1中。
在一个实施例中,第二世界是利用VR技术所构建的一个虚拟空间(或称虚拟社区),可以用于多个终端用户间的社交及分享等。系统会根据用户的照片生成其虚拟3D形象,同时又具有丰富的社交玩法,可以和单人或一群人实时聊天。
在一个实施例中,进行实时聊天的终端用户,可以是通过微信/qq等社交应用所拉取得到的好友所构成的第一信息组,根据第一信息组构建用于实时聊天的群组。
在一个实施例中,进行实时聊天的终端用户,可以是拉取得到的陌生人所构成的第二信息组,根据第二信息组构建用于实时聊天的群组。拉取陌生人,可以使得用户更有效的认识新朋友。比如,基于上述当前终端(如第一终端)的位置信息得到当前终端所在范围的地图数据,并据此来分析出距离当前终端比较近的其他终端(如第二终端),从而将在该范围内符合地理位置要求的其他终端(如第二终端)加入到第二信息组中。进一步的,还需要判断当前终端(如第一终端)和其他终端(如第二终端)是否具备同一特征。对于同一特征来说,可以是二者都根据终端用户的照片在上述第二世界中生成了自己的虚拟3D形象。
在一个实施例中,在第二世界中,是通过探索频道来拉取陌生人的,在探索频道中可以根据全国不同省份来划分,并使用实时的地理位置随机创建探索的房间,获取当前终端位置的地图信息(包括建筑信息、道路、河流、树木、公交站台等),根据地图信息类型预置对应的模型文件,包括像建筑、道路、河流、树林等模型,将模型按照地图数据上标识的大小和位置进行绘制。之后,还可以根据当前终端对应人物的移动,来实时加载根据当前位置移动变化所产生的新的地图数据,让探索地图可以无限扩展。用户在无限扩展的虚拟空间场景中也会根据位置信息来结识更多真实的好友,并在虚拟场景中进入会话状态。不仅支持一对一的会话,也支持多对多(多人)同时进入某个群组(如第一信息组、第二信息组、由第一信息组和第二信息组的交集、并集、合集所构成的第三信息组)以进入群聊状态。
本申请实施例中,根据GPS得到当前终端的经纬度信息,通过所述经纬度信息来标识当前终端的所述位置信息,将以经纬度信息为中心向外辐射预定范围的数据确定为所述当前终端所在范围的地图数据,从而,根据当前终端的位置信息得到了当前终端所在范围的地图数据。
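上述“以经纬度信息为中心向外辐射预定范围”的地图数据范围确定过程,可用如下示意性代码说明(仅为说明用的最小示例,半径换算系数与函数名均为本文之外的假设,并非本申请的限定实现):

```python
import math

def bounding_box(lat, lng, radius_m):
    """以 (lat, lng) 为中心、radius_m 米为半径, 计算经纬度包围盒(近似)。"""
    dlat = radius_m / 111320.0                                   # 纬度方向: 每度约 111.32 km
    dlng = radius_m / (111320.0 * math.cos(math.radians(lat)))   # 经度方向随纬度收缩
    return (lat - dlat, lng - dlng, lat + dlat, lng + dlng)

def in_range(box, lat, lng):
    """判断某点是否落在包围盒内, 可据此筛选当前终端所在范围的地图数据。"""
    min_lat, min_lng, max_lat, max_lng = box
    return min_lat <= lat <= max_lat and min_lng <= lng <= max_lng
```

真实实现中,可将该包围盒作为地图API的查询参数,拉取当前终端所在范围的地图数据。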
本申请实施例中,地图数据的获取方式有多种,根据地图数据在构建的三维空间中绘制地图以得到绘制结果的方案,包括如下方式中的至少一种或其组合:
一、在构建的三维空间中根据全国不同省份为待绘制的地图划分不同区域(如省、市、直辖市、自治州、区、县等),由于本申请实施例是对真实世界的模拟,因此,在该构建三维空间中为待绘制地图所划分的区域,与真实的地图信息的区域划分是一致的。获取当前终端在当前区域(如北京)的实时位置,根据由经纬度信息标识的实时位置拉取第一地图数据(指当前地图数据,后续有地图数据更新的实施例,区别于后续位置移动产生的新地图数据,新地图数据以第二地图数据表示)。对第一地图数据进行解析,得到基础类数据(如建筑信息)和辅助类数据(如道路/街道、河流、公交站点等),所述基础类数据用于表征包括建筑信息在内的第一位置信息,所述辅助类信息用于表征包括道路、街道、河流、公交站点的第二位置信息。根据第一位置信息获取第一模型文件(如Prefab模型实例对应的模型文件。比如,建筑信息的长宽高可以生成一个3D楼房,这个长宽高是后期可以调整的。)。根据第一模型文件建立第一模型实例(如Prefab模型实例,Prefab模型实例对应不同的地图数据类型,如基础类数据和辅助类数据的Prefab模型实例是不同的)。根据第二位置信息获取第二模型文件,根据所述第二模型文件建立第二模型实例。将所述第一模型实例和所述第二模型实例分别根据所述第一位置信息对应的经纬度信息和所述第二位置信息对应的经纬度信息、及基本属性(如长、宽、高)和辅助标识信息(如建筑物附近的标识)在所述三维空间中进行地图绘制。
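方案一中“解析地图数据得到基础类与辅助类数据→按类型获取模型文件→建立模型实例→按经纬度与基本属性绘制”的流程,可以用如下示意代码勾勒(数据结构与字段名均为说明用的假设):

```python
def parse_map_data(map_data):
    """把拉取到的地图数据拆分为基础类(建筑)与辅助类(道路/河流/公交站点)两组。"""
    base = [e for e in map_data if e["type"] == "building"]
    aux = [e for e in map_data if e["type"] in ("road", "river", "bus_stop")]
    return base, aux

def draw(entries, model_files):
    """按元素类型选取预置模型文件, 建立模型实例并记录经纬度与长宽高等基本属性。"""
    instances = []
    for e in entries:
        instances.append({
            "model": model_files[e["type"]],    # 对应类型的预置模型文件
            "latlng": (e["lat"], e["lng"]),     # 按地图数据上的经纬度绘制
            "size": e.get("size", (1, 1, 1)),   # 长、宽、高, 后期可调整
        })
    return instances
```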
二、在构建的三维空间中根据全国不同省份为待绘制的地图划分不同区域(如省、市、直辖市、自治州、区、县等),对所述当前终端在当前区域的实时位置变化进行监测。一种场景中,根据第一到第二实时位置的变化量的数据来更新和扩展地图,好处是:局部数据的更新,比之上述第一种方案效率更高,计算精确度可能会有误差。具体的,监测到所述当前终端的实时位置由第一实时位置移动至第二实时位置时,根据由第一实时位置移动至第二实时位置产生的位置变化参数,实时加载当前终端基于当前实时位置变化产生的第二地图数据,拉取第二地图数据。对第二地图数据进行解析,得到基础类数据和辅助类数据,所述基础类数据用于表征包括建筑信息在内的第三位置信息,所述辅助类信息用于表征包括道路、街道、河流、公交站点的第四位置信息。根据所述第三位置信息获取第三模型文件,根据所述第三模型文件建立第三模型实例。根据所述第四位置信息获取第四模型文件,根据所述第四模型文件建立第四模型实例。将第三模型实例和所述第四模型实例分别根据所述第三位置信息对应的经纬度信息和所述第四位置信息对应的经纬度信息、及基本属性和辅助标识信息在所述三维空间中进行地图绘制。
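方案二中按位置变化量做局部更新的思路,可抽象为按网格瓦片求差集、只加载新增瓦片(瓦片的划分方式与参数均为说明用的假设):

```python
def tiles_around(lat, lng, step=0.01, k=1):
    """以 step 度为边长的网格瓦片为单位, 返回位置周围 (2k+1)x(2k+1) 个瓦片的键。"""
    cx, cy = int(lat // step), int(lng // step)
    return {(cx + dx, cy + dy) for dx in range(-k, k + 1) for dy in range(-k, k + 1)}

def delta_tiles(old_pos, new_pos):
    """位置从 old_pos 移到 new_pos 时, 只需加载的新增瓦片(局部更新, 避免全量刷新)。"""
    return tiles_around(*new_pos) - tiles_around(*old_pos)
```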
三、在构建的三维空间中根据全国不同省份为待绘制的地图划分不同区域(如省、市、直辖市、自治州、区、县等),对所述当前终端在当前区域的实时位置变化进行监测。另一种场景中,以第二实时位置重新构建地图并根据重新构建的地图数据进行刷新,全部刷新。好处是,全部数据的刷新,计算更准确。具体的,监测到所述当前终端的实时位置由第一实时位置移动至第二实时位置时,根据第二实时位置产生第三地图数据,拉取第三地图数据。第三地图数据,是根据GPS得到当前终端在第二实时位置对应的经纬度信息,可以将以所述经纬度信息为中心向外辐射预定范围的数据确定为当前终端所在范围的地图数据,该地图数据为第三地图数据。
对第三地图数据进行解析,得到基础类数据和辅助类数据,所述基础类数据用于表征包括建筑信息在内的第五位置信息,所述辅助类信息用于表征包括道路、街道、河流、公交站点的第六位置信息。根据所述第五位置信息获取第五模型文件,根据所述第五模型文件建立第五模型实例。根据所述第六位置信息获取第六模型文件,根据所述第六模型文件建立第六模型实例。将第五模型实例和所述第六模型实例分别根据所述第五位置信息对应的经纬度信息和所述第六位置信息对应的经纬度信息、及基本属性和辅助标识信息在所述三维空间中进行地图绘制。
四、在构建的三维空间中根据全国不同省份为待绘制的地图划分不同区域(如省、市、直辖市、自治州、区、县等),将当前终端在当前区域的实时位置切换到其它省份的指定区域。根据同一省份的终端用户上限要求,为当前终端在所述指定区域中随机分配目标位置。比如,在同一省份的用户按50人为上限,随机分配房间,房间是符合一个经纬度区间的区域。获取当前终端的目标位置,根据由经纬度信息标识的目标位置拉取第四地图数据。比如,用户实时位置在北京,可以将用户切换到其它省份,比如切换到上海,此时的经纬度坐标不是用于标识用户的实时位置,而是本申请实施例的装置为用户随机分配的一个目标位置,并以目标位置对应经纬度信息为中心向外辐射预定范围的数据确定为当前终端所在范围的地图数据,该地图数据为第四地图数据。
对第四地图数据进行解析,得到基础类数据和辅助类数据,所述基础类数据用于表征包括建筑信息在内的第七位置信息,所述辅助类信息用于表征包括道路、街道、河流、公交站点的第八位置信息。根据所述第七位置信息获取第七模型文件,根据所述第七模型文件建立第七模型实例。根据所述第八位置信息获取第八模型文件,根据所述第八模型文件建立第八模型实例。将所述第七模型实例和所述第八模型实例分别根据所述第七位置信息对应的经纬度信息和所述第八位置信息对应的经纬度信息、及基本属性和辅助标识信息在所述三维空间中进行地图绘制。
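方案四中“同一省份按人数上限随机分配房间、并在房间对应的经纬度区间内随机分配目标位置”的负载均衡做法,可示意如下(50人上限取自正文,房间的数据结构与随机方式均为说明用的假设):

```python
import random

def assign_room(rooms, province, cap=50):
    """在指定省份内找一个未满的房间加入; 都满则新建房间。返回房间编号。"""
    plist = rooms.setdefault(province, [])   # 每个元素是该房间的当前人数
    for i, members in enumerate(plist):
        if members < cap:
            plist[i] += 1
            return i
    plist.append(1)                          # 新开一个房间, 当前用户为第 1 人
    return len(plist) - 1

def random_target(bounds, rng=random):
    """在房间对应的经纬度区间内随机分配一个目标位置。"""
    min_lat, min_lng, max_lat, max_lng = bounds
    return (rng.uniform(min_lat, max_lat), rng.uniform(min_lng, max_lng))
```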
上述多个实施例中,涉及对应不同位置信息的多个模型文件,以及多个模型文件分别对应的多个模型实例,这里需要指出的是,为不同地图类型数据(基础类数据和辅助类数据)分别预置对应的模型,包括像建筑、道路、河流、公园、树林等模型等,是根据预置的模型文件建立模型实例(如Prefab,Prefab是Unity的一个资源引用文件,相同的对象可以通过一个预设体来创建)。在绘制地图的过程中,根据不同地图类型数据(基础类数据和辅助类数据)选择对应的模型实例,多个模型实例其长、宽、高等基本属性可能不同,但是,同一个模型实例,比如表示一个楼房的建筑3D对象是可以多次复用的,又如,表示一个商厦的建筑3D对象也是可以多次复用的,表示一个道路/街区的建筑3D对象是可以多次复用的,等等。也就是说,相同的对象可以通过一个预设体来创建,并支持重复实例化以节省开销。对应模型文件的模型进行实例化时,可以将模型实例按照地图数据上经纬度坐标、基本属性、辅助标识信息进行绘制地图。
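上文“相同的对象通过一个预设体来创建、重复实例化以节省开销”的复用思想,可用一个简化的模型缓存表示(Prefab 加载接口以占位函数代替,并非 Unity 的真实 API):

```python
class PrefabRegistry:
    """按地图数据类型缓存预置模型文件, 相同类型重复实例化时复用同一份资源。"""

    def __init__(self, loader):
        self._loader = loader   # 真实实现中对应 Prefab 资源的加载
        self._cache = {}

    def instantiate(self, kind, latlng, size):
        if kind not in self._cache:             # 首次使用才真正加载
            self._cache[kind] = self._loader(kind)
        # 同一资源复用, 实例各自携带经纬度与长宽高等基本属性
        return {"prefab": self._cache[kind], "latlng": latlng, "size": size}
```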
本申请实施例中,可以通过上述第二世界中的探索频道来拉取陌生人,在探索频道中可以根据全国不同省份来划分,并使用实时的地理位置随机创建探索的房间,获取当前终端位置的地图信息(包括建筑信息、道路、河流、树木、公交站台等),根据地图信息类型预置对应的模型文件,包括像建筑、道路、河流、树林等模型,将模型按照地图数据上标识的大小和位置进行绘制。之后,还可以根据当前终端对应人物的移动,来实时加载根据当前位置移动变化所产生的新的地图数据,让探索地图可以无限扩展。用户在无限扩展的虚拟空间场景中也会根据位置信息来结识更多真实的好友,并在虚拟场景中进入会话状态。
本申请实施例中,在绘制地图、模拟得到虚拟空间的一个场景中,可以采用3D图形程序接口创建虚拟空间的三维环境,并在虚拟空间的三维环境中绘制场景显示外幕,具体是,根据地图信息类型预置对应的模型文件,包括像建筑、道路、河流、树林等模型,将模型按照地图数据上标识的大小和位置进行绘制,可以看出是楼房、商铺、道路等;为了更好地在三维空间中模拟及显示地图数据,楼房和楼房之间要有所区别,因此,还可以将采集到的全景实景图像数据从服务器传送到终端中的3D显示平台,并纹理映射到场景显示外幕,获得基于场景显示外幕的虚拟空间。
本申请实施例中,将真实地理位置与虚拟预置模块相结合生成虚拟空间的场景中,如图8所示,可以在第二世界探索模块中使用实时的地理位置,获取当前位置模块的地图信息(包括建筑信息、道路、河流、树木、公交站台等),根据获取的信息类型预置对应的模型(包括像建筑、道路、河流、公园、树林等模型),在这之前会将预置的模型文件建立Prefab,根据地图数据类型选择模型Prefab进行实例化,将模型按照地图数据上经纬度坐标、基本属性、辅助标识信息进行绘制地图。具体的,包括:1)通过位置管理与同步组件,实时获取当前用户的经纬度信息,并定时同步到服务器,获取其它用户的位置坐标,在地图中绘制;2)通过地图API获取组件,根据当前用户所处位置的经纬度信息获取当前区域缩放系数下的地图数据,通过地图API接口返回的地图数据信息进行地图绘制;3)通过预置的Prefab模型实例,根据地图上绘制的元素,提前设计好相应的模型文件,通过Prefab同一资源实例化来复用模型来绘制地图,可以极大地减少性能的消耗;4)人物碰撞检测、群成员区域碰撞,好友Profile和聊天场景管理,为用户之间建立交互提供不同的选择。在虚实结合的过程中,还可以把所有用户根据地理位置划分到不同的省份中,在同一省份的用户按50人为上限,随机分配房间,根据当前用户模型的移动实时加载当前位置变化产生的新的地图数据,让探索地图可以无限扩展,用户在场景中会根据位置结识更多真实的好友,并在虚拟场景中进入会话状态,也支持多人同时进入群聊状态。可以借助VR和AR技术生成虚拟空间,将真实世界与虚拟空间相结合,基于位置来更有效地认识新用户。由于是根据全国不同省份来划分,并使用实时的地理位置随机创建探索的房间,并根据人物的移动实时加载当前位置变化产生的新的地图数据,可以让探索地图无限扩展,用户在场景中会根据位置结识更多真实的好友,降低认识新朋友的成本,而且可以非常方便地建立社交关系,点击用户可查看这个用户的个人信息和形象,也可以通过探索进入会话和参加他人的群聊,寻找自己感兴趣的好友。
就VR和AR而言,除了用户所在的真实空间,还可以为用户构建一个虚拟空间。VR技术是利用电脑模拟产生一个三度空间的虚拟世界,提供使用者关于视觉、听觉、触觉等感官的模拟,让使用者如同身历其境一般,可以及时、没有限制地观察三度空间内的事物。使用者进行位置移动时,电脑可以立即进行复杂的运算,将精确的3D世界影像传回产生临场感。看到的场景和人物全是假的,是把人的意识代入一个虚拟的世界,包括多种技术的综合,如实时三维计算机图形技术,广角(宽视野)立体显示技术,对观察者头、眼和手的跟踪技术,以及触觉/力觉反馈、立体声、网络传输、语音输入输出技术等。AR也被称为扩增现实技术,是将真实世界信息和虚拟世界信息“无缝”集成的新技术,是把原本在现实世界的一定时间空间范围内很难体验到的实体信息(视觉信息、声音、味道、触觉等),通过电脑等科学技术,模拟仿真后再叠加,将虚拟的信息应用到真实世界,被人类感官所感知,从而达到超越现实的感官体验。真实的环境和虚拟的物体实时地叠加到了同一个画面或空间同时存在。展现了真实世界的信息,而且将虚拟的信息同时显示出来,两种信息相互补充、叠加。在视觉化的增强现实中,用户利用头盔显示器,把真实世界与电脑图形多重合成在一起,便可以看到真实的世界围绕着它。
本申请实施例中,可以采集至少两个终端在所述虚拟空间触发的操作,当所述操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理。根据碰撞检测策略生成第一操作指令,所述第一操作用于控制至少两个终端进行一对一交互模式,响应所述第一操作指令,进入终端间一对一的用户会话状态。根据群成员区域碰撞策略生成第二操作指令,所述第二操作用于控制至少两个终端进行一对多交互模式,响应所述第二操作指令,进入终端间一对多的用户群聊状态。在根据位置信息结识更多真实好友的过程中,可借助虚拟场景进入会话状态。不仅支持一对一的会话,也支持多对多(多人)同时进入某个群组(如第一信息组、第二信息组、由第一信息组和第二信息组的交集、并集、合集所构成的第三信息组)以进入群聊状态。具体的,可以是通过人物碰撞检测、群成员区域碰撞,好友Profile和聊天场景管理等触发具体的信息交互,为用户之间建立交互提供不同的选择。
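上述“人物碰撞触发一对一会话、群成员区域碰撞触发一对多群聊”的判定逻辑,可抽象为基于距离的示意实现(碰撞半径等参数均为说明用的假设):

```python
def collide(p1, p2, r=1.0):
    """两个人物的包围球发生碰撞: 平面距离小于碰撞半径之和。"""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    return (dx * dx + dy * dy) ** 0.5 < 2 * r

def interaction_mode(me, other, group_area):
    """返回触发的交互模式: 进入群成员区域触发群聊, 人物间碰撞触发一对一会话。"""
    gx, gy, gr = group_area                  # 群成员区域: 圆心与半径
    if collide(me, (gx, gy), r=gr / 2):      # 群成员区域碰撞 → 一对多群聊
        return "group_chat"
    if collide(me, other):                   # 人物碰撞检测 → 一对一会话
        return "one_to_one"
    return None
```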
本申请实施例中,可以发送虚拟空间到至少两个终端上进行显示。该虚拟空间设置有默认的视角。一实例中,当终端为VR眼镜时,该默认的视角可以是进入VR模式后在正常模式所能浏览3D实体(如虚拟物体)等的视角。虚拟物体可以是静止的,可以是运动的,当虚拟物体处于静止状态时,用默认的视角进行浏览是没问题的,可是当虚拟物体处于运动状态时,用默认的视角来浏览就远远达不到浏览要求了。这种情况下,终端可以根据当前的浏览需求调整视角,即任一终端可以发送视角控制指令给某一服务器或者服务器中的控制处理器或者服务器集群中的某一个用于处理该控制指令的硬件实体,接收方(如某一服务器或者服务器中的控制处理器或者服务器集群中的某一个用于处理该控制指令的硬件实体)接收视角控制指令,根据视角控制指令生成相应的虚拟空间,并发送给对应的终端。
本申请实施例中,一实例中,当终端为VR眼镜时,接收方(如某一服务器或者服务器中的控制处理器或者服务器集群中的某一个用于处理该控制指令的硬件实体)还可以接收任一终端发送的显示控制指令。根据终端用户当前所处的场景、所执行的操作,比如打游戏中的浏览操作和普通浏览是不同的,游戏中虚拟物体可以隐身、可以跳跃等,想要精准地捕捉具体的操作并在本终端与其他终端间进行信息互动等响应处理,终端可以发送显示控制指令给上述接收方,接收方根据显示控制指令生成相应的虚拟空间后发送给其他终端。其中,显示控制指令控制所述终端在虚拟空间上的实时显示,所述实时显示包括可实时控制的头像显示、动作显示、文本标识、隐身显示的一种或多种的结合。
本申请实施例中,某一服务器或者服务器中的控制处理器或者服务器集群中的某一个用于处理该控制指令的硬件实体,还可以采集任一终端在虚拟空间与3D实体(如虚拟物体)间的交互操作,比如,虚拟物体为卡通人物,一个卡通人物跑酷的游戏场景中,可以根据生成的操作指令(上、下、左、右的移动、跳跃等)控制所述终端与所述虚拟物体间的信息交互处理,使卡通人物按照操作指令(上、下、左、右的移动、跳跃等)的控制,在预设的游戏场景中执行对应的上、下、左、右的移动、跳跃等操作。
本申请实施例中,当任一进程发生实时中断,重新发起恢复所述实时中断的进程;所述进程包括所述地图的绘制过程、所述虚拟空间的生成及所述至少两个终端的信息交互处理。当所述实时中断的条件满足通知策略时,发送通知至所述参与进程的终端。
本申请实施例中,可以获取当前终端的属性信息,及虚拟空间内的其他终端的属性信息。根据获取的不同属性信息,生成与不同属性信息相适配的虚拟空间。比如:对于不同屏幕尺寸、不同型号的终端,可以分别设置不同的清晰度。当一个终端是手机,一个是车载终端,或者不同的终端覆盖于不同的信号网络模式下,某一服务器或者服务器中的控制处理器或者服务器集群中的某一个用于处理该控制指令的硬件实体发送给终端的虚拟空间,可能在数据量上或者适配屏幕上有所不同。
图2为本申请一个实施例的方法的流程示意图。应该理解的是,虽然图2的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,图2中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
本申请实施例的基于虚拟空间场景的信息交互装置,如图9所示,所述装置包括:终端41和服务器42。处理逻辑的全部或部分可以都在服务器42执行,如图1中的处理逻辑10所示。服务器中包括:获取单元421,用于获取当前终端的位置信息;地图数据确定单元422,用于根据所述当前终端的位置信息,得到当前终端所在范围的地图数据;地图绘制单元423,用于根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果;模拟单元424,用于根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;控制单元425,用于采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
本申请实施例中,所述地图数据确定单元,进一步用于:根据全球定位系统GPS得到当前终端的经纬度信息,通过所述经纬度信息来标识当前终端的所述位置信息;
将以所述经纬度信息为中心向外辐射预定范围的数据确定为所述当前终端所在范围的地图数据。
本申请实施例中,所述地图绘制单元,进一步用于:在构建的三维空间中根据全国不同省份为待绘制的地图划分不同区域;获取当前终端在当前区域的实时位置,根据由经纬度信息标识的实时位置拉取第一地图数据;对第一地图数据进行解析,得到基础类数据和辅助类数据,所述基础类数据用于表征包括建筑信息在内的第一位置信息,所述辅助类信息用于表征包括道路、街道、河流、公交站点的第二位置信息;根据所述第一位置信息获取第一模型文件,根据所述第一模型文件建立第一模型实例;根据所述第二位置信息获取第二模型文件,根据所述第二模型文件建立第二模型实例;将所述第一模型实例和所述第二模型实例分别根据所述第一位置信息对应的经纬度信息和所述第二位置信息对应的经纬度信息、及基本属性和辅助标识信息在所述三维空间中进行地图绘制。
本申请实施例中,所述装置还包括:监测单元,用于:对所述当前终端在当前区域的实时位置变化进行监测;监测到所述当前终端的实时位置由第一实时位置移动至第二实时位置时通知所述地图绘制单元;所述地图绘制单元,进一步用于根据由第一实时位置移动至第二实时位置产生的位置变化参数,实时加载当前终端基于当前实时位置变化产生的第二地图数据,拉取第二地图数据。
本申请实施例中,所述装置还包括:监测单元,用于:对所述当前终端在当前区域的实时位置变化进行监测;监测到所述当前终端的实时位置由第一实时位置移动至第二实时位置时通知所述地图绘制单元;所述地图绘制单元,进一步用于根据第二实时位置产生第三地图数据,拉取第三地图数据。
本申请实施例中,所述装置还包括:位置随机切换单元,用于:在构建的三维空间中根据全国不同省份为待绘制的地图划分不同区域;将当前终端在当前区域的实时位置切换到其它省份的指定区域;根据同一省份的终端用户上限要求,为当前终端在所述指定区域中随机分配目标位置。
本申请实施例中,所述控制单元,进一步用于:采集至少两个终端在所述虚拟空间触发的操作,当所述操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理;根据碰撞检测策略生成第一操作指令,所述第一操作用于控制至少两个终端进行一对一交互模式,响应所述第一操作指令,进入终端间一对一的用户会话状态;根据群成员区域碰撞策略生成第二操作指令,所述第二操作用于控制至少两个终端进行一对多交互模式,响应所述第二操作指令,进入终端间一对多的用户群聊状态。
本申请实施例中,所述控制单元,进一步用于:采集至少两个终端在所述虚拟空间触发的操作,当所述操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理。根据碰撞检测策略生成第一操作指令,所述第一操作用于控制至少两个终端进行一对一交互模式,响应所述第一操作指令,进入终端间一对一的用户会话状态。
本申请实施例中,所述控制单元,进一步用于:采集至少两个终端在所述虚拟空间触发的操作,当所述操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理;根据群成员区域碰撞策略生成第二操作指令,所述第二操作用于控制至少两个终端进行一对多交互模式,响应所述第二操作指令,进入终端间一对多的用户群聊状态。
本申请实施例中,所述装置还包括:第一发送单元,用于发送所述虚拟空间至所述至少两个终端上进行显示,所述虚拟空间设置有默认的视角。第一接收单元,用于接收任一终端发送的视角控制指令。以及,第二发送单元,用于根据所述视角控制指令生成相应的虚拟空间,并发送给所述终端。
本申请实施例中,所述装置还包括:第二接收单元,用于接收任一终端发送的显示控制指令。以及第三发送单元,用于根据所述显示控制指令,生成相应的虚拟空间,并发送给其他终端,其中,所述显示控制指令控制所述终端在虚拟空间上的实时显示,所述实时显示包括可实时控制的头像显示、动作显示、文本标识、隐身显示的一种或多种的结合。
本申请实施例中,所述装置还包括:信息控制单元,用于采集任一终端在所述虚拟空间与所述虚拟物体间的交互操作,根据生成的操作指令控制所述终端与所述虚拟物体间的信息交互处理。
本申请实施例中,所述装置还包括:进程监控单元,用于当任一进程发生实时中断时,重新发起恢复所述实时中断的进程;所述进程包括所述地图的绘制过程、所述虚拟空间的生成及所述至少两个终端的信息交互处理。
本申请实施例中,所述装置还包括:通知单元,用于当所述实时中断的条件满足通知策略时,发送通知至所述参与进程的终端。
本申请实施例中,所述装置还包括:第一信息获取单元,用于获取当前终端的属性信息。第二信息获取单元,用于获取所述虚拟空间内的其他终端的属性信息。以及,空间生成单元,用于根据所述获取的不同属性信息,生成与不同属性信息相适配的虚拟空间。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中储存有计算机可读指令,计算机可读指令被处理器执行时,使得处理器执行以下步骤:获取当前终端的位置信息;根据当前终端的位置信息,得到当前终端所在范围的地图数据;根据地图数据在构建的三维空间中绘制地图,得到绘制结果;根据绘制结果,在三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;及采集至少两个终端在虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行根据当前终端的位置信息,得到当前终端所在范围的地图数据的步骤时,执行以下步骤:根据定位系统获取当前终端的经纬度信息,通过经纬度信息来标识当前终端的位置信息;及将以经纬度信息为中心向外辐射预定范围的数据确定为当前终端所在范围的地图数据。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行根据地图数据在构建的三维空间中绘制地图,得到绘制结果的步骤时,执行以下步骤:在构建的三维空间中为待绘制的地图划分不同区域;获取当前终端在当前区域的实时位置,根据由经纬度信息标识的实时位置拉取第一地图数据;对第一地图数据进行解析,得到基础类数据和辅助类数据,基础类数据包括第一位置信息,辅助类信息包括第二位置信息;根据第一位置信息获取第一模型文件,根据第一模型文件建立第一模型实例;根据第二位置信息获取第二模型文件,根据第二模型文件建立第二模型实例;及将第一模型实例和第二模型实例分别根据第一位置信息对应的经纬度信息和第二位置信息对应的经纬度信息、及基本属性和辅助标识信息在三维空间中进行地图绘制。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:对当前终端在当前区域的实时位置变化进行监测;及监测到当前终端的实时位置由第一实时位置移动至第二实时位置时,根据由第一实时位置移动至第二实时位置产生的位置变化参数,实时加载当前终端基于当前实时位置变化产生的第二地图数据,拉取第二地图数据。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:对当前终端在当前区域的实时位置变化进行监测;及监测到当前终端的实时位置由第一实时位置移动至第二实时位置时,根据第二实时位置产生第三地图数据,拉取第三地图数据。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:在构建的三维空间中划分不同区域;将当前终端在当前区域的实时位置切换到其它指定区域;及根据同一区域的终端用户上限要求,为当前终端在指定区域中随机分配目标位置。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行采集至少两个终端在虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理的步骤时,执行以下步骤:采集至少两个终端在虚拟空间触发的操作,当操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理;及根据碰撞检测策略生成第一操作指令,第一操作用于控制至少两个终端进行一对一交互模式,响应第一操作指令,进入终端间一对一的用户会话状态。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行采集至少两个终端在虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理的步骤时,执行以下步骤:采集至少两个终端在虚拟空间触发的操作,当操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理;根据群成员区域碰撞策略生成第二操作指令,第二操作用于控制至少两个终端进行一对多交互模式,响应第二操作指令,进入终端间一对多的用户群聊状态。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:发送虚拟空间至至少两个终端上进行显示,虚拟空间设置有默认的视角;接收任一终端发送的视角控制指令;及根据视角控制指令生成相应的虚拟空间,并发送给终端。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:接收任一终端发送的显示控制指令;及根据显示控制指令,生成相应的虚拟空间,并发送给其他终端,其中显示控制指令控制终端在虚拟空间上的实时显示,实时显示包括可实时控制的头像显示、动作显示、文本标识、隐身显示的一种或多种的结合。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:采集任一终端在虚拟空间与虚拟物体间的交互操作,根据生成的操作指令控制终端与虚拟物体间的信息交互处理。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:当任一进程发生实时中断,重新发起恢复实时中断的进程;进程包括地图的绘制过程、虚拟空间的生成及至少两个终端的信息交互处理。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:当实时中断的条件满足通知策略时,发送通知至参与进程的终端。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:获取当前终端的属性信息;获取虚拟空间内的其他终端的属性信息;及根据获取的不同属性信息,生成与不同属性信息相适配的虚拟空间。
一种非易失性的计算机可读存储介质,存储有计算机可读指令,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行以下步骤:获取当前终端的位置信息;根据当前终端的位置信息,得到当前终端所在范围的地图数据;根据地图数据在构建的三维空间中绘制地图,得到绘制结果;根据绘制结果,在三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;及采集至少两个终端在虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行根据当前终端的位置信息,得到当前终端所在范围的地图数据的步骤时,执行以下步骤:根据定位系统获取当前终端的经纬度信息,通过经纬度信息来标识当前终端的位置信息;及将以经纬度信息为中心向外辐射预定范围的数据确定为当前终端所在范围的地图数据。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行根据地图数据在构建的三维空间中绘制地图,得到绘制结果的步骤时,执行以下步骤:在构建的三维空间中为待绘制的地图划分不同区域;获取当前终端在当前区域的实时位置,根据由经纬度信息标识的实时位置拉取第一地图数据;对第一地图数据进行解析,得到基础类数据和辅助类数据,基础类数据包括第一位置信息,辅助类信息包括第二位置信息;根据第一位置信息获取第一模型文件,根据第一模型文件建立第一模型实例;根据第二位置信息获取第二模型文件,根据第二模型文件建立第二模型实例;及将第一模型实例和第二模型实例分别根据第一位置信息对应的经纬度信息和第二位置信息对应的经纬度信息、及基本属性和辅助标识信息在三维空间中进行地图绘制。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:对当前终端在当前区域的实时位置变化进行监测;及监测到当前终端的实时位置由第一实时位置移动至第二实时位置时,根据由第一实时位置移动至第二实时位置产生的位置变化参数,实时加载当前终端基于当前实时位置变化产生的第二地图数据,拉取第二地图数据。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:对当前终端在当前区域的实时位置变化进行监测;及监测到当前终端的实时位置由第一实时位置移动至第二实时位置时,根据第二实时位置产生第三地图数据,拉取第三地图数据。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:在构建的三维空间中划分不同区域;将当前终端在当前区域的实时位置切换到其它指定区域;及根据同一区域的终端用户上限要求,为当前终端在指定区域中随机分配目标位置。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行采集至少两个终端在虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理的步骤时,执行以下步骤:采集至少两个终端在虚拟空间触发的操作,当操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理;及根据碰撞检测策略生成第一操作指令,第一操作用于控制至少两个终端进行一对一交互模式,响应第一操作指令,进入终端间一对一的用户会话状态。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行采集至少两个终端在虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理的步骤时,执行以下步骤:采集至少两个终端在虚拟空间触发的操作,当操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理;根据群成员区域碰撞策略生成第二操作指令,第二操作用于控制至少两个终端进行一对多交互模式,响应第二操作指令,进入终端间一对多的用户群聊状态。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:发送虚拟空间至至少两个终端上进行显示,虚拟空间设置有默认的视角;接收任一终端发送的视角控制指令;及根据视角控制指令生成相应的虚拟空间,并发送给终端。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:接收任一终端发送的显示控制指令;及根据显示控制指令,生成相应的虚拟空间,并发送给其他终端,其中显示控制指令控制终端在虚拟空间上的实时显示,实时显示包括可实时控制的头像显示、动作显示、文本标识、隐身显示的一种或多种的结合。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:采集任一终端在虚拟空间与虚拟物体间的交互操作,根据生成的操作指令控制终端与虚拟物体间的信息交互处理。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:当任一进程发生实时中断,重新发起恢复实时中断的进程;进程包括地图的绘制过程、虚拟空间的生成及至少两个终端的信息交互处理。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:当实时中断的条件满足通知策略时,发送通知至参与进程的终端。
在一个实施例中,计算机可读指令使得处理器还执行以下步骤:获取当前终端的属性信息;获取虚拟空间内的其他终端的属性信息;及根据获取的不同属性信息,生成与不同属性信息相适配的虚拟空间。
如图10所示,计算机存储介质位于服务器的情况下,服务器作为硬件实体,包括处理器51、计算机存储介质52以及至少一个外部通信接口53;所述处理器51、计算机存储介质52以及外部通信接口53均通过总线54连接。其中,存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器实现基于虚拟空间场景的信息交互的方法。该内存储器中也可储存有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行基于虚拟空间场景的信息交互的方法。
这里需要指出的是:以上涉及终端和服务器的描述,与上述方法的描述是类似的,其有益效果与方法相同,不再赘述。对于本申请终端和服务器实施例中未披露的技术细节,请参照本申请方法流程描述的实施例所描述内容。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本申请上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (20)

  1. 一种基于虚拟空间场景的信息交互方法,执行于计算机设备,所述计算机设备包括存储器和处理器,所述方法包括:
    获取当前终端的位置信息;
    根据所述当前终端的位置信息,得到当前终端所在范围的地图数据;
    根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果;
    根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;及
    采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述当前终端的位置信息,得到当前终端所在范围的地图数据,包括:
    根据定位系统获取当前终端的经纬度信息,通过所述经纬度信息来标识当前终端的位置信息;及
    将以所述经纬度信息为中心向外辐射预定范围的数据确定为所述当前终端所在范围的地图数据。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果,包括:
    在构建的三维空间中为待绘制的地图划分不同区域;
    获取当前终端在当前区域的实时位置,根据由经纬度信息标识的实时位置拉取第一地图数据;
    对第一地图数据进行解析,得到基础类数据和辅助类数据,所述基础类数据包括第一位置信息,所述辅助类信息包括第二位置信息;
    根据所述第一位置信息获取第一模型文件,根据所述第一模型文件建立第一模型实例;
    根据所述第二位置信息获取第二模型文件,根据所述第二模型文件建立第二模型实例;及
    将所述第一模型实例和所述第二模型实例分别根据所述第一位置信息对应的经纬度信息和所述第二位置信息对应的经纬度信息、及基本属性和辅助标识信息在所述三维空间中进行地图绘制。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    对所述当前终端在当前区域的实时位置变化进行监测;及
    监测到所述当前终端的实时位置由第一实时位置移动至第二实时位置时,根据由第一实时位置移动至第二实时位置产生的位置变化参数,实时加载当前终端基于当前实时位置变化产生的第二地图数据,拉取第二地图数据。
  5. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    对所述当前终端在当前区域的实时位置变化进行监测;及
    监测到所述当前终端的实时位置由第一实时位置移动至第二实时位置时,根据第二实时位置产生第三地图数据,拉取第三地图数据。
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在构建的三维空间中划分不同区域;
    将当前终端在当前区域的实时位置切换到其它指定区域;及
    根据同一区域的终端用户上限要求,为当前终端在所述指定区域中随机分配目标位置。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理,包括:
    采集至少两个终端在所述虚拟空间触发的操作,当所述操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理;及
    根据碰撞检测策略生成第一操作指令,所述第一操作用于控制至少两个终端进行一对一交互模式,响应所述第一操作指令,进入终端间一对一的用户会话状态。
  8. 根据权利要求1至6任一项所述的方法,其特征在于,所述采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理,包括:
    采集至少两个终端在所述虚拟空间触发的操作,当所述操作符合碰撞检测和/或群成员区域碰撞策略时,触发信息交互处理;根据群成员区域碰撞策略生成第二操作指令,所述第二操作用于控制至少两个终端进行一对多交互模式,响应所述第二操作指令,进入终端间一对多的用户群聊状态。
  9. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    发送所述虚拟空间至所述至少两个终端上进行显示,所述虚拟空间设置有默认的视角;
    接收任一终端发送的视角控制指令;及
    根据视角控制指令生成相应的虚拟空间,并发送给所述终端。
  10. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    接收任一终端发送的显示控制指令;及
    根据所述显示控制指令,生成相应的虚拟空间,并发送给其他终端,其中所述显示控制指令控制所述终端在虚拟空间上的实时显示,所述实时显示包括可实时控制的头像显示、动作显示、文本标识、隐身显示的一种或多种的结合。
  11. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    采集任一终端在所述虚拟空间与所述虚拟物体间的交互操作,根据生成的操作指令控制所述终端与所述虚拟物体间的信息交互处理。
  12. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    当任一进程发生实时中断,重新发起恢复所述实时中断的进程;所述进程包括所述地图的绘制过程、所述虚拟空间的生成及所述至少两个终端的信息交互处理。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    当所述实时中断的条件满足通知策略时,发送通知至所述参与进程的终端。
  14. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取当前终端的属性信息;
    获取所述虚拟空间内的其他终端的属性信息;及
    根据获取的不同属性信息,生成与不同属性信息相适配的虚拟空间。
  15. 一种计算机设备,包括存储器和处理器,所述存储器中储存有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行以下步骤:
    获取当前终端的位置信息;
    根据所述当前终端的位置信息,得到当前终端所在范围的地图数据;
    根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果;
    根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;及
    采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
  16. 根据权利要求15所述的计算机设备,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行所述根据所述当前终端的位置信息,得到当前终端所在范围的地图数据的步骤时,执行以下步骤:
    根据定位系统获取当前终端的经纬度信息,通过所述经纬度信息来标识当前终端的位置信息;及
    将以所述经纬度信息为中心向外辐射预定范围的数据确定为所述当前终端所在范围的地图数据。
  17. 根据权利要求15所述的计算机设备,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行所述根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果的步骤时,执行以下步骤:
    在构建的三维空间中为待绘制的地图划分不同区域;
    获取当前终端在当前区域的实时位置,根据由经纬度信息标识的实时位置拉取第一地图数据;
    对第一地图数据进行解析,得到基础类数据和辅助类数据,所述基础类数据包括第一位置信息,所述辅助类信息包括第二位置信息;
    根据所述第一位置信息获取第一模型文件,根据所述第一模型文件建立第一模型实例;
    根据所述第二位置信息获取第二模型文件,根据所述第二模型文件建立第二模型实例;及
    将所述第一模型实例和所述第二模型实例分别根据所述第一位置信息对应的经纬度信息和所述第二位置信息对应的经纬度信息、及基本属性和辅助标识信息在所述三维空间中进行地图绘制。
  18. 一种非易失性的计算机可读存储介质,存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
    获取当前终端的位置信息;
    根据所述当前终端的位置信息,得到当前终端所在范围的地图数据;
    根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果;
    根据所述绘制结果,在所述三维空间中模拟当前终端所处地理位置的真实环境,得到用于信息交互的虚拟空间;及
    采集至少两个终端在所述虚拟空间触发的操作,根据生成的操作指令控制至少两个终端的信息交互处理。
  19. 根据权利要求18所述的计算机可读存储介质,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行所述根据所述当前终端的位置信息,得到当前终端所在范围的地图数据的步骤时,执行以下步骤:
    根据定位系统获取当前终端的经纬度信息,通过所述经纬度信息来标识当前终端的位置信息;及
    将以所述经纬度信息为中心向外辐射预定范围的数据确定为所述当前终端所在范围的地图数据。
  20. 根据权利要求18所述的计算机可读存储介质,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行所述根据所述地图数据在构建的三维空间中绘制地图,得到绘制结果的步骤时,执行以下步骤:
    在构建的三维空间中为待绘制的地图划分不同区域;
    获取当前终端在当前区域的实时位置,根据由经纬度信息标识的实时位置拉取第一地图数据;
    对第一地图数据进行解析,得到基础类数据和辅助类数据,所述基础类数据包括第一位置信息,所述辅助类信息包括第二位置信息;
    根据所述第一位置信息获取第一模型文件,根据所述第一模型文件建立第一模型实例;
    根据所述第二位置信息获取第二模型文件,根据所述第二模型文件建立第二模型实例;及
    将所述第一模型实例和所述第二模型实例分别根据所述第一位置信息对应的经纬度信息和所述第二位置信息对应的经纬度信息、及基本属性和辅助标识信息在所述三维空间中进行地图绘制。
PCT/CN2018/090437 2017-08-23 2018-06-08 一种基于虚拟空间场景的信息交互方法、计算机设备及计算机可读存储介质 WO2019037515A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/780,742 US11195332B2 (en) 2017-08-23 2020-02-03 Information interaction method based on virtual space scene, computer equipment and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710730847.9 2017-08-23
CN201710730847.9A CN109426333B (zh) 2017-08-23 2017-08-23 Information interaction method and apparatus based on virtual space scene

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/780,742 Continuation US11195332B2 (en) 2017-08-23 2020-02-03 Information interaction method based on virtual space scene, computer equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2019037515A1 true WO2019037515A1 (zh) 2019-02-28

Family

ID=65439779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090437 WO2019037515A1 (zh) 2017-08-23 2018-06-08 Information interaction method based on virtual space scene, computer device and computer-readable storage medium

Country Status (3)

Country Link
US (1) US11195332B2 (zh)
CN (1) CN109426333B (zh)
WO (1) WO2019037515A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111061929A (zh) * 2019-12-17 2020-04-24 用友网络科技股份有限公司 Site file management method and system for a terminal, terminal, and storage medium
CN112435346A (zh) * 2020-11-19 2021-03-02 苏州亿歌网络科技有限公司 Method, apparatus, terminal, and storage medium for adding coexisting multi-type scenes

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264393B (zh) * 2019-05-15 2023-06-23 联想(上海)信息技术有限公司 Information processing method, terminal, and storage medium
CN111625533B (zh) * 2019-10-21 2023-08-15 哈尔滨哈工智慧嘉利通科技股份有限公司 Location query method, apparatus, and storage medium for special populations
CN115151948A (zh) 2019-12-20 2022-10-04 奈安蒂克公司 Merging local maps from mapping devices
EP4064712A4 (en) * 2019-12-31 2022-12-07 Huawei Technologies Co., Ltd. COMMUNICATION METHOD AND DEVICE
CN111243080A (zh) * 2020-01-03 2020-06-05 郭宝宇 Spatial data acquisition method and electronic device
CN111652977A (zh) * 2020-04-17 2020-09-11 国网山西省电力公司晋中供电公司 Intelligent roaming method for three-dimensional substation scenes
CN111966216B (zh) * 2020-07-17 2023-07-18 杭州易现先进科技有限公司 Spatial position synchronization method, apparatus, system, electronic device, and storage medium
CN112085854B (zh) * 2020-09-11 2023-01-20 中德(珠海)人工智能研究院有限公司 Cloud information synchronous display system and method
CN112699223B (zh) * 2021-01-13 2023-09-01 腾讯科技(深圳)有限公司 Data search method, apparatus, electronic device, and storage medium
CN114910086A (zh) * 2021-02-09 2022-08-16 华为技术有限公司 Simulated high-precision map generation method, apparatus, and computer-readable storage medium
CN112991551A (zh) * 2021-02-10 2021-06-18 深圳市慧鲤科技有限公司 Image processing method, apparatus, electronic device, and storage medium
CN113101648B (zh) * 2021-04-14 2023-10-24 北京字跳网络技术有限公司 Map-based interaction method, device, and storage medium
US20240096033A1 (en) * 2021-10-11 2024-03-21 Meta Platforms Technologies, Llc Technology for creating, replicating and/or controlling avatars in extended reality
CN114185437A (zh) * 2021-12-16 2022-03-15 浙江小族智能科技有限公司 Method and apparatus for interaction between amusement vehicles and real scenes, storage medium, and terminal
CN114528043B (zh) * 2022-02-11 2023-07-14 腾讯科技(深圳)有限公司 File loading method, apparatus, device, and computer-readable storage medium
WO2023158344A1 (ru) * 2022-02-15 2023-08-24 Общество с ограниченной ответственностью "Биганто" Method and system for automated construction of a virtual scene from three-dimensional panoramas
CN116999806A (zh) * 2022-04-29 2023-11-07 腾讯科技(深圳)有限公司 Virtual object display method, apparatus, device, and storage medium
CN115454257B (zh) * 2022-11-10 2023-09-22 一站发展(北京)云计算科技有限公司 Immersive interaction method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635705A (zh) * 2008-07-23 2010-01-27 上海赛我网络技术有限公司 Interaction method based on a three-dimensional virtual map and characters, and system implementing the method
KR20120100433A (ko) * 2011-03-04 2012-09-12 삼성에스디에스 주식회사 Mobile information providing system using user information and three-dimensional GIS data
TW201335887A (zh) * 2012-02-24 2013-09-01 Chun-Ching Yang Virtual reality interaction system and method
CN106982240A (zh) * 2016-01-18 2017-07-25 腾讯科技(北京)有限公司 Information display method and apparatus

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102054121B (zh) * 2009-11-04 2012-12-05 沈阳迅景科技有限公司 Construction method for a 3D panoramic real-scene online game platform
CN105843396B (zh) * 2010-03-05 2019-01-01 索尼电脑娱乐美国公司 Method for maintaining multiple views in a shared stable virtual space
JP2013196616A (ja) * 2012-03-22 2013-09-30 Sharp Corp Information terminal device and information processing method
US20150245168A1 (en) * 2014-02-25 2015-08-27 Flock Inc. Systems, devices and methods for location-based social networks
WO2016172870A1 (zh) * 2015-04-29 2016-11-03 北京旷视科技有限公司 Video surveillance method, video surveillance system, and computer program product
US10486068B2 (en) * 2015-05-14 2019-11-26 Activision Publishing, Inc. System and method for providing dynamically variable maps in a video game
US20170153787A1 (en) * 2015-10-14 2017-06-01 Globalive Xmg Jv Inc. Injection of 3-d virtual objects of museum artifact in ar space and interaction with the same
CN105373224B (zh) * 2015-10-22 2016-06-22 山东大学 Mixed reality game system and method based on pervasive computing
CN105807931B (zh) * 2016-03-16 2019-09-17 成都电锯互动科技有限公司 Virtual reality implementation method


Also Published As

Publication number Publication date
CN109426333B (zh) 2022-11-04
US11195332B2 (en) 2021-12-07
CN109426333A (zh) 2019-03-05
US20200175760A1 (en) 2020-06-04

Similar Documents

Publication Publication Date Title
WO2019037515A1 (zh) Information interaction method based on virtual space scene, computer device and computer-readable storage medium
WO2018113639A1 (zh) Interaction method between user terminals, terminal, server, system, and storage medium
EP3332565B1 (en) Mixed reality social interaction
US11043033B2 (en) Information processing device and information processing method capable of deciding objects arranged in virtual space generated based on real space
EP4191385A1 (en) Surface aware lens
RU2621644C2 (ru) Мир массового одновременного удаленного цифрового присутствия
US20170084084A1 (en) Mapping of user interaction within a virtual reality environment
US9824274B2 (en) Information processing to simulate crowd
US20120192088A1 (en) Method and system for physical mapping in a virtual world
WO2017143303A1 (en) Apparatuses, methods and systems for sharing virtual elements
JP2020091504A (ja) Avatar display system in virtual space, avatar display method in virtual space, and computer program
EP2974509B1 (en) Personal information communicator
CN102054289A (zh) 3D virtual community construction method based on panoramic real scenes and geographic information
KR20150026367A (ko) Method and apparatus for providing services using screen mirroring
CN111462334A (zh) Intelligent interactive exhibition hall system
CN109427219B (zh) Disaster-prevention learning method and apparatus based on an augmented-reality education scene transition model
JP2022507502A (ja) Augmented reality (AR) imprinting method and system
JP7304639B2 (ja) Method and system for enabling enhanced user-to-user communication in digital reality
CN116758201B (zh) Rendering processing method, device, system, and computer storage medium for three-dimensional scenes
CN111918114A (zh) Image display method, apparatus, display device, and computer-readable storage medium
Yao et al. Development overview of augmented reality navigation
CN112468865B (zh) Video processing method, VR terminal, and computer-readable storage medium
CN211044184U (zh) VR navigation system
US20230316659A1 (en) Traveling in time and space continuum
WO2024037001A1 (zh) Interactive data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18848477

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18848477

Country of ref document: EP

Kind code of ref document: A1