WO2018113639A1 - Interaction method between user terminals, terminal, server, system and storage medium

Interaction method between user terminals, terminal, server, system and storage medium

Info

Publication number
WO2018113639A1
WO2018113639A1 (PCT/CN2017/117058)
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
location information
server
user terminal
scene image
Prior art date
Application number
PCT/CN2017/117058
Other languages
English (en)
French (fr)
Inventor
李斌
陈晓波
陈郁
罗程
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2018113639A1
Priority to US16/364,370 (US10636221B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0269Targeted advertisements based on user profile or attribute
    • G06Q30/0271Personalized advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0277Online advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • The embodiments of the present invention relate to the field of communications technologies, and specifically to an interaction method between user terminals, a terminal, a server, a system, and a storage medium.
  • the embodiments of the present invention provide an interaction method, a terminal, a server, a system, and a storage medium between user terminals, which can implement interaction between virtual images.
  • The interaction method between user terminals provided by the embodiments of the present invention is applied to a terminal and includes: acquiring geographic location information of a first avatar corresponding to a first user terminal, and sending the geographic location information of the first avatar to a server, where a mapping table between avatars and geographic location information is stored in the server; receiving information of a second avatar within a preset distance range corresponding to the geographic location information sent by the server; sending, according to the information of the second avatar, an interaction request for the second avatar to the server, so that the server establishes a first connection with the first user terminal and a second connection with a second user terminal corresponding to the second avatar; acquiring an interactive scene image, where the interactive scene image is a real scene image corresponding to the first user terminal, and rendering the first avatar and the second avatar into the interactive scene image; and acquiring interactive content and sending the interactive content to the server through the first connection, so that the server sends the interactive content to the second user terminal through the second connection.
  • the embodiment of the invention further provides a method for interacting between user terminals, which is applied to a server, and includes:
  • An embodiment of the present invention provides a terminal including one or more memories and one or more processors, wherein the one or more memories store one or more instruction modules configured to be executed by the one or more processors, and wherein the one or more instruction modules include:
  • An obtaining unit configured to acquire geographic location information of the first virtual image corresponding to the first user terminal
  • a sending unit configured to send geographic location information of the first avatar to a server; the server stores a mapping table between the avatar and the geographic location information;
  • a receiving unit configured to receive information about a second avatar within a preset distance range corresponding to the geographic location information sent by the server;
  • the sending unit is further configured to send, according to the information of the second avatar, an interaction request for the second avatar to the server, so that the server establishes a first connection with the first user terminal and a second connection with the second user terminal corresponding to the second avatar;
  • the acquiring unit is further configured to acquire an interactive scene image, where the interactive scene image is a real scene image corresponding to the first user terminal;
  • a processing unit configured to render the first avatar and the second avatar into the interactive scene image
  • the obtaining unit is further configured to acquire interactive content
  • the sending unit is further configured to send the interactive content to the server by using the first connection, so that the server sends the interactive content to the second user terminal by using the second connection.
  • Embodiments of the present invention provide a server including one or more memories and one or more processors, wherein the one or more memories store one or more instruction modules configured to be executed by the one or more processors, and wherein the one or more instruction modules include:
  • a receiving unit configured to receive geographic location information of the first avatar corresponding to the first user terminal sent by the first user terminal;
  • a searching unit configured to search, in a mapping table between the avatar and the geographic location information, information of the second avatar within a preset distance range corresponding to the geographic location information;
  • a sending unit configured to send information about the second avatar to the first user terminal
  • the receiving unit is further configured to receive an interaction request for the second avatar that is sent by the first user terminal according to the information of the second avatar;
  • a generating unit configured to establish, according to the interaction request, a first connection with the first user terminal and a second connection with the second user terminal corresponding to the second avatar, and further configured to generate an interactive scene image according to the interaction request, where the interactive scene image is a real scene image corresponding to the first user terminal;
  • the sending unit is further configured to send the interactive scene image to the first user terminal, so that the first user terminal renders the first avatar and the second avatar into the interactive scene image.
  • the receiving unit is further configured to receive the interactive content sent by the first user terminal through the first connection, and the sending unit is further configured to send the interactive content to the second user terminal through the second connection.
  • An embodiment of the present invention further provides an interaction system between avatars, including the foregoing terminal, and the foregoing server.
  • Embodiments of the present invention also provide a non-transitory computer readable storage medium storing computer readable instructions that cause at least one processor to perform the method as described above for a terminal.
  • Embodiments of the present invention also provide a non-transitory computer readable storage medium storing computer readable instructions that cause at least one processor to perform the method as described above for application to a server.
  • FIG. 1A is a schematic diagram of a scenario of an interaction method between avatars according to an embodiment of the present invention
  • FIG. 1B is a schematic diagram of a system according to another embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for interaction between avatars according to an embodiment of the present invention
  • FIG. 2a-2 is a flowchart of a method for interaction between user terminals applied to a terminal according to another embodiment of the present invention;
  • FIG. 2b is a schematic diagram of a dressing interface on the terminal side according to an embodiment of the present invention.
  • 2c is a schematic diagram of an interactive scenario rendered by an embodiment of the present invention.
  • 2d is a schematic diagram of another interactive scenario rendered by an embodiment of the present invention.
  • 2e is a schematic diagram of another interactive scenario rendered by an embodiment of the present invention.
  • FIG. 3a is another schematic flowchart of a method for interaction between avatars according to an embodiment of the present invention.
  • FIG. 3b is a flowchart of a method for interaction between user terminals applied to a server according to another embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • FIG. 5 is another schematic structural diagram of a terminal according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • FIG. 7 is another schematic structural diagram of a server according to an embodiment of the present invention.
  • Embodiments of the present invention provide an interaction method between avatars, a terminal, a server, and a system, which can realize interaction between avatars.
  • A specific implementation scenario of the interaction method between avatars according to the embodiment of the present invention may be as shown in FIG. 1A, including a terminal and a server.
  • The location information may be, for example, a latitude and longitude value or a geographic coordinate value, and the server may store each avatar reported by each terminal, together with its location information, in a database.
  • The first terminal may send the location information of the first avatar to the server, and the server may feed back to the first terminal the information of second avatars within a preset distance (for example, 50 meters, 100 meters) of the location indicated by the location information. After receiving the information of a second avatar, the first terminal may acquire an interaction request initiated by the first avatar to the second avatar, send the interaction request to the server, and receive the interaction scene generated by the server according to the interaction request.
  • The first terminal may then render the first avatar and the second avatar into the interaction scene to implement interaction between avatars, where the interaction may include chat interaction such as voice, video, and text, and may also include location-traversal interaction. Since interaction between avatars can be realized, the application scenarios of the embodiments of the present invention include, but are not limited to, social networking, advertising, games, retail, and so on.
  • the system 100 may include: a first user terminal 101, a second user terminal 102, and an interaction platform 103.
  • the interactive platform 103 includes a map server 105 and an interactive server 106.
  • the system 100 may include a plurality of first user terminals and a plurality of second user terminals.
  • the first user terminal 101 and the second user terminal 102 are used as an example to describe the solutions of the embodiments.
  • Each user terminal can create an avatar on the user terminal.
  • The first avatar corresponding to the first user terminal may be a virtual character created by the user of the first user terminal based on his or her facial information; the first user terminal then transmits the feature data of the avatar created by the user to the interactive platform 103.
  • the interactive server 106 in the interactive platform 103 saves the feature data of the first avatar.
  • the interactive platform includes an interactive server and a map server, and the interactive server can be an interactive server providing an augmented reality interactive service, and the map server can be a server for providing a map service to a third party.
  • the terminal sends the feature data of the created avatar to the interactive server in the interactive platform, and the feature data of the reported avatar includes, for example, facial feature data, facial map, and dressing data.
  • the interactive server associates the identity of the avatar with the feature data of the avatar.
  • the user can create a first avatar in the interactive application.
  • the interactive application is an augmented reality interactive APP, in which the user can create an avatar and send the created avatar feature data to the interactive server.
  • the interactive application may also integrate a map function.
  • the interactive application sends the geographic location information of the first avatar corresponding to the first user terminal to the interactive platform 103.
  • the geographic location information may be, for example, a latitude and longitude value or a geographic coordinate value.
  • The interaction server 106 in the interactive platform 103 stores the geographic location information of the first avatar sent by the first user terminal 101 in association with the identifier of the first avatar.
  • the interactive server 106 also stores the identifiers of the respective avatars reported by the respective terminals and their geographical location information in association with each other in the database. That is, the interaction server 106 stores a mapping table between the avatar and the geographical location information.
  • After the interaction server 106 receives the geographic location information of the first avatar sent by the first user terminal 101, the server searches the mapping table between avatars and geographic location information for second avatars within a preset distance range (for example, 50 meters, 100 meters) of the indicated position; there may be one or more such second avatars. The interaction server 106 acquires the information of the one or more second avatars, where the information of each second avatar may include feature data and geographic location information, and sends the acquired information of the one or more second avatars to the first user terminal 101.
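  • As a non-limiting illustration of the proximity lookup just described, the following Python sketch models the server's mapping table as an in-memory dictionary and selects second avatars within the preset distance by great-circle distance. The table structure, function names, and 50-meter default are assumptions for illustration; the text does not prescribe a storage engine or a distance formula.

```python
import math

# Hypothetical stand-in for the server's <avatar, location information>
# mapping table; avatar_id -> (latitude, longitude).
avatar_locations = {}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_nearby_avatars(lat, lon, preset_distance_m=50.0, exclude=None):
    """Return ids of second avatars within the preset distance of (lat, lon)."""
    return [
        avatar_id
        for avatar_id, (a_lat, a_lon) in avatar_locations.items()
        if avatar_id != exclude
        and haversine_m(lat, lon, a_lat, a_lon) <= preset_distance_m
    ]
```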
  • After receiving the geographic location information of the first avatar, the map server 105 in the interactive platform 103 acquires the map data corresponding to the geographic location information and transmits the acquired map data to the first user terminal 101.
  • the first user terminal 101 displays a map according to the received map data, and displays the one or more second avatars on the map according to the received information of the one or more second avatars.
  • the user selects one of the second avatars, and the interactive application on the first user terminal 101 obtains the interactive scene image in response to the user's selection operation, and renders the first avatar and the selected second avatar into the interactive scene image.
  • The interactive application also sends an interaction request to the interaction server 106 in the interactive platform; in response to the interaction request, the interaction server 106 establishes a first connection with the first user terminal and a second connection with the second user terminal corresponding to the selected second avatar.
  • the first user terminal sends the interactive content to the interactive server 106 via the first connection, and the interactive server 106 transmits the interactive content to the second user terminal via the second connection.
  • FIG. 2a-1 will describe the interaction method between the avatars provided by the embodiment of the present invention from the perspective of the terminal.
  • the method is applied to the first terminal or the second terminal in FIG. 1A, as shown in FIG. 2a-1.
  • the method of this embodiment includes the following steps:
  • Step 201 Obtain location information of the first avatar established by the user.
  • the first avatar may be established by the user according to his own facial information. That is, the user can scan the face using the face scanning system of the terminal to obtain facial feature data and facial maps, and the facial feature data may include feature data of the mouth, nose, eyes, eyebrows, face, chin, etc.; The feature data and the face map are merged into the face of the preset avatar model; finally, the dressing is selected from the dressing interface provided by the terminal, and the selected dress is merged into the corresponding part of the preset avatar model.
  • the dressing interface can be as shown in FIG. 2b, and the dressing provided in the dressing interface includes, but is not limited to, hairstyles, clothes, pants, shoes, and the like.
  • the first avatar may also be a related avatar that the user directly selects from the system.
  • the first avatar may be a character, an animal, or another cartoon image.
  • the first avatar may be three-dimensional or planar, and is not specifically limited herein.
  • When the user establishes the first avatar, the corresponding terminal may obtain the location information of the avatar; the location information may be the location information of the terminal, or the location information of the avatar after a location traversal, and may be a latitude and longitude value or a geographic coordinate value.
  • Specifically, the terminal may obtain the location information by using a software development kit (SDK) of a map application, or by using the application programming interface (API) of a location-related application provided by the system (for example, a social application, a game application, or a life service application); this is not specifically limited here.
  • each terminal may send the avatar established by its user and its location information to the server.
  • the server itself can also set some avatars and set location information for these avatars.
  • The server may establish a mapping table of <avatar, location information> according to the avatar information reported by each terminal and the avatar information set by the server itself, and store the established mapping table in the database.
  • the avatar created by the user of the terminal may be referred to as a first avatar, and the other avatars may be referred to as a second avatar.
  • The second avatar may be established by a user of another terminal (in which case the avatar can be manipulated by that terminal, for example, changing the avatar's location, replacing the avatar's dress, or changing the avatar's online/offline state), or may be set by the server (in which case the avatar can be manipulated by the server).
  • the terminal may send the location information of the first avatar to the server.
  • Step 202 Send location information of the first avatar to a server.
  • the terminal can also send preset distance information to the server, and the preset distance can be customized according to actual needs, for example, 40 meters, 80 meters, and the like.
  • Step 203 Receive information of a second avatar within a preset distance range of the location indicated by the location information.
  • After receiving the location information and the preset distance information, the server may query, according to the mapping table of <avatar, location information> stored in the database, the second avatars within the preset distance range of the location indicated by the location information.
  • For example, when the location information indicates east longitude 113°49' and north latitude 22°34' and the preset distance information is 40 meters, the server queries all second avatars within 40 meters of that position; when the preset distance information is 80 meters, the server queries all second avatars within 80 meters of that position. There may be multiple matching second avatars, and the server sends the information of the queried second avatars to the terminal.
  • In some embodiments, the server may determine the specific information returned to the terminal according to the size of the preset distance and/or the number of second avatars found. For example, when the preset distance is less than a certain distance threshold (for example, 50 or 100 meters), or when the number of second avatars found is less than a certain number threshold (for example, 5 or 10), the server may return the character location list information of the second avatars (including each specific second avatar and its location information). When the preset distance is greater than or equal to the distance threshold, or the number of second avatars found is greater than or equal to the number threshold, the server may return location quantity aggregation information of the second avatars (which may include only the number of second avatars aggregated at each location, without specific avatar information); for example, aggregation may be carried out as <latitude and longitude center, quantity> pairs. For aggregated information, the terminal may acquire the specific second avatars and detailed location information of a particular aggregation according to the user's operation (for example, clicking on an aggregation location).
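  • A minimal sketch of this list-versus-aggregation decision in Python follows. The concrete threshold values, the grid-cell bucketing, and the response fields are assumptions; the text only states that a character location list is returned below the thresholds and <latitude and longitude center, quantity> aggregates otherwise.

```python
from collections import defaultdict

DISTANCE_THRESHOLD_M = 50  # hypothetical thresholds; the text only gives
COUNT_THRESHOLD = 10       # examples such as 50/100 meters and 5/10 avatars

def build_response(matches, preset_distance_m, cell_deg=0.001):
    """matches: list of (avatar_id, feature_data, lat, lon) within range."""
    if preset_distance_m < DISTANCE_THRESHOLD_M and len(matches) < COUNT_THRESHOLD:
        # Character location list: each specific second avatar and its location.
        return {"type": "list",
                "avatars": [{"id": i, "features": f, "lat": la, "lng": lo}
                            for i, f, la, lo in matches]}
    # Aggregate as <latitude and longitude center, quantity>: bucket matches
    # into grid cells and report each cell's center point and count only.
    buckets = defaultdict(list)
    for _, _, la, lo in matches:
        buckets[(round(la / cell_deg), round(lo / cell_deg))].append((la, lo))
    return {"type": "aggregate",
            "clusters": [{"lat": sum(p[0] for p in pts) / len(pts),
                          "lng": sum(p[1] for p in pts) / len(pts),
                          "count": len(pts)}
                         for pts in buckets.values()]}
```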
  • the user can select a second avatar from the plurality of second avatars, and the terminal determines the second avatar selected by the user.
  • Step 204 Acquire an interaction request initiated by the first avatar to the second avatar.
  • Step 205 Receive an interaction scenario generated by the server according to the interaction request.
  • After the interaction request is acquired, it may be sent to the server. The server may determine, according to the online status it maintains for each avatar, whether the second avatar is online; if not online, the server directly returns a request failure notification message to the terminal; if online, the server generates an interaction scenario according to the interaction request, and the terminal receives the interaction scenario generated by the server.
  • the interaction request may be a chat interaction request (such as voice, video, text, and the like), and correspondingly, the generated interaction scene may default to a real scene image corresponding to the location information of the first avatar.
  • the real scene image may be a map image corresponding to the position information, or a street view image corresponding to the position information, or a real scene image corresponding to the position information.
  • The interaction request may also be a location-traversal interaction request, in which case the interaction request may include specified location information; correspondingly, the generated interaction scenario may be a real scene image corresponding to the specified location information. The specified location may be any location on the map, or any location on the map that has a street view, and the real scene image corresponding to the specified location information may be a map image or a street view image.
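  • The choice of interactive scene image by request type could look like the following sketch. The helper functions and the street-view-first fallback are assumptions used for illustration only.

```python
def street_view_at(location):
    """Hypothetical imagery lookup: street view image at location, or None."""
    return None

def map_image_at(location):
    """Hypothetical imagery lookup: map image around location."""
    return {"kind": "map", "location": location}

def select_scene_image(request, first_avatar_location):
    if request.get("type") == "traverse":
        # Location-traversal request: scene at the specified location.
        location = request["specified_location"]
    else:
        # Chat request (voice, video, text): default to the real scene
        # corresponding to the first avatar's own location.
        location = first_avatar_location
    return street_view_at(location) or map_image_at(location)
```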
  • Step 206 Render the first avatar and the second avatar into the interactive scene to implement interaction between avatars.
  • the rendered interactive scene may be a fusion display of the avatar and the map.
  • the first avatar S1 and the selected second avatar S2 are rendered to the map image around the first user terminal.
  • The rendered interactive scene may also be a fusion display of the avatar and the street scene, for example, as shown in FIG. 2d, rendering the first avatar S1 and the second avatar S2 onto the street view image between two buildings; or the rendered interactive scene may be a fusion display of the avatar and the real scene, for example, rendering the first avatar S1 and the second avatar S2 into the office scene.
  • FIGS. 2c to 2e are only some effect display diagrams of the interactive scene, and in practice, it does not constitute a limitation on the final display effect.
  • Each avatar may perform chat interaction in the interactive scene, for example, interactions such as voice, text, and video; each avatar may also perform location-traversal interaction in the interactive scene, for example, the avatars may traverse to the same place that has street views, to simulate the effect of exploring the street view together.
  • When a second avatar is updated, the terminal will receive an update notification message sent by the server, and the terminal may update the second avatar rendered in the interactive scene according to the update notification message.
  • The scenarios applicable to the method of this embodiment include, but are not limited to, social networking, games, advertising, retail, life services, and travel. The application of the method provided in this embodiment is explained below for several of these scenarios.
  • The method in this embodiment can be applied in social interaction: different users can establish their respective avatars in the same social application; avatars can add each other as friends, and avatars that are friends can initiate chat interactions such as text and voice.
  • the terminal can render the interactive avatar to an interactive scene such as a map, or a street view, or a real scene, to present an interactive effect of virtual and real.
  • The method of this embodiment can be applied in a game: the user can establish his or her own avatar in the game application, and then compete or match with avatars established by other users in the game application, or compete with avatars configured by the server.
  • the terminal can render the interactive avatar into interactive scenes such as street scene and real scene, so as to present an interactive effect of virtual and real.
  • the method of the embodiment can be applied to an advertisement, and the user can establish an avatar in the advertisement application.
  • For example, the terminal can render the user's avatar together with avatars configured by the server, and the street view screen is updated in real time with the user's operations (for example, forward, backward, turn), so as to present the effect of viewing and traveling together with the avatars, thereby simulating the scenic-spot tourism experience.
  • In this embodiment, the terminal may acquire the location information of the first avatar established by the user and send it to the server; it then receives the information of second avatars within the preset distance range of the location indicated by the location information, and finally renders the first avatar and the second avatar into the interactive scene. This embodiment realizes the interaction between avatars by rendering different avatars into the same real interactive scene, and expands the application scenarios of virtual-real combination technologies.
  • FIG. 2a-2 is a flowchart of a method for interaction between user terminals according to an embodiment of the present application. The method is applied to the first user terminal 101 of Figure 1B. As shown in Figure 2a-2, the method can include the following steps.
  • S211 Acquire geographic location information of the first avatar corresponding to the first user terminal, and send the geographic location information of the first avatar to the server; the server stores a mapping table between avatars and geographic location information.
  • As mentioned above, the interactive application may also integrate a map function; when the user operates the map icon in the interactive application, for example, clicks the map icon to open the map, the interactive application sends the geographic location information of the first avatar to the interactive platform 103.
  • the interactive application in each terminal can transmit the geographic location information of its corresponding avatar to the interactive platform 103 in response to the user's operation on the map icon.
  • The interaction server 106 may establish a mapping table of avatars and location information according to the geographic location information of the avatar reported by each terminal, where each avatar is identified by its avatar ID and the location information is the geographic location data of the avatar. When the map is opened, the geographic location data of the avatar is sent to the interactive platform 103, and the server establishes the corresponding <avatar, location information> mapping entry; when the user exits, the terminal sends an exit request to the server, and the server deletes the corresponding <avatar, location information> mapping entry, as sketched below.
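  • A minimal sketch of this mapping-table lifecycle, assuming a simple in-memory dictionary keyed by avatar ID (the text does not specify the storage):

```python
location_table = {}  # avatar ID -> geographic location data

def on_map_opened(avatar_id, location):
    """User opens the map: report the location and store the mapping entry."""
    location_table[avatar_id] = location

def on_exit_request(avatar_id):
    """User exits: delete the corresponding <avatar, location information> entry."""
    location_table.pop(avatar_id, None)
```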
  • the interaction server 106 also records the status information of each avatar.
  • the interaction server 106 sets the state of the corresponding avatar to be online, and marks the corresponding avatar on the map.
  • the interactive server 106 sets the state of the corresponding avatar to an offline state, indicating that the corresponding avatar is not on the map.
  • The interaction server 106 also stores the feature data of each avatar, including the ID of each avatar, the facial feature data of the avatar, the face map, and the dressing feature data.
  • S212 Receive information of the second avatar within a preset distance range corresponding to the geographical location information sent by the server.
  • The interaction server 106 searches, according to the saved mapping table of <avatar, location information>, for one or more second avatars within the preset distance range corresponding to the geographic location information, and acquires the information of the one or more second avatars.
  • S213 Send, according to the information of the second avatar, an interaction request for the second avatar to the server, so that the server establishes a first connection with the first user terminal, and establishes and the second A second connection between the second user terminals corresponding to the avatar.
  • The map server 105 acquires the corresponding map data according to the geographic location information of the first avatar and sends the map data to the first user terminal; the interaction server 106 transmits the found information of the one or more second avatars to the first user terminal.
  • the interactive application on the first user terminal presents a map according to the map data, and renders the one or more second avatars onto the map according to the information of the one or more second avatars.
  • the user can select one of the second avatars to interact.
  • the interactive application on the first user terminal sends an interaction request of the first avatar and the selected second avatar to the interaction server 106.
  • The interaction server 106 establishes, according to the interaction request, a first connection between the first user terminal and the interaction server 106, and a second connection between the second user terminal corresponding to the selected second avatar and the interaction server 106.
  • the user may also select a specific interaction form, and the corresponding interactive request carries a specific interactive form identifier, wherein the interaction form includes a voice chat interaction, a text interaction, and the like.
  • In some embodiments, the interaction server 106 determines whether the selected second avatar is online before establishing the first connection and the second connection; it establishes the first connection and the second connection when the second avatar is online, and directly returns a notification message of interaction request failure to the first user terminal when the second avatar is not online.
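  • The online check and connection setup above might be sketched as follows; the session bookkeeping, method names, and transport are assumptions, since the text does not specify how the connections are realized.

```python
class InteractionServerSketch:
    def __init__(self):
        self.online = {}    # avatar_id -> terminal endpoint, if online
        self.sessions = {}  # session_id -> (first_connection, second_connection)

    def handle_interaction_request(self, first_terminal, second_avatar_id):
        peer = self.online.get(second_avatar_id)
        if peer is None:
            # Second avatar offline: return a failure notification, no connections.
            first_terminal.notify("interaction request failed")
            return None
        first_conn = self.open_connection(first_terminal)  # first connection
        second_conn = self.open_connection(peer)           # second connection
        session_id = len(self.sessions) + 1
        self.sessions[session_id] = (first_conn, second_conn)
        return session_id

    def open_connection(self, terminal):
        # Placeholder: a real server would open a persistent channel here.
        return terminal
```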
  • S214 Acquire an interactive scene image, where the interactive scene image is a real scene image corresponding to the first user terminal, and render the first avatar and the second avatar into the interactive scene image; acquire interactive content and send the interactive content to the server through the first connection, so that the server sends the interactive content to the second user terminal through the second connection.
  • After the interactive application on the first user terminal sends the interaction request to the interaction server 106, the interactive application acquires an interactive scene image and renders the first avatar and the selected second avatar into the interactive scene image.
  • The first user terminal sends the interactive content to the interaction server 106 through the first connection, and the interaction server 106 sends the interactive content through the second connection to the second user terminal corresponding to the second avatar.
  • In this embodiment, the first user terminal may acquire the location information of the first avatar and send it to the interaction server on the interactive platform; then receive the information of second avatars within the preset distance range of the location indicated by the location information, and send to the interaction server an interaction request initiated by the first avatar to a second avatar, so that the server establishes the first connection and the second connection. The first user terminal acquires the interactive scene image and renders the first avatar and the second avatar into the interactive scene image; it then sends the interactive content to the interaction server through the first connection, and the interaction server sends the interactive content through the second connection to the second user terminal corresponding to the second avatar. In this way, the interacting avatars are rendered into the same interactive scene image and the subsequent interaction is performed, applying virtual-real combination to social interaction scenarios.
  • the transmitting the geographic location information of the first avatar to the server includes:
  • the information of the second avatar includes geographic location information of the second avatar
  • the method further includes:
  • the sending, according to the information of the second avatar, the interaction request for the second avatar to the server includes:
  • the geographic location information of the second avatar includes geographic location information of each second avatar, and the information of the second avatar further includes each Characteristic data of the second avatar;
  • the rendering the second avatar on the map according to the geographic location information of the second avatar includes:
  • In some embodiments, the interaction server 106 can determine the specific information returned to the terminal according to the size of the preset distance and/or the number of second avatars found. For example, when the preset distance is less than a certain distance threshold (for example, 50 or 100 meters), the information of the second avatars returned to the terminal includes the character location list information (including the specific feature data and location information of each second avatar). For another example, when the number of second avatars found is less than a certain number threshold (for example, 5 or 10), the information of the second avatars returned to the terminal likewise includes the character location list information.
  • In this case, the terminal receives the information of the second avatars fed back by the interaction server 106 as character location list information, which includes the feature data and location data of each second avatar; for each second avatar, the terminal determines the position of the second avatar on the map according to its location data, and displays the second avatar at that position according to its feature data.
  • the geographic location information of the second avatar includes one or more geographic location information, and the information of the second avatar further includes the one or The number of second avatars corresponding to each geographic location information in the plurality of geographic location information;
  • the rendering the second avatar on the map according to the geographic location information of the second avatar includes:
  • Each second avatar is displayed on the map according to feature data and geographic location information of each second avatar.
  • the interactive server 106 can determine the specific information returned to the terminal according to the size of the preset distance and/or the number of the second avatars queried.
  • When the preset distance is greater than or equal to the distance threshold, or the number of second avatars found is greater than or equal to the number threshold, the information of the second avatars returned to the first user terminal includes one or more geographic location information items, where each location is an aggregated location of a plurality of second avatars, for example, the central location of a plurality of second avatars (also referred to as an aggregation location), together with the number of second avatars corresponding to each geographic location information item.
  • When the information of the second avatars returned by the interaction server 106 to the first user terminal is one or more geographic location information items and the number of second avatars corresponding to each, the first user terminal determines each location on the map according to the geographic location information and displays an identifier at the corresponding location on the map, the identifier including the number of second avatars corresponding to that geographic location information. For example, an aggregation identifier and the number of aggregated second avatars are displayed at the latitude and longitude center on the displayed map.
  • When the user clicks an identifier, the terminal sends to the interaction server 106 a request to acquire the feature data and location information of the second avatars corresponding to the identifier; it then receives the feature data and location information of each second avatar corresponding to the identifier sent by the interaction server 106, and renders each second avatar on the map according to its feature data and location information.
  • the obtaining an interactive scene image includes:
  • In this case, the first user terminal invokes its camera to capture a real scene image of the environment in which the first user terminal is located, and renders the first avatar and the second avatar into the captured real scene image.
  • the first avatar S1 and the second avatar S2 are rendered into an office scene.
  • the obtaining an interactive scene image includes:
  • the interactive scene image is provided by a server (interactive platform 103), in particular, by a map server in the interactive platform 103.
  • the interactive scene image is a real scene image corresponding to location information of the first avatar corresponding to the first user terminal;
  • the rendering the first avatar and the second avatar into the interactive scene image comprises:
  • When the interactive scene image is provided by the map server 105, the interactive scene image is a real scene image corresponding to the location information of the first avatar, that is, a real scene image at the location of the first user terminal. The rendered result may be a fusion display of the avatars and the map, for example, as shown in FIG. 2c, rendering the first avatar S1 and the selected second avatar S2 onto the map image around the first user terminal; or a fusion display of the avatars and street views, as shown in FIG. 2d, rendering the first avatar S1 and the second avatar S2 onto a street view image between two buildings.
  • the interaction request carries the specified location information corresponding to the first user terminal, where the interactive scene image is a real scene image at a location corresponding to the specified location information;
  • the rendering of the first avatar and the second avatar to the interactive scene includes:
  • In this case, the interaction request may also be a location-traversal interaction request, where the interaction request may carry specified location information, and the location indicated by the specified location information may be, for example, a traversed-to location.
  • the generated interactive scene may be a real scene image corresponding to the location indicated by the specified location information, and the designated location may be any location on the map, or the designated location may also be any location on the map with a street view.
  • the real scene image corresponding to the specified location information may be a map image or a street view image.
  • the feature data of the avatar includes facial feature data, facial maps, and dress up data.
  • the method further includes:
  • the user can create a first avatar in an interactive application on the first terminal.
  • the interactive application of the first terminal acquires the facial feature data of the first avatar in response to the user's scanning operation on the face, and acquires the dressing data of the first avatar in response to the selection of the dressing element identifier.
  • the interactive application on the first terminal sends the created feature data of the first avatar to the interaction server 106 on the interaction platform 103.
  • Specifically, an avatar establishment request is sent to the interaction server 106, and the avatar establishment request carries the identifier (ID) of the first avatar and the feature data of the first avatar.
  • the feature data of the first avatar includes facial feature data, facial map, and dressing data.
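  • The avatar establishment request above might be represented as in the following sketch; the field types and wire format are assumptions, since the text names only the fields carried (ID, facial feature data, facial map, dressing data).

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AvatarFeatureData:
    facial_features: Dict[str, List[float]]  # e.g. mouth, nose, eyes, eyebrows
    face_map: bytes                          # facial texture map
    dressing: Dict[str, str]                 # e.g. {"hairstyle": "h01"}

def build_establish_request(avatar_id: str, features: AvatarFeatureData) -> dict:
    """Assemble the avatar establishment request carrying the avatar ID
    and the feature data of the first avatar."""
    return {"op": "establish_avatar",
            "id": avatar_id,
            "facial_features": features.facial_features,
            "face_map": features.face_map,
            "dressing": features.dressing}
```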
  • the method for interacting between user terminals provided by the present application further includes:
  • When an update notification message of a second avatar is received, the second avatar may be one of the plurality of second avatars on the map displayed on the first user terminal, or may be a second avatar that has been rendered into the interactive scene image.
  • The update notification message may be, for example, a notification to update the feature data of the second avatar, such as the facial feature data, face map, and dressing data; it may also update the location data of the second avatar; or it may update the state of the second avatar, for example, updating the state of the second avatar to the offline state, in which case the second avatar displayed on the corresponding map disappears from the map, and the second avatar displayed in the interactive scene image disappears from the interactive scene image.
  • the first user terminal performs a corresponding update operation on the corresponding second avatar according to the update message.
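  • Terminal-side handling of the update notification message could be sketched as below. The message layout ("kind"/"payload") is a hypothetical encoding of the three update types named above (feature data, location data, state).

```python
def on_update_notification(msg, scene_avatars, map_avatars):
    """Apply an update notification for a second avatar to both views."""
    avatar_id = msg["avatar_id"]
    if msg["kind"] == "features":
        # New facial feature data, face map, or dressing data: re-render.
        for view in (scene_avatars, map_avatars):
            if avatar_id in view:
                view[avatar_id]["features"] = msg["payload"]
    elif msg["kind"] == "location":
        if avatar_id in map_avatars:
            map_avatars[avatar_id]["location"] = msg["payload"]
    elif msg["kind"] == "state" and msg["payload"] == "offline":
        # Offline: the avatar disappears from the map and the scene image.
        map_avatars.pop(avatar_id, None)
        scene_avatars.pop(avatar_id, None)
```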
  • FIG. 3a describes the interaction method between avatars provided by the embodiment of the present invention from the perspective of the server.
  • the method can be applied to the server in FIG. 1A.
  • the method in this embodiment includes the following steps:
  • Step 301 Receive location information of a first avatar established by a user sent by the terminal.
  • When the user establishes the first avatar, the corresponding terminal may obtain the location information of the avatar; the location information may be the location information of the terminal, or the location information of the avatar after a location traversal, and may be a latitude and longitude value or a geographic coordinate value.
  • each terminal may send the avatar established by its user and its location information to the server.
  • the server itself can also set some avatars and set location information for these avatars.
  • The server may establish a mapping table of <avatar, location information> according to the avatar information reported by each terminal and the avatar information set by the server itself, and store the established mapping table in the database.
  • the avatar created by the user of the terminal may be referred to as a first avatar, and the other avatars may be referred to as a second avatar.
  • The second avatar may be established by a user of another terminal (in which case the avatar can be manipulated by that terminal, for example, changing the avatar's location, replacing the avatar's dress, or changing the avatar's online/offline state), or may be set by the server (in which case the avatar can be manipulated by the server).
  • the terminal may send the location information of the first avatar to the server, and the server receives the location information of the first avatar sent by the terminal, and at the same time, the server may also The preset distance information sent by the terminal is received, and the preset distance information can be customized according to actual needs, for example, 40 meters, 80 meters, and the like.
  • Step 302 Search for a second avatar within a preset distance range of the location indicated by the location information.
  • After receiving the location information, the server may query, according to the mapping table between avatars and location information stored in the database, the second avatars within the preset distance range of the location indicated by the location information.
  • There may be multiple queried second avatars, and the server sends the information of the queried second avatars to the terminal.
  • Step 303 Send information of the second avatar to the terminal.
  • In some embodiments, the server may determine the specific information sent to the terminal according to the size of the preset distance and/or the number of second avatars found. For example, when the preset distance is less than a certain distance threshold (for example, 50 or 100 meters), or when the number of second avatars found is less than a certain number threshold (for example, 5 or 10), the server may send the terminal the character location list information of the second avatars (including each specific second avatar and its location information). When the preset distance is greater than or equal to the distance threshold, or the number of second avatars found is greater than or equal to the number threshold, the server may send the terminal the location quantity aggregation information of the second avatars (which may include only the number of second avatars aggregated at each location, without specific avatar information); for example, aggregation may be carried out as <latitude and longitude center, quantity> pairs. For aggregated information, the terminal may acquire the specific second avatars and detailed location information of a particular aggregation according to the user's operation (for example, clicking on an aggregation location).
  • On the terminal side, the user may select a second avatar from the plurality of second avatars; the terminal determines the second avatar selected by the user and acquires an interaction request initiated by the first avatar to the selected second avatar.
  • Step 304 Receive the interaction request, sent by the terminal, initiated by the first avatar to the second avatar.
  • the server receives an interaction request initiated by the first avatar sent by the terminal to the second avatar selected by the user.
  • The server may determine, according to the online status of each avatar it maintains, whether the second avatar selected by the user is online; if not online, the server directly returns a request failure notification message to the terminal; if online, the server generates an interaction scenario according to the interaction request.
  • Step 305 Generate an interaction scenario according to the interaction request.
  • the interaction request may be a chat interaction request (such as voice, video, text, and the like), and correspondingly, the generated interaction scene may default to a real scene image corresponding to the location information of the first avatar.
  • the real scene image may be a map image corresponding to the position information, or a street view image corresponding to the position information, or a real scene image corresponding to the position information.
  • The interaction request may also be a location-traversal interaction request, in which case the interaction request may include specified location information; correspondingly, the generated interaction scenario may be a real scene image corresponding to the specified location information. The specified location may be any location on the map, or any location on the map that has a street view, and the real scene image corresponding to the specified location information may be a map image or a street view image.
  • Step 306 Send the interaction scenario to the terminal, so that the terminal renders the first avatar and the second avatar into the interaction scenario to implement interaction between avatars.
  • When an avatar is updated, the terminal will send an update notification message to the server; the server updates the data stored in the database according to the update notification message, and then sends the update notification message to the other terminals displaying the avatar to notify them to update the displayed avatar.
  • In this embodiment, after receiving the location information of the first avatar established by the user and sent by the terminal, the server searches for second avatars within the preset distance range of the location indicated by the location information and sends the information of the second avatars to the terminal. After receiving the interaction request initiated by the first avatar to a second avatar, the server generates an interaction scenario according to the interaction request and sends the interaction scenario to the terminal, so that the terminal renders the first avatar and the second avatar into the interactive scene. That is, the terminal renders different avatars into the same real interactive scene, thereby realizing the interaction between avatars and expanding the application scenarios of virtual-real combination technologies.
  • An embodiment of the present application also provides a method for interaction between user terminals, applicable to the interactive platform in FIG. 1B; as shown in FIG. 3b, the method includes:
  • S311 Receive geographic location information of the first avatar corresponding to the first user terminal sent by the first user terminal.
  • S312 Searching, in a mapping table between the avatar and the geographic location information, information of the second avatar within a preset distance range corresponding to the geographic location information;
  • S313 Send the information of the second avatar to the first user terminal.
  • S314 Receive an interaction request, sent by the first user terminal according to the information of the second avatar, that is initiated by the first avatar to the second avatar, and establish, according to the interaction request, a first connection with the first user terminal and a second connection with the second user terminal corresponding to the second avatar;
  • S315 Generate an interactive scene image according to the interaction request, where the interactive scene image is a real scene image corresponding to the first user terminal, and send the interactive scene image to the first user terminal, so that the first user The terminal renders the first avatar and the second avatar into the interactive scene image;
  • S316 Receive, by using the first connection, the interactive content sent by the first user terminal, and send the interactive content to the second user terminal by using the second connection.
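  • Step S316 amounts to relaying content from the first connection to the second; a sketch, assuming each connection object exposes a hypothetical send() method and sessions are tracked per interaction:

```python
def relay_interactive_content(sessions, session_id, content):
    """Forward interactive content received over the first connection
    to the second user terminal over the second connection."""
    first_connection, second_connection = sessions[session_id]
    second_connection.send(content)
```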
An embodiment of the present invention further provides a terminal. As shown in FIG. 4, the terminal includes an obtaining unit 401, a sending unit 402, a receiving unit 403, and a processing unit 404, as follows:
The obtaining unit 401 is configured to acquire the location information of the first avatar established by the user.
The sending unit 402 is configured to send the location information of the first avatar to the server.
The receiving unit 403 is configured to receive information about second avatars within a preset distance of the location indicated by the location information.
The obtaining unit 401 is further configured to acquire the interaction request initiated by the first avatar toward the second avatar selected by the user; the sending unit 402 is further configured to send the interaction request to the server; and the receiving unit 403 is further configured to receive the interaction scene generated by the server according to the interaction request.
The processing unit 404 is configured to render the first avatar and the second avatar into the interaction scene, thereby realizing interaction between avatars.
In this embodiment, the obtaining unit can acquire the location information of the first avatar established by the user, and the sending unit sends that information to the server; the receiving unit then receives information about second avatars within the preset distance of the indicated location, and the obtaining unit acquires the interaction request initiated by the first avatar toward the second avatar; the receiving unit receives the interaction scene generated by the server according to the request, and the processing unit renders the first avatar and the second avatar into it. By rendering different avatars into the same real interaction scene, the terminal of this embodiment realizes interaction between avatars and broadens the application scenarios of technologies combining the virtual and the real.
In some embodiments, the obtaining unit 401, the sending unit 402, the receiving unit 403, and the processing unit 404 may be used to implement the corresponding steps of the method embodiments of this application; for the specific functions of each unit, refer to the foregoing method embodiments, which are not repeated here.
An embodiment of the present invention further provides a terminal. FIG. 5 shows a schematic structural diagram of the terminal involved in this embodiment. Specifically, the terminal may include a radio frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a wireless fidelity (WiFi) module 507, a processor 508 having one or more processing cores, a power supply 509, and other components. Those skilled in the art will understand that the terminal structure shown in FIG. 5 does not limit the terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. Among them:
The RF circuit 501 may be used to receive and send signals while receiving and sending information or during a call. In particular, after receiving downlink information from a base station, the RF circuit hands it to the one or more processors 508 for processing, and it sends uplink data to the base station. The RF circuit 501 typically includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 501 may also communicate with networks and other devices via wireless communication.
The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 502 may be used to store software programs and modules; the processor 508 runs the software programs and modules stored in the memory 502 to execute various functional applications and process data. The memory 502 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the applications required by at least one function (such as a sound playback function or an image playback function), while the data storage area may store data created according to the use of the terminal (such as audio data or a phone book). In addition, the memory 502 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic-disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 502 may further include a memory controller to provide the processor 508 and the input unit 503 with access to the memory 502.
The input unit 503 may be configured to receive input digit or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In a specific embodiment, the input unit 503 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also known as a touch display screen or touchpad, collects touch operations performed by the user on or near it (for example, operations performed on or near the touch-sensitive surface with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connected device according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends the coordinates to the processor 508, and can receive and execute commands sent by the processor 508. The touch-sensitive surface may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch-sensitive surface, the input unit 503 may further include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys or a power key), a trackball, a mouse, and a joystick.
The display unit 504 may be configured to display information entered by the user, information provided to the user, and the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface may cover the display panel: when the touch-sensitive surface detects a touch operation on or near it, it transmits the operation to the processor 508 to determine the type of touch event, and the processor 508 then provides the corresponding visual output on the display panel according to that type. Although in FIG. 5 the touch-sensitive surface and the display panel realize the input and output functions as two separate components, in some embodiments they may be integrated to realize both.
The terminal may further include at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel according to the ambient light, and the proximity sensor may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally along three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the terminal's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tapping). Other sensors that may also be configured in the terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here again.
The audio circuit 506, a speaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 506 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 506 receives and converts into audio data. After the audio data is processed by the processor 508, it is sent through the RF circuit 501 to, for example, another terminal, or output to the memory 502 for further processing. The audio circuit 506 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 507, the terminal can help the user send and receive e-mails, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although FIG. 5 shows the WiFi module 507, it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 508 is the control center of the terminal. It connects the various parts of the whole terminal through various interfaces and lines, and it performs the terminal's functions and processes data by running or executing the software programs and/or modules stored in the memory 502 and invoking the data stored there, thereby monitoring the terminal as a whole. Optionally, the processor 508 may include one or more processing cores and may integrate an application processor, which mainly handles the operating system, user interface, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 508.
The terminal further includes a power supply 509 (such as a battery) that powers the various components. Preferably, the power supply may be logically connected to the processor 508 through a power management system, so that charging, discharging, and power-consumption management are handled by the power management system. The power supply 509 may also include any one or more of a DC or AC power source, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and other components.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, which are not described here again. Specifically, in this embodiment, the processor 508 in the terminal loads the executable files corresponding to the processes of one or more applications into the memory 502 according to instructions, and runs the applications stored in the memory 502, thereby implementing the various functions of the terminal-side interaction method between user terminals described above.
An embodiment of the present invention further provides a server. As shown in FIG. 6, the server includes a receiving unit 601, a searching unit 602, a sending unit 603, and a generating unit 604, as follows:
The receiving unit 601 is configured to receive the location information, sent by the terminal, of the first avatar created by the user.
The searching unit 602 is configured to search for second avatars within a preset distance of the location indicated by the location information.
The sending unit 603 is configured to send the information about the second avatar to the terminal.
The generating unit 604 is configured to generate the interaction scene according to the interaction request.
In some embodiments, the receiving unit 601, the searching unit 602, the sending unit 603, and the generating unit 604 may be used to implement the corresponding steps of the method embodiments of this application; for the specific functions of each unit, refer to the foregoing method embodiments, which are not repeated here.
In this embodiment, after the receiving unit receives the location information of the first avatar created by the user and sent by the terminal, the searching unit searches for second avatars within the preset distance of the indicated location, and the sending unit sends information about them to the terminal. After the receiving unit receives the interaction request that the first avatar initiates toward the second avatar, the generating unit generates the interaction scene according to the request, and the sending unit sends the scene to the terminal so that the terminal renders the first avatar and the second avatar into it. By rendering different avatars into the same real interaction scene, the terminal realizes interaction between avatars and broadens the application scenarios of technologies combining the virtual and the real.
An embodiment of the present invention further provides a server. The server may be built on a cluster system, as an electronic device whose unit functions are either merged into one body or deployed separately. The server includes at least a database for storing data and a processor for processing data, or includes a storage medium provided inside the server or set up independently. The processor for processing data may be implemented, when performing processing, by a microprocessor, a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA). The storage medium contains operation instructions, which may be computer-executable code; the steps of the server-side interaction method between user terminals of the embodiments of the present invention described above are implemented through those operation instructions.
As an example of a hardware entity 700, the server is shown in FIG. 7 and includes a processor 701, a storage medium 702, and at least one external communication interface 703, all of which are connected through a bus 704.
It should be noted that the above description of the server is similar to the corresponding method description and is not repeated here. For technical details not disclosed in the server embodiment of the present invention, refer to the description of the corresponding method embodiment.
An embodiment of the present invention further provides an interaction system between avatars, including a terminal and a server. The terminal may be the terminal described above, and the server may be the server described above; for the specific interaction process, refer to the foregoing description, which is not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, an apparatus, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Tourism & Hospitality (AREA)
  • Computing Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present invention disclose an interaction method between avatars, a terminal, a server, and a system. The interaction method between avatars includes: acquiring location information of a first avatar; sending the location information of the first avatar to a server; receiving information, sent by the server, about second avatars within a preset distance of the location indicated by the location information; sending to the server, according to the information about the one or more second avatars, an interaction request initiated by the first avatar toward any one of the one or more second avatars; receiving an interaction scene image generated by the server according to the interaction request; and rendering the first avatar and the second avatar into the interaction scene image. The embodiments of the present invention enable interaction between avatars.

Description

Interaction method between user terminals, terminal, server, system, and storage medium
This application claims priority to Chinese Patent Application No. 201611191383.0, filed with the Chinese Patent Office on December 21, 2016 and entitled "Interaction method between avatars, terminal, server, and system", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present invention relate to the field of communication technologies, and in particular to an interaction method between user terminals, a terminal, a server, a system, and a storage medium.
Background
Technologies that combine real information with virtual information, such as augmented reality and mixed reality, are entering the public eye. A large number of augmented-reality applications based on mobile-terminal positioning and state sensing and on multimedia information processing and presentation have begun to emerge, making full use of mobile Internet resources to extend information about, and enhance the experience of, the physical world the user observes. This has attracted great attention from all sides of the industry and has become a hotspot of current technical research and standardization.
Summary
In view of this, embodiments of the present invention provide an interaction method between user terminals, a terminal, a server, a system, and a storage medium that enable interaction between avatars.
An interaction method between user terminals provided by an embodiment of the present invention, applied to a terminal, includes: acquiring geographic location information of a first avatar corresponding to a first user terminal and sending it to a server, the server storing a mapping table between avatars and geographic location information;
receiving information, sent by the server, about a second avatar within a preset distance range corresponding to the geographic location information;
sending to the server, according to the information about the second avatar, an interaction request directed at the second avatar, so that the server establishes a first connection with the first user terminal and a second connection with a second user terminal corresponding to the second avatar;
acquiring an interaction scene image, the interaction scene image being a real scene image corresponding to the first user terminal, and rendering the first avatar and the second avatar into the interaction scene image; and acquiring interaction content and sending it to the server over the first connection, so that the server sends the interaction content to the second user terminal over the second connection.
An embodiment of the present invention further provides an interaction method between user terminals, applied to a server, including:
receiving geographic location information, sent by a first user terminal, of a first avatar corresponding to the first user terminal;
searching a mapping table between avatars and geographic location information for information about a second avatar within a preset distance range corresponding to the geographic location information;
sending the information about the second avatar to the first user terminal;
receiving an interaction request, sent by the first user terminal according to the information about the second avatar, that the first avatar initiates toward the second avatar, and, according to the request, establishing a first connection with the first user terminal and a second connection with a second user terminal corresponding to the second avatar;
generating an interaction scene image according to the interaction request, the image being a real scene image corresponding to the first user terminal, and sending it to the first user terminal so that the first user terminal renders the first avatar and the second avatar into it; and
receiving, over the first connection, interaction content sent by the first user terminal and sending it to the second user terminal over the second connection.
An embodiment of the present invention provides a terminal including one or more memories and one or more processors, the one or more memories storing one or more instruction modules configured to be executed by the one or more processors, the one or more instruction modules including: an obtaining unit for acquiring geographic location information of a first avatar corresponding to a first user terminal; a sending unit for sending that information to a server storing a mapping table between avatars and geographic location information; and a receiving unit for receiving information, sent by the server, about a second avatar within a preset distance range corresponding to the geographic location information. The sending unit is further used to send to the server, according to the information about the second avatar, an interaction request directed at the second avatar, so that the server establishes a first connection with the first user terminal and a second connection with the second user terminal corresponding to the second avatar; the obtaining unit is further used to acquire an interaction scene image, which is a real scene image corresponding to the first user terminal; a processing unit renders the first avatar and the second avatar into the interaction scene image; the obtaining unit is further used to acquire interaction content; and the sending unit is further used to send that content to the server over the first connection, so that the server sends it to the second user terminal over the second connection.
An embodiment of the present invention provides a server including one or more memories and one or more processors, the one or more memories storing one or more instruction modules configured to be executed by the one or more processors, the one or more instruction modules including: a receiving unit for receiving geographic location information, sent by a first user terminal, of a first avatar corresponding to that terminal; a searching unit for searching a mapping table between avatars and geographic location information for information about a second avatar within a preset distance range corresponding to the geographic location information; and a sending unit for sending the information about the second avatar to the first user terminal. The receiving unit is further used to receive an interaction request, sent by the first user terminal according to the information about the second avatar, that the first avatar initiates toward the second avatar; a generating unit is used to establish, according to the request, a first connection with the first user terminal and a second connection with the second user terminal corresponding to the second avatar, and to generate an interaction scene image according to the request, the image being a real scene image corresponding to the first user terminal; the sending unit is further used to send the image to the first user terminal so that it renders the first avatar and the second avatar into the image; and the receiving unit is further used to receive, over the first connection, interaction content sent by the first user terminal and to send it to the second user terminal over the second connection.
An embodiment of the present invention further provides an interaction system between avatars, including the above terminal and the above server. An embodiment of the present invention further provides a non-volatile computer-readable storage medium storing computer-readable instructions that can cause at least one processor to perform the method applied to a terminal described above, and another non-volatile computer-readable storage medium whose computer-readable instructions can cause at least one processor to perform the method applied to a server described above.
With the above solutions provided by this application, different avatars are rendered into the same real interaction scene, thereby realizing interaction between avatars and broadening the application scenarios of technologies that combine the virtual and the real.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art may derive other drawings from them without creative effort.
FIG. 1A is a schematic diagram of a scenario of the interaction method between avatars provided by an embodiment of the present invention;
FIG. 1B is a schematic diagram of a system according to another embodiment of the present invention;
FIG. 2a-1 is a schematic flowchart of the interaction method between avatars provided by an embodiment of the present invention;
FIG. 2a-2 is a flowchart of an interaction method between user terminals applied to a terminal according to another example of the present invention;
FIG. 2b is a schematic diagram of a dress-up interface on the terminal side according to an embodiment of the present invention;
FIG. 2c is a schematic diagram of an interaction scene rendered by an embodiment of the present invention;
FIG. 2d is a schematic diagram of another interaction scene rendered by an embodiment of the present invention;
FIG. 2e is a schematic diagram of another interaction scene rendered by an embodiment of the present invention;
FIG. 3a is another schematic flowchart of the interaction method between avatars provided by an embodiment of the present invention;
FIG. 3b is a flowchart of an interaction method between user terminals applied to a server according to another example of the present invention;
FIG. 4 is a schematic structural diagram of a terminal provided by an embodiment of the present invention;
FIG. 5 is another schematic structural diagram of a terminal provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a server provided by an embodiment of the present invention;
FIG. 7 is another schematic structural diagram of a server provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the embodiments of the present invention.
Embodiments of the present invention provide an interaction method between avatars, a terminal, a server, and a system that enable interaction between avatars. A specific implementation scenario of the method may be as shown in FIG. 1A and includes a terminal and a server. There may be multiple terminals; the user of each terminal can create an avatar on that terminal and report information about the created avatar to the server, including the avatar itself and its location information, which may be, for example, a latitude-longitude value or a geographic coordinate value. The server stores each reported avatar, together with its location information, in a database. When the avatar created by the user of a terminal (for example, a first avatar on a first terminal) wants to interact with another avatar (for example, a second avatar created by the user of a second terminal), the first terminal may send the location information of the first avatar to the server, and the server may return information about second avatars within a preset distance (for example, 50 meters or 100 meters) of the indicated location. After receiving that information, the first terminal may acquire an interaction request initiated by the first avatar toward the second avatar, send the request to the server, and receive the interaction scene generated by the server according to the request. The first terminal may then render the first avatar and the second avatar into the interaction scene to realize interaction between avatars. The interaction may include chat interactions such as voice, video, and text, and may also include location-traversing interaction. Because interaction between avatars can be realized, the scenarios to which the method of the embodiments of the present invention can be applied include, but are not limited to, social networking, advertising, games, and shops.
FIG. 1B is a schematic diagram of another system according to an embodiment of this application. As shown in FIG. 1B, the system 100 may include a first user terminal 101, a second user terminal 102, and an interaction platform 103. The interaction platform 103 includes a map server 105 and an interaction server 106. In various embodiments the system 100 may include multiple first user terminals and multiple second user terminals; FIG. 1B uses only the first user terminal 101 and the second user terminal 102 as an example to describe the solutions of the embodiments.
The user of each user terminal (including the first user terminal 101 and the second user terminal 102) can create an avatar on that terminal. For example, the first avatar corresponding to the first user terminal may be a virtual character created by the user of that terminal from the user's own facial information. The terminal then sends the feature data of the created avatar to the interaction platform 103, and the interaction server 106 in the platform saves the feature data of the first avatar. The interaction platform includes the interaction server, which may provide augmented-reality interaction services, and the map server, which may be a third-party server providing map services. The feature data of the avatar reported by the terminal includes, for example, facial feature data, a facial texture map, and dress-up data. The interaction server stores the avatar's identifier in association with its feature data. The user may create the first avatar in an interaction application; for example, the interaction application may be an augmented-reality interaction app in which the user creates an avatar and sends the created avatar's feature data to the interaction server.
A map function may also be integrated into the interaction application. When the user operates the map icon, the interaction application sends the geographic location information of the first avatar corresponding to the first user terminal to the interaction platform 103. The geographic location information may be, for example, a latitude-longitude value or a geographic coordinate value. After receiving the geographic location information of the first avatar, the interaction server 106 stores it in association with the identifier of the first avatar. The interaction server 106 also stores, in a database, the identifier and geographic location information of each avatar reported by each terminal; that is, the interaction server 106 stores a mapping table between avatars and geographic location information. After receiving the geographic location information of the first avatar sent by the first user terminal 101, the server searches the mapping table for second avatars within a preset distance (for example, 50 meters or 100 meters) of the location indicated by that information; there may be one or more such second avatars. The server also acquires the information of the one or more second avatars, where the information of each second avatar may include feature data and geographic location information, and the interaction server 106 sends the acquired information to the first user terminal 101.
After receiving the geographic location information of the first avatar, the map server 105 in the interaction platform 103 acquires the map data corresponding to that information and sends it to the first user terminal 101.
The first user terminal 101 displays a map according to the received map data and displays the one or more second avatars on the map according to the received information. When the user selects one of the second avatars, the interaction application on the first user terminal 101 responds to the selection by acquiring an interaction scene image, rendering the first avatar and the selected second avatar into it, and sending an interaction request to the interaction server 106. In response to the request, the interaction server 106 establishes a first connection with the first user terminal and a second connection with the second user terminal corresponding to the selected second avatar. In subsequent interaction, the first user terminal sends interaction content to the interaction server 106 over the first connection, and the interaction server 106 sends the content to the second user terminal over the second connection.
The embodiment shown in FIG. 2a-1 describes the interaction method between avatars provided by an embodiment of the present invention from the perspective of the terminal. The method applies to the first terminal or the second terminal in FIG. 1A and, as shown in FIG. 2a-1, includes the following steps.
Step 201: Acquire location information of a first avatar created by the user.
In specific implementation, the first avatar may be created by the user from the user's own facial information. That is, the user may scan his or her face with the terminal's face-scanning system to obtain facial feature data and a facial texture map, where the facial feature data may include feature data of parts such as the mouth, nose, eyes, eyebrows, face, and chin; the acquired facial feature data and facial texture map are then fused onto the face of a preset avatar model; finally, the user may select a dress-up style from a dress-up interface provided by the terminal, and the selected style is fused onto the corresponding parts of the preset avatar model. In a specific embodiment, the dress-up interface may be as shown in FIG. 2b; the styles it provides include, but are not limited to, hairstyles, clothes, trousers, and shoes. Alternatively, the first avatar may be an avatar the user selects directly from the system. The first avatar may be a person, an animal, or another cartoon figure, and may be three-dimensional or flat, which is not specifically limited here.
After the user of each terminal creates an avatar on that terminal, the terminal can acquire the location information of the created avatar. That location information may be the location information of the terminal itself, or the location information of the avatar after location traversal, and may be a latitude-longitude value or a geographic coordinate value. Specifically, the terminal may obtain location information through the software development kit (SDK) of a map application, or through the application programming interface (API) of a location-related application provided by the system (such as a social, game, or life-services application), which is not specifically limited here. After obtaining the location information, each terminal may send the avatar created by its user, together with the location information, to the server. The server itself may also set up some avatars and configure location information for them. The server may build a mapping table of <avatar, location information> from the avatar information reported by the terminals and the avatar information it sets itself, and store the table in a database.
For ease of description, in this embodiment the avatar created by the user of the terminal is called the first avatar, and other avatars are called second avatars. A second avatar may be created by the user of another terminal (in which case it can be controlled by that terminal, for example to change its location, change its dress-up, or change its online/offline state), or set up by the server (in which case it can be controlled by the server). When the first avatar wants to interact with a second avatar, the terminal may send the location information of the first avatar to the server.
Step 202: Send the location information of the first avatar to the server.
At the same time, the terminal may also send preset distance information to the server; the preset distance can be customized according to actual needs and may be, for example, 40 meters or 80 meters.
Step 203: Receive information about second avatars within the preset distance of the location indicated by the location information.
After receiving the location information and the preset distance information, the server may query, according to the <avatar, location information> mapping table stored in the database, the second avatars within the preset distance of the indicated location. For example, when the location information is 113°49' E, 22°34' N and the preset distance information is 40 meters, the server queries all second avatars within 40 meters of that position; when the preset distance information is 80 meters, the server queries all second avatars within 80 meters of it. Multiple second avatars may be found, and the server sends the information of the found second avatars to the terminal.
Specifically, the server may decide what information to return to the terminal according to the size of the preset distance and/or the number of second avatars found. For example, when the preset distance is smaller than a distance threshold (for example, 50 or 100), the server may return a character-location list of the second avatars (including each specific second avatar and its location information); when the preset distance is greater than or equal to the threshold, it may return location-count aggregation information (which may include only the number of second avatars aggregated at each location, without specific avatar information), for example aggregated as <latitude-longitude center, count>. Likewise, when the number of second avatars found is smaller than a count threshold (for example, 5 or 10), the server may return the character-location list, and when that number is greater than or equal to the threshold, it may return the location-count aggregation information. If the information the terminal receives from the server is location-count aggregation information, the terminal may, according to a user operation (for example, tapping an aggregated location), obtain the specific second avatars aggregated at that location and their detailed location information. This decision is pictured in the sketch after this paragraph.
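As one way to picture this decision, the hedged sketch below chooses between a detailed character-location list and <latitude-longitude center, count> aggregates. The threshold values, the grid-based clustering, and the data shapes are assumptions made for illustration; the patent does not prescribe them.

```python
from collections import defaultdict

DIST_THRESHOLD_M = 100   # illustrative values; the text only gives examples
COUNT_THRESHOLD = 10     # such as 50/100 meters and 5/10 avatars

def build_second_avatar_response(nearby, preset_distance_m, grid_deg=0.001):
    """`nearby` is a list of (avatar_id, (lat, lon)) pairs already within range.
    Return a detailed list, or aggregate by coarse grid cell when the preset
    distance and/or the number of hits reaches the thresholds."""
    if preset_distance_m < DIST_THRESHOLD_M and len(nearby) < COUNT_THRESHOLD:
        return {"type": "list", "avatars": nearby}
    cells = defaultdict(list)
    for aid, (lat, lon) in nearby:                     # ~111 m cells at the equator
        cells[(round(lat / grid_deg), round(lon / grid_deg))].append((lat, lon))
    clusters = [{"center": (sum(p[0] for p in pts) / len(pts),
                            sum(p[1] for p in pts) / len(pts)),
                 "count": len(pts)} for pts in cells.values()]
    return {"type": "aggregated", "clusters": clusters}
```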
Thereafter, the user may select one second avatar from the multiple second avatars, and the terminal determines the user's selection.
Step 204: Acquire the interaction request initiated by the first avatar toward the second avatar, that is, toward the second avatar selected by the user.
Step 205: Receive the interaction scene generated by the server according to the interaction request.
After acquiring the interaction request, the terminal may send it to the server. The server may judge, from the online states it maintains for the avatars, whether the second avatar is online; if not, it directly returns a request-failure notification message to the terminal; if so, the server generates an interaction scene according to the interaction request, and the terminal receives the generated scene.
In specific implementation, the interaction request may be a chat interaction request (for example, voice, video, or text chat). Correspondingly, the generated interaction scene may default to the real scene image corresponding to the location information of the first avatar, which may be the map image, the street-view image, or the live-scene image corresponding to that location. Alternatively, the interaction request may be a location-traversing interaction request carrying specified location information; correspondingly, the generated interaction scene may be the real scene image corresponding to the specified location, which may be any location on the map or any location on the map with a street view, and that real scene image may be a map image or a street-view image.
Step 206: Render the first avatar and the second avatar into the interaction scene to realize interaction between avatars.
In specific implementation, the rendered interaction scene may be a fused display of avatars and a map, as in FIG. 2c, where the first avatar S1 and the selected second avatar S2 are rendered onto the map image around the first user terminal; or a fused display of avatars and a street view, as in FIG. 2d, where the first avatar S1 and the second avatar S2 are rendered onto the street-view image between two buildings; or a fused display of avatars and a live scene, as in FIG. 2e, where the first avatar S1 and the second avatar S2 are rendered into an office scene. It should be noted that FIGs. 2c to 2e only show some possible effects of interaction scenes and do not limit the final display effect. A toy compositing sketch follows.
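One way a client might composite 2D avatar sprites onto such an interaction scene image is sketched below. The use of the Pillow library, the sprite files, and the precomputed on-screen positions are assumptions made purely for illustration; an actual terminal would more likely render 3D avatars in an AR view.

```python
from PIL import Image  # Pillow, assumed here only for illustration

def render_avatars_into_scene(scene_path, avatars, out_path):
    """Paste avatar sprites onto a map / street-view / camera image.
    `avatars` is a list of (sprite_path, (x, y)) screen positions; deriving
    those positions from geolocation is outside this sketch."""
    scene = Image.open(scene_path).convert("RGBA")
    for sprite_path, (x, y) in avatars:
        sprite = Image.open(sprite_path).convert("RGBA")
        # anchor each sprite by its bottom center so it "stands" at (x, y);
        # positions are assumed to lie far enough inside the image bounds
        scene.alpha_composite(sprite, (x - sprite.width // 2, y - sprite.height))
    scene.convert("RGB").save(out_path)
```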
In specific implementation, the avatars may carry out chat interaction in the interaction scene, for example voice, text, or video interaction; they may also carry out location-traversing interaction, for example traveling to the same street-view location, thereby simulating the effect of touring the street view together.
Thereafter, if the second avatar selected by the user is updated (including a location update, a dress-up update, and/or an online/offline state update), the terminal will receive an update notification message from the server and may update the second avatar rendered in the interaction scene according to the message.
Because interaction between avatars can be realized, the method of this embodiment can be applied to scenarios including, but not limited to, social networking, games, advertising, shops, life services, and tourism. Several examples follow.
For example, the method may be applied to social networking: different users may create their own avatars in the same social application, avatars may add each other as friends, and avatars that have become friends can initiate text, voice, and other chat interactions. In the social application interface, the terminal can render the interacting avatars into an interaction scene such as a map, a street view, or a live scene to present an interaction effect combining the virtual and the real.
For example, the method may be applied to games: a user may create an avatar in a game application and then compete or duel with avatars created by other users in that application, or with avatars the server configures for itself. In the game application interface, the terminal can render the interacting avatars into street-view, live-scene, or other interaction scenes to present an interaction effect combining the virtual and the real.
As another example, the method may be applied to advertising: a user may create an avatar in an advertising application, and the terminal may render the user's avatar, together with an avatar configured by the server, into the street view of a location designated by a merchant. The street-view picture is updated in real time as the user operates (for example, moving forward, backward, or turning), presenting the effect of the avatars sightseeing and playing together and thus simulating the experience of touring a scenic spot.
In this embodiment, the terminal can acquire the location information of the first avatar created by its user and send it to the server; then receive information about second avatars within the preset distance of the indicated location and acquire the interaction request initiated by the first avatar toward a second avatar; and then receive the interaction scene generated by the server according to the request and render the first avatar and the second avatar into it. By rendering different avatars into the same real interaction scene, this embodiment realizes interaction between avatars and broadens the application scenarios of technologies combining the virtual and the real.
FIG. 2a-2 is a flowchart of an interaction method between user terminals according to an embodiment of this application. The method applies to the first user terminal 101 in FIG. 1B and, as shown in FIG. 2a-2, may include the following steps.
S211: Acquire geographic location information of the first avatar corresponding to the first user terminal and send it to the server, where the server stores a mapping table between avatars and geographic location information.
After the user logs in to the interaction application on the first user terminal 101, a map function may also be integrated into that application; when the user operates the map icon in the application, for example tapping it to open the map, the application sends the geographic location information of the first avatar to the interaction platform 103. The interaction application on each terminal, in response to its user's operation on the map icon, can send the geographic location information of its corresponding avatar to the platform. The interaction server 106 may build a <avatar, location information> mapping table from the geographic location information reported by the terminals; in this table, the avatar is represented by its identifier (ID) and the location information is the avatar's geographic location data. When the user opens the map, the terminal sends the avatar's geographic location data to the interaction platform 103 and the server creates the corresponding <avatar, location information> entry; when the user exits the map, the terminal sends an exit request and the server deletes the corresponding entry. The interaction server 106 also records the state of each avatar: when the user at an avatar's terminal opens the map, the server sets the avatar's state to online and marks the avatar on the map; when the user exits the map, the server sets the state to offline, indicating that the avatar is no longer on the map (a minimal sketch of this lifecycle follows below). In addition, the interaction server 106 stores the feature data of each avatar, including the avatar's ID, facial feature data, facial texture map, and dress-up feature data.
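A toy sketch of that lifecycle, with purely illustrative names, might look as follows: an entry exists in the mapping table, and the avatar counts as online (on the map), only between the user opening and leaving the map.

```python
mapping_table = {}   # avatar_id -> geographic location data
status = {}          # avatar_id -> "online" / "offline"

def on_open_map(avatar_id, geolocation):
    mapping_table[avatar_id] = geolocation   # create/refresh the table entry
    status[avatar_id] = "online"             # the avatar is marked on the map

def on_exit_map(avatar_id):
    mapping_table.pop(avatar_id, None)       # delete the entry on the exit request
    status[avatar_id] = "offline"            # the avatar leaves the map
```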
S212: Receive information, sent by the server, about second avatars within the preset distance range corresponding to the geographic location information.
The interaction server 106 searches the stored <avatar, location information> mapping table for one or more second avatars whose locations are within the preset distance of the location corresponding to the received geographic location information, and acquires the information of those one or more second avatars.
S213: Send to the server, according to the information about the second avatar, an interaction request directed at the second avatar, so that the server establishes a first connection with the first user terminal and a second connection with the second user terminal corresponding to the second avatar.
When, in step S211, the first user terminal sends the geographic location information of the first avatar to the interaction platform 103, the map server 105 acquires the corresponding map data according to that information and returns it to the first user terminal, while the interaction server 106 sends the information of the one or more second avatars it found to the first user terminal. The interaction application on the first user terminal displays a map according to the map data and renders the one or more second avatars onto the map according to their information. The user may select one of the second avatars to interact with; when the selection is complete, the interaction application sends the interaction server 106 an interaction request between the first avatar and the selected second avatar. According to the request, the interaction server 106 establishes a first connection between itself and the first user terminal and a second connection between itself and the second user terminal corresponding to the selected second avatar. The user may also select a specific interaction form, in which case the interaction request carries an identifier of that form; the interaction forms include voice chat interaction, text interaction, and the like.
In some examples, before establishing the first and second connections, the interaction server 106 first judges whether the selected second avatar is online; only if it is online does the server establish the two connections, and if it is not online the server directly returns a notification message to the first user terminal indicating that the interaction request failed.
S214: Acquire an interaction scene image, which is a real scene image corresponding to the first user terminal; render the first avatar and the second avatar into the interaction scene image; and acquire interaction content and send it to the server over the first connection, so that the server sends it to the second user terminal over the second connection.
After the interaction application on the first user terminal sends the interaction request to the interaction server 106, it also acquires the interaction scene image and renders the first avatar and the selected second avatar into it. In subsequent interaction, the first user terminal sends interaction content to the interaction server 106 over the first connection, and the interaction server 106 sends the content to the second user terminal corresponding to the second avatar.
In this embodiment, the first user terminal can acquire the location information of the first avatar and send it to the interaction server on the interaction platform; receive information about second avatars within the preset distance of the indicated location; send the server the interaction request initiated by the first avatar toward a second avatar, so that the server establishes the first and second connections; acquire the interaction scene image and render the first and second avatars into it; and send interaction content over the first connection, which the server forwards over the second connection to the second user terminal corresponding to the second avatar. Rendering the interacting avatars into the same interaction scene image, and carrying out the subsequent interaction there, applies the combination of the virtual and the real to social interaction scenarios.
In some examples, sending the geographic location information of the first avatar to the server includes: in response to an operation on a map control, sending the location information of the first avatar to the server, where the server acquires the map data corresponding to that geographic location information. The information about the second avatar includes the second avatar's geographic location information. The method further includes: receiving the map data sent by the server and displaying the corresponding map according to it; and rendering the second avatar on the map according to the second avatar's geographic location information. In this case, sending the interaction request directed at the second avatar according to the second avatar's information includes: in response to a selection operation on the second avatar, sending the server the interaction request initiated by the first avatar toward the second avatar.
In some examples, there are multiple second avatars, the geographic location information of the second avatars includes the geographic location information of each second avatar, and the information about the second avatars further includes the feature data of each second avatar. In this case, rendering the second avatars on the map according to their geographic location information includes: for any second avatar, determining its position on the map according to its geographic location information, and rendering the second avatar at that position according to its feature data.
The interaction server 106 may decide what information to return to the terminal according to the size of the preset distance and/or the number of second avatars found. For example, when the preset distance is smaller than a distance threshold (for example, 50 or 100), or when the number of second avatars found is smaller than a count threshold (for example, 5 or 10), the information returned to the terminal includes character-location list information containing the feature data and location information of each specific second avatar. When the terminal receives such a character-location list from the interaction server 106, then for any second avatar it determines that avatar's position on the map from the avatar's location data and displays the avatar at that position according to its feature data (one conventional way to project a latitude-longitude pair onto map pixels is sketched below).
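How a latitude-longitude pair becomes a point on the displayed map depends entirely on the map SDK; the patent does not specify a projection. As one conventional possibility, a Web Mercator conversion is sketched below.

```python
import math

def mercator_px(lat, lon, zoom, tile=256):
    """Convert a second avatar's latitude/longitude into pixel coordinates on a
    standard slippy map at the given zoom level (Web Mercator projection)."""
    scale = tile * 2 ** zoom
    x = (lon + 180.0) / 360.0 * scale
    s = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)) * scale
    return int(x), int(y)
```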
In some examples, there are multiple second avatars, the geographic location information of the second avatars includes one or more pieces of geographic location information, and the information about the second avatars further includes the number of second avatars corresponding to each of those pieces. In this case, rendering the second avatars on the map according to their geographic location information includes: for any of the one or more pieces of geographic location information, determining its position on the map and displaying at that position a marker containing the number of second avatars corresponding to it; in response to an operation on any displayed marker, sending the server a request to acquire the feature data and geographic location information of each second avatar corresponding to the marker; receiving the feature data and geographic location information of each such second avatar from the server; and displaying each second avatar on the map according to its feature data and geographic location information.
The interaction server 106 may decide what to return according to the size of the preset distance and/or the number of second avatars found. When the preset distance is greater than or equal to the distance threshold, the information returned to the first user terminal includes one or more pieces of geographic location information, each being a position at which multiple second avatars are aggregated, for example the center position of those avatars (also called the aggregation position), together with the number of second avatars corresponding to each piece. When the number of second avatars found is greater than or equal to the count threshold, the returned information likewise includes one or more pieces of geographic location information and the corresponding counts. When the interaction server 106 returns such aggregated information, the first user terminal determines the position of each piece of geographic location information on the map and displays a marker there containing the corresponding count, for example an aggregation marker with the number of aggregated second avatars at the latitude-longitude center on the displayed map. In response to the user's operation on a marker, for example a tap, the terminal sends the interaction server 106 a request for the feature data and location information of the second avatars corresponding to the marker, receives them from the interaction server 106, and then renders each second avatar on the map according to its feature data and location information.
In some examples, acquiring the interaction scene image includes: capturing the real scene image of the environment where the first user terminal is located and using it as the interaction scene image. In this example, the interaction scene image is a real scene image of the first user terminal's surroundings captured by invoking the terminal's camera, and the first avatar and the second avatar are rendered into that captured image; for example, as shown in FIG. 2e, the first avatar S1 and the second avatar S2 are rendered into an office scene.
In some examples, acquiring the interaction scene image includes: receiving the interaction scene image corresponding to the first user terminal generated by the server according to the interaction request. In this example, the interaction scene image is provided by the server (the interaction platform 103), specifically by the map server in the platform.
In some examples, the interaction scene image is the real scene image corresponding to the location information of the first avatar corresponding to the first user terminal, and rendering the first avatar and the second avatar into the interaction scene image includes rendering them into that real scene image. In this example, the interaction scene image is provided by the map server 105 and is the real scene image corresponding to the first avatar's location, that is, of the place where the first user terminal is located. For example, the interaction scene image may be a fused display of avatars and a map, as in FIG. 2c, where the first avatar S1 and the selected second avatar S2 are rendered onto the map image around the first user terminal; or a fused display of avatars and a street view, as in FIG. 2d, where the first avatar S1 and the second avatar S2 are rendered onto the street-view image between two buildings.
In some examples, the interaction request carries specified location information corresponding to the first user terminal, and the interaction scene image is the real scene image at the location corresponding to the specified location information; rendering the first avatar and the second avatar into the interaction scene then includes rendering them into the real scene image at that specified location. Such an interaction request may be a location-traversing interaction request, where the specified location information indicates, for example, the location after traversal. Correspondingly, the generated interaction scene may be the real scene image corresponding to the specified location, which may be any location on the map or any location on the map with a street view, and that real scene image may be a map image or a street-view image.
In some examples, the feature data of the avatar includes facial feature data, a facial texture map, and dress-up data. Before sending the geographic location information of the first avatar to the server, the method further includes: in response to the user's scanning operation on the face, acquiring the facial feature data and facial texture map of the first avatar; in response to the selection of dress-up element identifiers, acquiring the dress-up data of the first avatar; and sending the server an avatar-creation request carrying the facial feature data, facial texture map, and dress-up data of the first avatar.
In this example, the user can create the first avatar in the interaction application on the first terminal: the application acquires the first avatar's facial feature data in response to the user's face scan and the dress-up data in response to the selection of dress-up element identifiers, and then sends the interaction server 106 on the interaction platform 103 an avatar-creation request carrying the first avatar's identifier (ID) and feature data, where the feature data includes the facial feature data, facial texture map, and dress-up data.
In some examples, the interaction method between user terminals provided by this application further includes: receiving an update notification message for the second avatar sent by the server, and updating the second avatar according to the message.
In this embodiment, when an update notification message for a second avatar is received, that avatar may be one of the multiple second avatars displayed on the map on the first user terminal, or the second avatar rendered into the interaction scene image. The update notification message may, for example, notify an update of the second avatar's feature data (such as facial feature data, facial texture map, or dress-up data), an update of its location data, or an update of its state; for instance, when the state is updated to offline, the second avatar displayed on the map disappears from the map, and the second avatar displayed in the interaction scene image disappears from that image. On receiving an update message, the first user terminal performs the corresponding update operation on the second avatar according to the message, as in the minimal dispatch sketch below.
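A minimal dispatch for such update messages is sketched below; the message format and the two avatar stores are assumptions made for illustration, since the disclosure does not fix them.

```python
def on_update_notification(msg, map_avatars, scene_avatars):
    """Apply a second-avatar update pushed by the server. `msg` is assumed to
    look like {"avatar_id": ..., "kind": "features"|"location"|"status", ...}."""
    aid = msg["avatar_id"]
    if msg["kind"] == "features":              # new face data, texture map, dress-up
        map_avatars[aid]["features"] = msg["features"]
    elif msg["kind"] == "location":            # move the avatar on the map
        map_avatars[aid]["location"] = msg["location"]
    elif msg["kind"] == "status" and msg["status"] == "offline":
        map_avatars.pop(aid, None)             # disappears from the displayed map
        scene_avatars.pop(aid, None)           # and from the interaction scene image
```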
The embodiment shown in FIG. 3a describes the augmented-reality interaction method provided by an embodiment of the present invention from the perspective of the server. The method may be applied to the server in FIG. 1A and, as shown in FIG. 3a, includes the following steps.
Step 301: Receive the location information, sent by the terminal, of the first avatar created by the user.
For the specific avatar-creation process, refer to the description of the foregoing embodiments, which is not repeated here. After the user of each terminal creates an avatar, the terminal can acquire the avatar's location information, which may be the terminal's own location or the avatar's location after location traversal, expressed as a latitude-longitude value or a geographic coordinate value, and send the avatar and its location information to the server. The server itself may also set up some avatars and configure location information for them; it may build the <avatar, location information> mapping table from the reported and self-configured avatar information and store the table in a database. As before, the avatar created by the terminal's user is called the first avatar and other avatars are called second avatars; a second avatar may be created by the user of another terminal (and be controlled by that terminal, for example to change its location, its dress-up, or its online/offline state) or be set up and controlled by the server. When the first avatar wants to interact with a second avatar, the terminal sends the first avatar's location information to the server, and the server receives it; at the same time, the server may also receive preset distance information sent by the terminal, which can be customized according to actual needs, for example 40 meters or 80 meters.
Step 302: Search for second avatars within the preset distance of the location indicated by the location information.
After receiving the location information and the preset distance information, the server may query the second avatars within the preset distance of the indicated location according to the correspondence between avatars and locations stored in the database, that is, the above mapping table. Multiple second avatars may be found, and the server sends their information to the terminal.
Step 303: Send the information about the second avatars to the terminal.
Specifically, the server may decide what to send according to the size of the preset distance and/or the number of second avatars found: when the preset distance is smaller than a distance threshold (for example, 50 or 100), or the number found is smaller than a count threshold (for example, 5 or 10), it may send the character-location list of the second avatars (including each specific second avatar and its location information); when the preset distance or the number found reaches the corresponding threshold, it may send location-count aggregation information (which may include only the number of second avatars aggregated at each location, without specific avatar information), for example aggregated as <latitude-longitude center, count>. If the terminal receives location-count aggregation information from the server, it may, according to a user operation (for example, tapping an aggregated location), obtain the specific second avatars aggregated at that location and their detailed location information.
Thereafter, the user may select one second avatar from the multiple second avatars; the terminal determines the selection, acquires the interaction request initiated by the first avatar toward the selected second avatar, and sends the request to the server.
Step 304: Receive the interaction request, sent by the terminal, that the first avatar initiates toward the second avatar.
That is, the server receives the interaction request the first avatar initiates toward the second avatar selected by the user. The server may judge, from the online states it maintains for the avatars, whether that second avatar is online; if not, it directly returns a request-failure notification message to the terminal; if so, it generates the interaction scene according to the interaction request.
Step 305: Generate the interaction scene according to the interaction request.
In specific implementation, the interaction request may be a chat interaction request (for example, voice, video, or text chat); correspondingly, the generated interaction scene may default to the real scene image corresponding to the first avatar's location information, which may be the map image, the street-view image, or the live-scene image of that location. Alternatively, the interaction request may be a location-traversing interaction request carrying specified location information; correspondingly, the generated interaction scene may be the real scene image corresponding to the specified location, which may be any location on the map or any location with a street view, and the real scene image may be a map image or a street-view image.
Step 306: Send the interaction scene to the terminal, so that the terminal renders the first avatar and the second avatar into it to realize interaction between avatars.
Thereafter, when the avatar created by the user of some terminal is updated (including updates of the avatar, of its location information, and/or of its online/offline state), that terminal sends an update notification message to the server; the server updates the data stored in the database accordingly and then sends the notification to the other terminals displaying the avatar, so that they update the displayed avatar.
In this embodiment, after receiving the location information of the first avatar created by the user and sent by the terminal, the server searches for second avatars within the preset distance of the indicated location and sends their information to the terminal; after receiving the interaction request that the first avatar initiates toward the second avatar, it generates the interaction scene according to the request and sends the scene to the terminal, so that the terminal renders the first avatar and the second avatar into it. By rendering different avatars into the same real interaction scene, the terminal realizes interaction between avatars and broadens the application scenarios of technologies combining the virtual and the real.
This application further provides an interaction method between user terminals applicable to the interaction platform in FIG. 1B. As shown in FIG. 3b, the method includes:
S311: Receive the geographic location information, sent by the first user terminal, of the first avatar corresponding to the first user terminal.
S312: Search the mapping table between avatars and geographic location information for information about second avatars within the preset distance range corresponding to the geographic location information.
S313: Send the information about the second avatar to the first user terminal.
S314: Receive the interaction request, sent by the first user terminal according to the information about the second avatar, that the first avatar initiates toward the second avatar, and, according to the request, establish a first connection with the first user terminal and a second connection with the second user terminal corresponding to the second avatar.
S315: Generate an interaction scene image according to the interaction request, the image being a real scene image corresponding to the first user terminal, and send it to the first user terminal so that the terminal renders the first avatar and the second avatar into it.
S316: Receive, over the first connection, the interaction content sent by the first user terminal, and send it to the second user terminal over the second connection.
The steps of this server-side interaction method between user terminals correspond to the steps of the terminal-side method of FIG. 2a-2 and are not repeated here.
To better implement the method described in the first embodiment, an embodiment of the present invention further provides a terminal. As shown in FIG. 4, the terminal includes an obtaining unit 401, a sending unit 402, a receiving unit 403, and a processing unit 404, as follows:
(1) The obtaining unit 401 is configured to acquire the location information of the first avatar created by the user.
(2) The sending unit 402 is configured to send the location information of the first avatar to the server.
(3) The receiving unit 403 is configured to receive information about second avatars within the preset distance of the location indicated by the location information. The obtaining unit 401 is further configured to acquire the interaction request initiated by the first avatar toward the second avatar selected by the user; the sending unit 402 is further configured to send the interaction request to the server; and the receiving unit 403 is further configured to receive the interaction scene generated by the server according to the interaction request.
(4) The processing unit 404 is configured to render the first avatar and the second avatar into the interaction scene to realize interaction between avatars.
In this embodiment, the obtaining unit can acquire the location information of the first avatar created by the user, and the sending unit sends it to the server; the receiving unit then receives information about second avatars within the preset distance of the indicated location, the obtaining unit acquires the interaction request initiated by the first avatar toward the second avatar, the receiving unit receives the interaction scene generated by the server according to the request, and the processing unit renders the first avatar and the second avatar into the scene. By rendering different avatars into the same real interaction scene, the terminal of this embodiment realizes interaction between avatars and broadens the application scenarios of technologies combining the virtual and the real.
In some embodiments, the obtaining unit 401, the sending unit 402, the receiving unit 403, and the processing unit 404 may be used to implement the corresponding steps of the method embodiments of this application; for the specific functions of each unit, refer to the foregoing method embodiments, which are not repeated here.
An embodiment of the present invention further provides a terminal. FIG. 5 shows a schematic structural diagram of the terminal involved in this embodiment. Specifically:
The terminal may include a radio frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a wireless fidelity (WiFi) module 507, a processor 508 having one or more processing cores, a power supply 509, and other components. Those skilled in the art will understand that the terminal structure shown in FIG. 5 does not limit the terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. Among them:
The RF circuit 501 may be used to receive and send signals while receiving and sending information or during a call; in particular, it hands downlink information from a base station to the one or more processors 508 for processing and sends uplink data to the base station. The RF circuit 501 typically includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and the like, and may also communicate with networks and other devices via wireless communication, using any communication standard or protocol including but not limited to GSM, GPRS, CDMA, WCDMA, LTE, e-mail, SMS, and the like.
The memory 502 may be used to store software programs and modules, which the processor 508 runs to execute various functional applications and process data. The memory 502 may mainly include a program storage area (storing the operating system and the applications required by at least one function, such as a sound or image playback function) and a data storage area (storing data created according to the use of the terminal, such as audio data or a phone book). It may include high-speed random access memory and non-volatile memory, such as at least one magnetic-disk storage device, a flash memory device, or another non-volatile solid-state storage device, and may further include a memory controller to provide the processor 508 and the input unit 503 with access to the memory 502.
The input unit 503 may be configured to receive input digit or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In a specific embodiment, it may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also known as a touch display screen or touchpad, collects the user's touch operations on or near it (for example, operations performed with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connected device according to a preset program. Optionally, it may include a touch detection device, which detects the user's touch position and the signal brought by the touch operation and passes the signal to a touch controller; the touch controller receives the touch information, converts it into contact coordinates, sends them to the processor 508, and receives and executes commands from the processor 508. The touch-sensitive surface may be resistive, capacitive, infrared, surface acoustic wave, or of other types. Besides the touch-sensitive surface, the input unit 503 may include other input devices, such as one or more of a physical keyboard, function keys (such as volume control keys or a power key), a trackball, a mouse, and a joystick.
The display unit 504 may be configured to display information entered by or provided to the user and the terminal's various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel, optionally configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface may cover the display panel: when it detects a touch operation on or near it, it passes the operation to the processor 508 to determine the type of touch event, and the processor 508 then provides the corresponding visual output on the display panel. Although in FIG. 5 the touch-sensitive surface and the display panel realize input and output as two separate components, in some embodiments they may be integrated.
The terminal may further include at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. The light sensor may include an ambient light sensor, which can adjust the brightness of the display panel according to the ambient light, and a proximity sensor, which can turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally along three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications recognizing the terminal's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tapping); other sensors that may also be configured, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described here again.
The audio circuit 506, a speaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 506 may transmit the electrical signal converted from received audio data to the speaker, which outputs it as a sound signal; conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 506 receives and converts into audio data; after being processed by the processor 508, the audio data is sent through the RF circuit 501 to, for example, another terminal, or output to the memory 502 for further processing. The audio circuit 506 may also include an earphone jack for communication between a peripheral earphone and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 507, the terminal can help the user send and receive e-mails, browse web pages, and access streaming media, providing wireless broadband Internet access. Although FIG. 5 shows the WiFi module 507, it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 508 is the control center of the terminal. It connects all parts of the terminal through various interfaces and lines, and it performs the terminal's functions and processes data by running or executing the software programs and/or modules stored in the memory 502 and invoking the data stored there, thereby monitoring the terminal as a whole. Optionally, the processor 508 may include one or more processing cores and may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication; the modem processor may also not be integrated into the processor 508.
The terminal further includes a power supply 509 (such as a battery) powering the components. Preferably, the power supply may be logically connected to the processor 508 through a power management system to manage charging, discharging, and power consumption. The power supply 509 may also include one or more DC or AC power sources, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and other components.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, which are not described here again. Specifically, in this embodiment, the processor 508 in the terminal loads the executable files corresponding to the processes of one or more applications into the memory 502 according to instructions and runs the applications stored in the memory 502, thereby implementing the various functions of the terminal-side interaction method between user terminals described above.
An embodiment of the present invention further provides a server. As shown in FIG. 6, the server includes a receiving unit 601, a searching unit 602, a sending unit 603, and a generating unit 604, as follows:
The receiving unit 601 is configured to receive the location information, sent by the terminal, of the first avatar created by the user.
The searching unit 602 is configured to search for second avatars within the preset distance of the location indicated by the location information.
The sending unit 603 is configured to send the information about the second avatar to the terminal.
The generating unit 604 is configured to generate the interaction scene according to the interaction request.
In some embodiments, the receiving unit 601, the searching unit 602, the sending unit 603, and the generating unit 604 may be used to implement the corresponding steps of the method embodiments of this application; for the specific functions of each unit, refer to the foregoing method embodiments, which are not repeated here.
In this embodiment, after the receiving unit receives the location information of the first avatar created by the user and sent by the terminal, the searching unit searches for second avatars within the preset distance of the indicated location and the sending unit sends their information to the terminal; after the receiving unit receives the interaction request that the first avatar initiates toward the second avatar, the generating unit generates the interaction scene according to the request and the sending unit sends the scene to the terminal, so that the terminal renders the first avatar and the second avatar into it. By rendering different avatars into the same real interaction scene, the terminal of this embodiment realizes interaction between avatars and broadens the application scenarios of technologies combining the virtual and the real.
An embodiment of the present invention further provides a server. As shown in FIG. 7, the server may be built on a cluster system, as an electronic device whose unit functions are either merged into one body or deployed separately, and includes at least a database for storing data and a processor for processing data, or includes a storage medium provided inside the server or set up independently.
The processor for processing data may be implemented, when performing processing, by a microprocessor, a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA); the storage medium contains operation instructions, which may be computer-executable code, and the steps of the server-side interaction method between user terminals of the embodiments of the present invention described above are implemented through those operation instructions.
As an example of a hardware entity 700, the server is shown in FIG. 7 and includes a processor 701, a storage medium 702, and at least one external communication interface 703, all connected through a bus 704.
It should be noted that the above description of the server is similar to the corresponding method description and is not repeated here; for technical details not disclosed in the server embodiment of the present invention, refer to the description of the corresponding method embodiment of the present invention.
Finally, an embodiment of the present invention further provides an interaction system between avatars, including a terminal and a server, where the terminal may be the terminal described above and the server may be the server described above; for the specific interaction process, refer to the foregoing description, which is not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in hardware or as a software functional unit. If implemented as a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied as a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, an apparatus, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments above are intended only to describe the technical solutions of the embodiments of the present invention, not to limit them. Although the embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (33)

  1. An interaction method between user terminals, applied to a first user terminal, the method comprising:
    acquiring geographic location information of a first avatar corresponding to the first user terminal, and sending the geographic location information of the first avatar to a server, the server storing a mapping table between avatars and geographic location information;
    receiving information, sent by the server, about a second avatar within a preset distance range corresponding to the geographic location information;
    sending to the server, according to the information about the second avatar, an interaction request directed at the second avatar, so that the server establishes a first connection with the first user terminal and a second connection with a second user terminal corresponding to the second avatar;
    acquiring an interaction scene image, the interaction scene image being a real scene image corresponding to the first user terminal, and rendering the first avatar and the second avatar into the interaction scene image; and acquiring interaction content and sending the interaction content to the server over the first connection, so that the server sends the interaction content to the second user terminal over the second connection.
  2. The method according to claim 1, wherein sending the geographic location information of the first avatar to the server comprises: in response to an operation on a map control, sending the location information of the first avatar to the server, the server acquiring map data corresponding to the geographic location information; wherein the information about the second avatar comprises geographic location information of the second avatar; the method further comprises: receiving the map data sent by the server and displaying a corresponding map according to the map data, and rendering the second avatar on the map according to the geographic location information of the second avatar; and wherein sending the interaction request directed at the second avatar according to the information about the second avatar comprises: in response to a selection operation on the second avatar on the map, sending the server the interaction request initiated by the first avatar toward the second avatar.
  3. The method according to claim 2, wherein there are multiple second avatars, the geographic location information of the second avatars comprises the geographic location information of each second avatar, and the information about the second avatars further comprises feature data of each second avatar; and wherein rendering the second avatars on the map according to their geographic location information comprises: for any second avatar, determining its position on the map according to its geographic location information and rendering the second avatar at that position according to its feature data.
  4. The method according to claim 2, wherein there are multiple second avatars, the geographic location information of the second avatars comprises one or more pieces of geographic location information, and the information about the second avatars further comprises the number of second avatars corresponding to each piece of geographic location information; and wherein rendering the second avatars on the map according to their geographic location information comprises: for any piece of the one or more pieces of geographic location information, determining its position on the map and displaying at the determined position a marker containing the number of second avatars corresponding to that piece; in response to an operation on any displayed marker, sending the server a request to acquire the feature data and geographic location information of each second avatar corresponding to the marker; receiving the feature data and geographic location information of each such second avatar from the server; and displaying each second avatar on the map according to its feature data and geographic location information.
  5. The method according to claim 1, wherein acquiring the interaction scene image comprises: capturing a real scene image of the place where the first user terminal is located and using it as the interaction scene image.
  6. The method according to claim 1, wherein acquiring the interaction scene image comprises: receiving the interaction scene image corresponding to the first user terminal generated by the server according to the interaction request.
  7. The method according to claim 6, wherein the interaction scene image corresponding to the first user terminal is the real scene image corresponding to the geographic location information of the first avatar; and wherein rendering the first avatar and the second avatar into the interaction scene image comprises: rendering the first avatar and the second avatar into the real scene image corresponding to the geographic location information of the first avatar corresponding to the first user terminal.
  8. The method according to claim 6, wherein the interaction request carries specified location information corresponding to the first user terminal, and the interaction scene image corresponding to the first user terminal is the real scene image at the location corresponding to the specified location information; and wherein rendering the first avatar and the second avatar into the interaction scene comprises: rendering the first avatar and the second avatar into the real scene image at the location corresponding to the specified location information.
  9. The method according to claim 1, wherein the feature data of an avatar comprises facial feature data, a facial texture map, and dress-up data; and before the geographic location information of the first avatar is sent to the server, the method further comprises: in response to the user's scanning operation on the face, acquiring the facial feature data and facial texture map of the first avatar, and in response to the selection of dress-up element identifiers, acquiring the dress-up data of the first avatar; and sending the server an avatar-creation request carrying the facial feature data, facial texture map, and dress-up data of the first avatar.
  10. The method according to claim 1, further comprising: receiving an update notification message sent by the server, the update notification message notifying an update of the second avatar selected by the user; and updating the second avatar according to the update notification message.
  11. An interaction method between user terminals, applied to a server, the method comprising:
    receiving geographic location information, sent by a first user terminal, of a first avatar corresponding to the first user terminal;
    searching a mapping table between avatars and geographic location information for information about a second avatar within a preset distance range corresponding to the geographic location information;
    sending the information about the second avatar to the first user terminal;
    receiving an interaction request, sent by the first user terminal according to the information about the second avatar, that the first avatar initiates toward the second avatar, and, according to the interaction request, establishing a first connection with the first user terminal and a second connection with a second user terminal corresponding to the second avatar;
    generating an interaction scene image according to the interaction request, the interaction scene image being a real scene image corresponding to the first user terminal, and sending the interaction scene image to the first user terminal so that the first user terminal renders the first avatar and the second avatar into the interaction scene image; and
    receiving, over the first connection, interaction content sent by the first user terminal, and sending the interaction content to the second user terminal over the second connection.
  12. The method according to claim 11, wherein there are multiple second avatars and the information about the second avatars comprises character-location list information of the multiple second avatars, the character-location list information comprising the feature data and location information of each of the one or more second avatars; and after the geographic location information of the first avatar is received, the method further comprises: sending the map data corresponding to the geographic location information to the first user terminal, so that the first user terminal displays a map according to the map data and renders each second avatar onto the map according to its feature data and location information.
  13. The method according to claim 11, wherein there are multiple second avatars and the information about the second avatars comprises one or more pieces of geographic location information and the number of second avatars corresponding to each piece; and after the geographic location information of the first avatar is received, the method further comprises: sending the map data corresponding to the geographic location information to the first user terminal, so that the first user terminal displays a map according to the map data and, according to each piece of geographic location information and its corresponding number of second avatars, displays at the corresponding position on the map a marker containing that number; receiving a request, sent by the first user terminal in response to an operation on any displayed marker, to acquire the feature data and geographic location information of each second avatar corresponding to the marker; and sending the feature data and geographic location information of each such second avatar to the first user terminal, so that the first user terminal displays each second avatar on the map according to its feature data and geographic location information.
  14. The method according to claim 11, wherein the interaction request carries the location information of the first avatar, and generating the interaction scene image according to the interaction request comprises: acquiring, according to the location information of the first avatar, the real scene image corresponding to that location information.
  15. The method according to claim 11, wherein the interaction request carries specified location information, and generating the interaction scene image according to the interaction request comprises: acquiring, according to the specified location information, the real scene image at the location corresponding to the specified location information.
  16. A terminal, comprising one or more memories and one or more processors, wherein the one or more memories store one or more instruction modules configured to be executed by the one or more processors, and the one or more instruction modules comprise:
    an obtaining unit configured to acquire geographic location information of a first avatar corresponding to a first user terminal;
    a sending unit configured to send the geographic location information of the first avatar to a server, the server storing a mapping table between avatars and geographic location information; and
    a receiving unit configured to receive information, sent by the server, about a second avatar within a preset distance range corresponding to the geographic location information;
    wherein the sending unit is further configured to send to the server, according to the information about the second avatar, an interaction request directed at the second avatar, so that the server establishes a first connection with the first user terminal and a second connection with a second user terminal corresponding to the second avatar; the obtaining unit is further configured to acquire an interaction scene image, the interaction scene image being a real scene image corresponding to the first user terminal; a processing unit is configured to render the first avatar and the second avatar into the interaction scene image; the obtaining unit is further configured to acquire interaction content; and the sending unit is further configured to send the interaction content to the server over the first connection, so that the server sends the interaction content to the second user terminal over the second connection.
  17. The terminal according to claim 16, wherein the sending unit is further configured to, in response to an operation on a map control, send the location information of the first avatar to the server, the server acquiring map data corresponding to the geographic location information; the information about the second avatar comprises geographic location information of the second avatar; the receiving unit is further configured to receive the map data sent by the server; the processing unit is further configured to display a corresponding map according to the map data and to render the second avatar on the map according to its geographic location information; and the sending unit is further configured to, in response to a selection operation on the second avatar on the map, send the server the interaction request initiated by the first avatar toward the second avatar.
  18. The terminal according to claim 17, wherein there are multiple second avatars, the geographic location information of the second avatars comprises the geographic location information of each second avatar, and the information about the second avatars further comprises the feature data of each second avatar; and the processing unit is further configured to, for any second avatar, determine its position on the map according to its geographic location information and render the second avatar at that position according to its feature data.
  19. The terminal according to claim 17, wherein there are multiple second avatars, the geographic location information of the second avatars comprises one or more pieces of geographic location information, and the information about the second avatars further comprises the number of second avatars corresponding to each piece; the processing unit is further configured to, for any piece of the one or more pieces of geographic location information, determine its position on the map and display at the determined position a marker containing the corresponding number of second avatars; the sending unit is further configured to, in response to an operation on any displayed marker, send the server a request to acquire the feature data and geographic location information of each second avatar corresponding to the marker; the receiving unit is further configured to receive the feature data and geographic location information of each such second avatar from the server; and the processing unit is further configured to display each second avatar on the map according to its feature data and geographic location information.
  20. The terminal according to claim 16, wherein the obtaining unit is further configured to capture a real scene image of the place where the first user terminal is located and use it as the interaction scene image.
  21. The terminal according to claim 16, wherein the obtaining unit is further configured to receive the interaction scene image corresponding to the first user terminal generated by the server according to the interaction request.
  22. The terminal according to claim 21, wherein the interaction scene image is the real scene image corresponding to the location information of the first avatar corresponding to the first user terminal; and the processing unit is further configured to render the first avatar and the second avatar into the real scene image corresponding to the location information of the first avatar corresponding to the first user terminal.
  23. The terminal according to claim 21, wherein the interaction request carries specified location information corresponding to the first user terminal, and the interaction scene image is the real scene image at the location corresponding to the specified location information; and the processing unit is further configured to render the first avatar and the second avatar into the real scene image at the location corresponding to the specified location information.
  24. The terminal according to claim 16, wherein the feature data of an avatar comprises facial feature data, a facial texture map, and dress-up data; the obtaining unit is further configured to, in response to the user's scanning operation on the face, acquire the facial feature data and facial texture map of the first avatar, and, in response to the selection of dress-up element identifiers, acquire the dress-up data of the first avatar; and the sending unit is further configured to send the server an avatar-creation request carrying the facial feature data, facial texture map, and dress-up data of the first avatar.
  25. The terminal according to claim 16, wherein the receiving unit is further configured to receive an update notification message for the second avatar sent by the server, and the processing unit is further configured to update the second avatar according to the update notification message.
  26. A server, comprising one or more memories and one or more processors, wherein the one or more memories store one or more instruction modules configured to be executed by the one or more processors, and the one or more instruction modules comprise:
    a receiving unit configured to receive geographic location information, sent by a first user terminal, of a first avatar corresponding to the first user terminal;
    a searching unit configured to search a mapping table between avatars and geographic location information for information about a second avatar within a preset distance range corresponding to the geographic location information; and
    a sending unit configured to send the information about the second avatar to the first user terminal;
    wherein the receiving unit is further configured to receive an interaction request, sent by the first user terminal according to the information about the second avatar, that the first avatar initiates toward the second avatar; a generating unit is configured to establish, according to the interaction request, a first connection with the first user terminal and a second connection with a second user terminal corresponding to the second avatar, and to generate an interaction scene image according to the interaction request, the interaction scene image being a real scene image corresponding to the first user terminal; the sending unit is further configured to send the interaction scene image to the first user terminal, so that the first user terminal renders the first avatar and the second avatar into the interaction scene image; and the receiving unit is further configured to receive, over the first connection, interaction content sent by the first user terminal and to send the interaction content to the second user terminal over the second connection.
  27. The server according to claim 26, wherein there are multiple second avatars and the information about the second avatars comprises character-location list information of the multiple second avatars, the character-location list information comprising the feature data and location information of each of the one or more second avatars; and after the receiving unit receives the geographic location information of the first avatar, the sending unit is further configured to send the map data corresponding to the geographic location information to the first user terminal, so that the first user terminal displays a map according to the map data and renders each second avatar onto the map according to its feature data and location information.
  28. The server according to claim 26, wherein there are multiple second avatars and the information about the second avatars comprises one or more pieces of geographic location information and the number of second avatars corresponding to each piece; after the receiving unit receives the geographic location information of the first avatar, the sending unit is further configured to send the corresponding map data to the first user terminal, so that the first user terminal displays a map according to the map data and, according to each piece of geographic location information and its corresponding number of second avatars, displays at the corresponding position on the map a marker containing that number; the receiving unit is further configured to receive a request, sent by the first user terminal in response to an operation on any displayed marker, to acquire the feature data and geographic location information of each second avatar corresponding to the marker; and the sending unit is further configured to send the feature data and geographic location information of each such second avatar to the first user terminal, so that the first user terminal displays each second avatar on the map accordingly.
  29. The server according to claim 26, wherein the interaction request carries the location information of the first avatar, and the generating unit is further configured to acquire, according to the location information of the first avatar, the real scene image corresponding to that location information.
  30. The server according to claim 26, wherein the interaction request carries specified location information, and generating the interaction scene image according to the interaction request comprises: acquiring, according to the specified location information, the real scene image at the location corresponding to the specified location information.
  31. An interaction system between avatars, comprising the terminal according to any one of claims 16 to 25 and the server according to any one of claims 26 to 30.
  32. A non-volatile computer-readable storage medium storing computer-readable instructions that can cause at least one processor to perform the method according to any one of claims 1 to 10.
  33. A non-volatile computer-readable storage medium storing computer-readable instructions that can cause at least one processor to perform the method according to any one of claims 11 to 15.
PCT/CN2017/117058 2016-12-21 2017-12-19 用户终端之间的互动方法、终端、服务器、系统及存储介质 WO2018113639A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/364,370 US10636221B2 (en) 2016-12-21 2019-03-26 Interaction method between user terminals, terminal, server, system, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611191383.0 2016-12-21
CN201611191383.0A CN107741809B (zh) 2016-12-21 2016-12-21 一种虚拟形象之间的互动方法、终端、服务器及系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/364,370 Continuation US10636221B2 (en) 2016-12-21 2019-03-26 Interaction method between user terminals, terminal, server, system, and storage medium

Publications (1)

Publication Number Publication Date
WO2018113639A1 true WO2018113639A1 (zh) 2018-06-28

Family

ID=61234991

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117058 WO2018113639A1 (zh) 2016-12-21 2017-12-19 用户终端之间的互动方法、终端、服务器、系统及存储介质

Country Status (3)

Country Link
US (1) US10636221B2 (zh)
CN (1) CN107741809B (zh)
WO (1) WO2018113639A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852770A (zh) * 2018-08-21 2020-02-28 阿里巴巴集团控股有限公司 数据处理方法、装置、计算设备及显示设备
CN111050187A (zh) * 2019-12-09 2020-04-21 腾讯科技(深圳)有限公司 一种虚拟视频处理的方法、装置及存储介质
CN113181643A (zh) * 2021-04-29 2021-07-30 广州三七极创网络科技有限公司 虚拟角色的绘制方法、装置及电子设备
CN113591489A (zh) * 2021-07-30 2021-11-02 中国平安人寿保险股份有限公司 语音交互方法、装置及相关设备

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108829234A (zh) * 2018-04-27 2018-11-16 上海爱优威软件开发有限公司 能够在线互动的运动方法及系统
CN113112614B (zh) * 2018-08-27 2024-03-19 创新先进技术有限公司 基于增强现实的互动方法及装置
CN109271553A (zh) * 2018-08-31 2019-01-25 乐蜜有限公司 一种虚拟形象视频播放方法、装置、电子设备及存储介质
CN109445579A (zh) * 2018-10-16 2019-03-08 翟红鹰 基于区块链的虚拟形象交互方法、终端及可读存储介质
CN109636886B (zh) * 2018-12-19 2020-05-12 网易(杭州)网络有限公司 图像的处理方法、装置、存储介质和电子装置
CN110430553B (zh) * 2019-07-31 2022-08-16 广州小鹏汽车科技有限公司 车辆间的互动方法、装置、存储介质及控制终端
CN110691279A (zh) * 2019-08-13 2020-01-14 北京达佳互联信息技术有限公司 一种虚拟直播的方法、装置、电子设备及存储介质
CN110635995A (zh) * 2019-09-30 2019-12-31 上海掌门科技有限公司 一种实现用户间交互的方法、装置与系统
CN110837300B (zh) * 2019-11-12 2020-11-27 北京达佳互联信息技术有限公司 虚拟交互的方法、装置、电子设备及存储介质
CN111408136B (zh) * 2020-02-28 2021-02-26 苏州叠纸网络科技股份有限公司 一种游戏交互控制方法、装置及存储介质
CN112346562A (zh) * 2020-10-19 2021-02-09 深圳市太和世纪文化创意有限公司 一种沉浸式三维虚拟交互方法、系统以及电子设备
CN112330819B (zh) * 2020-11-04 2024-02-06 腾讯科技(深圳)有限公司 基于虚拟物品的交互方法、装置及存储介质
CN113096244A (zh) * 2021-04-14 2021-07-09 谭昌锋 一种场景社交方法及系统
CN113313837A (zh) * 2021-04-27 2021-08-27 广景视睿科技(深圳)有限公司 一种增强现实的环境体验方法、装置及电子设备
CN113457155A (zh) * 2021-06-25 2021-10-01 网易(杭州)网络有限公司 游戏中的显示控制方法、装置、电子设备及可读存储介质
CN113384901B (zh) * 2021-08-16 2022-01-18 北京蔚领时代科技有限公司 交互程序实例处理方法、装置、计算机设备及存储介质
CN113689577A (zh) * 2021-09-03 2021-11-23 上海涞秋医疗科技有限责任公司 虚拟三维模型与实体模型匹配的方法、系统、设备及介质
KR102630218B1 (ko) * 2021-12-27 2024-01-29 주식회사 카카오 지도 기반 가상 공간에서의 대화 서비스 제공 방법 및 장치
CN115134324B (zh) * 2022-05-11 2023-04-25 钉钉(中国)信息技术有限公司 交互卡片的更新方法、服务器、终端及存储介质
CN114723860B (zh) * 2022-06-08 2022-10-04 深圳智华科技发展有限公司 虚拟形象的生成方法、装置、设备及存储介质
CN117547838A (zh) * 2022-08-05 2024-02-13 腾讯科技(成都)有限公司 社交互动的方法、装置、设备、可读存储介质及程序产品
US20240086142A1 (en) * 2022-09-09 2024-03-14 Rovi Guides, Inc. Dynamically adjusting a personal boundary of an avatar in an xr environment
CN117931327A (zh) * 2022-10-13 2024-04-26 腾讯科技(成都)有限公司 虚拟对象的显示方法、装置、设备及存储介质
WO2024087814A1 (zh) * 2022-10-25 2024-05-02 聚好看科技股份有限公司 一种虚拟会议中范围交流的实现方法及显示设备、移动终端
CN116931737A (zh) * 2023-08-03 2023-10-24 重庆康建光电科技有限公司 一种人与场景的虚拟现实互动实现系统及方法
CN117037048B (zh) * 2023-10-10 2024-01-09 北京乐开科技有限责任公司 一种基于虚拟形象的社交互动方法及系统
CN117193541B (zh) * 2023-11-08 2024-03-15 安徽淘云科技股份有限公司 虚拟形象交互方法、装置、终端和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1743043A (zh) * 2005-06-19 2006-03-08 珠海市西山居软件有限公司 一种网络游戏系统及其实现方法
CN102981761A (zh) * 2012-11-13 2013-03-20 广义天下文化传播(北京)有限公司 用于移动终端应用程序的触发式交互方法
CN103905291A (zh) * 2012-12-27 2014-07-02 腾讯科技(深圳)有限公司 一种基于地理位置的通讯方法、移动终端、服务器及系统
WO2015135476A1 (en) * 2014-03-11 2015-09-17 Tencent Technology (Shenzhen) Company Limited Voice interaction method and apparatus

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6057856A (en) * 1996-09-30 2000-05-02 Sony Corporation 3D virtual reality multi-user interaction with superimposed positional information display for each user
CN100417143C (zh) * 2004-12-08 2008-09-03 腾讯科技(深圳)有限公司 基于即时通信平台的个人虚拟形象互动娱乐系统及方法
CN100579085C (zh) * 2007-09-25 2010-01-06 腾讯科技(深圳)有限公司 用户界面的实现方法、用户终端和即时通讯系统
US9357025B2 (en) * 2007-10-24 2016-05-31 Social Communications Company Virtual area based telephony communications
CN101930284B (zh) * 2009-06-23 2014-04-09 腾讯科技(深圳)有限公司 一种实现视频和虚拟网络场景交互的方法、装置和系统
CN103368816A (zh) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 基于虚拟人物形象的即时通讯方法及系统
KR102516124B1 (ko) * 2013-03-11 2023-03-29 매직 립, 인코포레이티드 증강 및 가상 현실을 위한 시스템 및 방법
US20150193982A1 (en) * 2014-01-03 2015-07-09 Google Inc. Augmented reality overlays using position and orientation to facilitate interactions between electronic devices
CN103929479B (zh) * 2014-04-10 2017-12-12 惠州Tcl移动通信有限公司 移动终端模拟真实场景实现用户互动的方法及系统
US9947139B2 (en) * 2014-06-20 2018-04-17 Sony Interactive Entertainment America Llc Method and apparatus for providing hybrid reality environment
CN105468142A (zh) * 2015-11-16 2016-04-06 上海璟世数字科技有限公司 基于增强现实技术的互动方法、系统和终端
CN106100983A (zh) * 2016-08-30 2016-11-09 黄在鑫 一种基于增强现实与gps定位技术的移动社交网络系统
CN106846032A (zh) * 2016-11-24 2017-06-13 北京小米移动软件有限公司 电商应用程序中的互动方法、装置及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1743043A (zh) * 2005-06-19 2006-03-08 珠海市西山居软件有限公司 一种网络游戏系统及其实现方法
CN102981761A (zh) * 2012-11-13 2013-03-20 广义天下文化传播(北京)有限公司 用于移动终端应用程序的触发式交互方法
CN103905291A (zh) * 2012-12-27 2014-07-02 腾讯科技(深圳)有限公司 一种基于地理位置的通讯方法、移动终端、服务器及系统
WO2015135476A1 (en) * 2014-03-11 2015-09-17 Tencent Technology (Shenzhen) Company Limited Voice interaction method and apparatus

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852770A (zh) * 2018-08-21 2020-02-28 阿里巴巴集团控股有限公司 数据处理方法、装置、计算设备及显示设备
CN110852770B (zh) * 2018-08-21 2023-05-26 阿里巴巴集团控股有限公司 数据处理方法、装置、计算设备及显示设备
CN111050187A (zh) * 2019-12-09 2020-04-21 腾讯科技(深圳)有限公司 一种虚拟视频处理的方法、装置及存储介质
CN111050187B (zh) * 2019-12-09 2020-12-15 腾讯科技(深圳)有限公司 一种虚拟视频处理的方法、装置及存储介质
CN113181643A (zh) * 2021-04-29 2021-07-30 广州三七极创网络科技有限公司 虚拟角色的绘制方法、装置及电子设备
CN113591489A (zh) * 2021-07-30 2021-11-02 中国平安人寿保险股份有限公司 语音交互方法、装置及相关设备
CN113591489B (zh) * 2021-07-30 2023-07-18 中国平安人寿保险股份有限公司 语音交互方法、装置及相关设备

Also Published As

Publication number Publication date
US10636221B2 (en) 2020-04-28
US20190221045A1 (en) 2019-07-18
CN107741809B (zh) 2020-05-12
CN107741809A (zh) 2018-02-27

Similar Documents

Publication Publication Date Title
WO2018113639A1 (zh) 用户终端之间的互动方法、终端、服务器、系统及存储介质
CN108234276B (zh) 一种虚拟形象之间互动的方法、终端及系统
WO2019233229A1 (zh) 一种图像融合方法、装置及存储介质
CN108307140B (zh) 网络通话方法、装置和计算机可读存储介质
WO2016173513A1 (zh) 基于推荐内容的互动方法、终端和服务器
KR101977526B1 (ko) 화상 스플라이싱 방법, 단말, 및 시스템
WO2015172704A1 (en) To-be-shared interface processing method, and terminal
WO2012088665A1 (zh) 对联系人进行处理的方法及移动终端
WO2017193998A1 (zh) 即时通信方法和装置
CN111597455B (zh) 社交关系的建立方法、装置、电子设备及存储介质
CN108513088B (zh) 群组视频会话的方法及装置
WO2018018698A1 (zh) 一种增强现实ar的信息处理方法、装置及系统
WO2019149028A1 (zh) 应用程序的下载方法及终端
CN106127829B (zh) 一种增强现实的处理方法、装置及终端
CN107979628B (zh) 获取虚拟物品的方法、装置及系统
CN110673770B (zh) 消息展示方法及终端设备
CN108900407B (zh) 会话记录的管理方法、装置及存储介质
CN109426343B (zh) 基于虚拟现实的协作训练方法及系统
CN108876878B (zh) 头像生成方法及装置
CN108228033A (zh) 一种消息显示方法及移动终端
CN105303591B (zh) 在拼图上叠加地点信息的方法、终端及服务器
CN109639569A (zh) 一种社交通信方法及终端
CN106330672B (zh) 一种即时通信方法及系统
CN108880974B (zh) 会话群组创建方法及装置
CN108880975B (zh) 信息显示方法、装置及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17882791

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17882791

Country of ref document: EP

Kind code of ref document: A1