WO2021139328A1 - A virtual prop allocation method and related apparatus - Google Patents

A virtual prop allocation method and related apparatus

Info

Publication number: WO2021139328A1 (application PCT/CN2020/124292)
Authority: WIPO (PCT)
Prior art keywords: terminal device, geographic location, virtual item, information, virtual
Application number: PCT/CN2020/124292
Other languages: English (en), French (fr)
Inventor: 梁宇轩
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date: (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by 腾讯科技(深圳)有限公司

Priority and family filings:

  • Priority to EP20911764.7A: EP3995935A4 (en)
  • Priority to JP2022517927A: JP7408785B2 (ja)
  • Priority to KR1020227005601A: KR20220032629A (ko)
  • Publication of WO2021139328A1 (zh)
  • Priority to US17/581,502: US20220148231A1 (en)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/215 Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/216 Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/61 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor using advertising information
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/90 Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • A63F13/92 Video game devices specially adapted to be hand-held while playing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207 Discounts or incentives, e.g. coupons or rebates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0261 Targeted advertisements based on user location
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/131 Protocols for games, networked simulations or virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/52 Network services specially adapted for the location of the user terminal
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225 Feedback of the input speech

Definitions

  • the embodiments of the present application relate to the field of augmented reality (AR) technology, and in particular to the distribution of virtual props.
  • the embodiment of the present application provides a virtual prop distribution method and related devices, which greatly enrich the user's experience of and interest in interactive activities and combine interactive props to present a complete effect experience.
  • an embodiment of the present application provides a method for distributing virtual items, which may include:
  • sending, to the first terminal device, the geographic location information of the first virtual item corresponding to the first terminal device, to instruct it to obtain the corresponding first virtual item, where the first terminal device is any one of the at least one terminal device.
  • the embodiment of the present application provides a method for virtual item distribution, which may include:
  • the terminal device obtains its own corresponding first virtual item from the corresponding geographic location.
  • an embodiment of the present application provides a server, and the server may include:
  • a receiving unit, configured to obtain the geographic location and status information of at least one terminal device in the first scenario;
  • the determining unit is configured to determine the first virtual item corresponding to each terminal device based on the geographic location and state information of each terminal device received by the receiving unit, and the first virtual item corresponds to the first scene;
  • the determining unit is used to determine the geographic location information at which the first virtual props are respectively released;
  • the sending unit is configured to send, to the first terminal device, the geographic location information of the first virtual item corresponding to the first terminal device, to instruct it to obtain the corresponding first virtual item, the first terminal device being any one of the at least one terminal device.
  • the status information includes environment information of the location where the terminal device is situated and user characteristic information; the at least one terminal device includes multiple terminal devices, and the determining unit includes:
  • the grouping module is configured to group multiple terminal devices based on the geographic locations of the multiple terminal devices received by the receiving unit to obtain at least one terminal device grouping;
  • the determining module is used to determine the corresponding area on the map of the terminal device grouping obtained by the grouping module, wherein the map is pre-divided into a plurality of areas with corresponding geographic location ranges;
  • the determining module is configured to respectively determine the distribution weight of the corresponding terminal device according to the user characteristic information received by the receiving unit;
  • the determining module is used to determine the first virtual item of the corresponding terminal device based on the environment information and the distribution weight of the terminal device when there are virtual items corresponding to the area corresponding to the terminal device grouping.
  • the grouping module may include:
  • the determining sub-module is used to determine the relative position between any two terminal devices based on the geographic locations of multiple terminal devices;
  • the grouping sub-module is used to group multiple terminal devices based on the relative position between any two terminal devices determined by the determining sub-module.
  • the determining sub-module calculates the geographic locations of any two terminal devices through a breadth-first algorithm to obtain the relative position between any two terminal devices.
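One plausible reading of this breadth-first grouping is a BFS over a proximity graph: two terminal devices are linked when their relative distance is below a threshold, and each connected component becomes one terminal device group. The sketch below illustrates only that interpretation; the threshold `max_gap`, the planar coordinates, and all names are assumptions, not the patent's actual algorithm.

```python
from collections import deque
from math import dist

def group_terminals(positions, max_gap=50.0):
    """Group devices whose pairwise distances chain within max_gap,
    via breadth-first search over a proximity graph."""
    ids = list(positions)
    # adjacency: two devices are neighbours if they are close enough
    neighbours = {
        a: [b for b in ids if b != a and dist(positions[a], positions[b]) <= max_gap]
        for a in ids
    }
    seen, groups = set(), []
    for start in ids:
        if start in seen:
            continue
        queue, component = deque([start]), []
        seen.add(start)
        while queue:                      # standard BFS traversal
            node = queue.popleft()
            component.append(node)
            for nxt in neighbours[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        groups.append(sorted(component))
    return groups

pos = {"t1": (0, 0), "t2": (10, 0), "t3": (500, 500)}
print(group_terminals(pos))  # t1 and t2 cluster together; t3 is alone
```

Real geographic coordinates would need a great-circle distance rather than `math.dist`, but the BFS structure is unchanged.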
  • the sending unit may include:
  • the sending module is configured to send a voice message or a text message to the first terminal device, where the voice message or the text message carries the geographic location information of the first virtual item corresponding to the first terminal device.
  • the server further includes:
  • the receiving unit is also used to receive the switching instruction sent by the terminal device;
  • the switching unit is configured to switch the first scene in which the terminal device is located to the second scene according to the switching instruction received by the receiving unit, where the second scene corresponds to a second virtual item and the level of the second virtual item is higher than the level of the first virtual item.
  • an embodiment of the present application provides a terminal device, which may include:
  • the acquiring unit is used to acquire geographic location and status information in the first scenario
  • the sending unit is configured to send the geographic location and status information to the server, so that the server determines, based on the geographic location and status information of each terminal device, the first virtual item corresponding to each terminal device, the first virtual item corresponding to the first scene;
  • the receiving unit is configured to receive the geographic location information of the first virtual item sent by the server;
  • the acquiring unit is configured to acquire the corresponding first virtual item from the corresponding geographic location according to the geographic location information, received by the receiving unit, at which the first virtual item is released.
  • the terminal device may also include:
  • the collecting unit is used to collect the first surrounding environment information through the configured camera;
  • the first correction unit is used to correct the geographic location based on the corresponding Internet Protocol IP address and the first surrounding environment information.
  • the terminal device may also include:
  • the acquiring unit is configured to acquire multiple historical geographic location information, and collect second surrounding environment information through the configured camera;
  • the second correction unit is configured to correct the geographic location based on the multiple historical geographic location information and the second surrounding environment information obtained by the acquiring unit.
  • the receiving unit may include:
  • the receiving module is configured to receive a voice message or text message sent by the server, where the voice message or text message carries the geographic location information of the first virtual item corresponding to the first terminal device.
  • the acquiring unit is further configured to acquire the switching instruction in the first scenario;
  • the sending unit is configured to send the switching instruction to the server, so that the server switches the first scene to the second scene, where the second scene corresponds to a second virtual item and the level of the second virtual item is higher than the level of the first virtual item.
  • an embodiment of the present application provides a server, including a processor and a memory; the memory is used to store program instructions, and when the server is running, the processor executes the program instructions stored in the memory to make the server perform the method described above.
  • an embodiment of the present application provides a terminal device, including a processor and a memory; the memory is used to store program instructions, and when the terminal device is running, the processor executes the program instructions stored in the memory to make the terminal device perform the method described above.
  • embodiments of the present application provide a computer-readable storage medium, where the storage medium is used to store a computer program, and the computer program is used to execute the method in the above aspect.
  • the embodiments of the present application provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the method in the above aspects.
  • after the geographic location and status information of each terminal device are obtained, the first virtual prop corresponding to each terminal device is determined, and the geographic location information at which the corresponding first virtual prop is released is sent to each terminal device, so that each terminal device can obtain its corresponding first virtual prop from the corresponding geographic location according to the received geographic location information.
  • in this way, the virtual props that can be allocated to each terminal device in different scenarios are determined, which greatly enriches the user's experience of and interest in interactive activities and combines interactive props to present a complete effect experience.
  • FIG. 1 is a schematic diagram of the system architecture of virtual item distribution in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an embodiment of a method for distributing virtual props in an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an AR system in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of dividing areas on a map in an embodiment of the present application.
  • FIG. 5 is a schematic diagram showing the area corresponding to the virtual item on the map in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of determining distribution weights by age in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of grouping by a breadth-first algorithm in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of feeding back the release geographic location information by voice in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of feeding back the release geographic location information in text in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of explicitly or implicitly displaying the first virtual item in an embodiment of the present application.
  • FIG. 11 is a voice interaction system provided in an embodiment of the present application.
  • FIG. 12 is another voice interaction system provided in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another embodiment of a method for distributing virtual items in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of changing a scene in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an embodiment of a server provided in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of another embodiment of a server provided in an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an embodiment of a terminal device provided in an embodiment of the present application.
  • FIG. 18 is a schematic diagram of the hardware structure of the communication device in an embodiment of the present application.
  • the embodiments of the present application provide a method, a server, and a terminal device for distributing virtual props, which are used to greatly enrich the user's experience and interest in interactive activities, and combine the interactive props to show a complete effect experience.
  • the processes, methods, products, or equipment described in this application are not limited to the steps or units expressly listed, and may include other steps or units that are not expressly listed or that are inherent to these processes, methods, products, or equipment.
  • the naming or numbering of steps in this application does not mean that the steps must be executed in the time or logical sequence indicated by the naming or numbering; the execution order of named or numbered process steps can be changed according to the technical purpose, as long as the same or similar technical effects are achieved.
  • an embodiment of the present application provides a method for allocating virtual items, which can be applied to the system architecture shown in FIG. 1.
  • the system includes at least one terminal device and a server.
  • multiple users can participate together, and each user can hold a terminal device, such as terminal device 1, terminal device 2, terminal device 3, terminal device 4, and so on.
  • each terminal device obtains its own geographic location and status information, which can include, but is not limited to, the current environment the user is in and user-related characteristic information.
  • the terminal device can send the geographic location and status information to the server, so that the server can obtain the user's geographic location and status information in real time, and then determine the virtual items assigned to each terminal device.
  • the server will determine the release location of each virtual item according to the configured rules, and then send it to the corresponding terminal device through voice broadcast, text display, or the like, so that the terminal device guides the corresponding user to obtain the virtual item according to the release geographic location.
  • the server can be considered a collection of at least AR processing capabilities, LBS (location-based service) capabilities, and voice processing capabilities.
  • the terminal device described above is at least a collection of a camera, sensors, and other components.
  • the terminal devices include, but are not limited to, mobile phones, mobile terminals, tablets, and laptops, or wearable smart devices with communication functions, such as smart watches and smart bracelets; this is not specifically limited in the embodiments of the present application.
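As a rough illustration of the round trip described above, the following sketch models a terminal reporting its geographic location and status information and the server answering with a prop assignment. All names here (`StatusReport`, `assign_props`, the `"area-1"` key) are hypothetical; the patent does not define a concrete data model.

```python
from dataclasses import dataclass

@dataclass
class StatusReport:
    device_id: str
    lat: float          # current latitude
    lon: float          # current longitude
    environment: str    # e.g. weather, temperature, date
    user_age: int       # one kind of user characteristic information

def assign_props(reports, props_by_area):
    """Toy server loop: pick one prop per reporting device."""
    assignments = {}
    for report in reports:
        # In the patent the choice depends on grouping, map area,
        # environment info and a per-user distribution weight; this
        # demo simply takes the first prop of one fixed area.
        props = props_by_area.get("area-1", [])
        if props:
            assignments[report.device_id] = props[0]
    return assignments

reports = [StatusReport("terminal-1", 22.54, 114.06, "sunny", 30)]
print(assign_props(reports, {"area-1": ["gold-coin"]}))
```

The grouping, area-lookup, and weighting steps the patent describes are elided here and sketched separately below in the text that introduces them.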
  • An embodiment of the method for assigning virtual items in the embodiments of the present application includes:
  • At least one terminal device obtains respective current geographic location and status information in a first scenario.
  • the description of the current geographic location mainly indicates the geographic location where the terminal device is located when it obtains the geographic location and status information.
  • the rewards that the user can obtain in the interactive activity are always affected by the user's location, environmental factors, and so on, so the terminal devices held by the users obtain geographic location and status information.
  • the scene described above can be similar to a level in a treasure hunt game interactive activity, and the first scene can be any level in that activity; the first scene is not specifically limited in this embodiment of the application.
  • the current geographic location described includes, but is not limited to, any of the following: wifiMac (the physical MAC address of the Wi-Fi access point), cellID (operator base station), IP address, longitude and latitude obtained through positioning (such as GPS positioning), and so on.
  • the described status information includes, but is not limited to, any of the following: current environment information and user characteristic information. Current environment information may include, but is not limited to, current temperature conditions, weather conditions, date, and so on; user characteristic information may include, but is not limited to, the user's age, consumption, and so on. It should be understood that the aforementioned current geographic location, status information, current environment information, and user characteristic information may be other information in actual applications, in addition to the situations described above; this is not specifically limited in the embodiments of this application.
  • the current geographic location can be corrected based on information such as the user's current surroundings, so that the current geographic location provided to the server is as accurate and relevant as possible.
  • the actual location of the user can be corrected in the following two ways:
  • each terminal device collects its own first surrounding environment information through the configured camera, and then corrects the current geographic location based on its corresponding Internet Protocol (IP) address and the first surrounding environment information.
  • the first surrounding environment information may be the surrounding environment of the current location of the terminal device, such as a certain building, a certain cell, a certain highway, etc., which will not be specifically described in the embodiments of the present application.
  • the terminal device can obtain its own IP information from the base station covering a certain area, so that it can determine a wider location range based on the IP information; the current geographic location can then be corrected by combining this range with the first surrounding environment information, so that the corrected current geographic location fits the user's actual location to the greatest extent and greatly improves the user's activity experience.
  • Each terminal device obtains multiple historical geographic location information, and collects its own second surrounding environment information through the configured camera; then, each terminal device is based on multiple historical geographic location information and second surrounding environment information respectively Correct the current geographic location.
  • the historical geographic location information can be obtained from a location search server, such as Google search, a search map, and so on.
  • after each terminal device obtains multiple pieces of historical geographic location information, it trains and classifies them based on the KNN (k-nearest-neighbor) classification method, so that the styles of the historical geographic location information included in each category are similar; a style that fits the second surrounding environment information is then selected, such as a style with a matching temperature, to correct the current geographic location.
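A KNN-style correction along these lines could look roughly like the sketch below, where historical fixes carry a "style" label (standing in for attributes such as a temperature band), only fixes matching the current surroundings survive, and the nearest survivors pull the raw fix toward them. The blending step and every name here are illustrative assumptions, not the patent's actual algorithm.

```python
from math import dist

def knn_correct(raw_fix, history, current_style, k=3):
    """Correct a raw (x, y) fix using historical fixes whose style
    matches the current surroundings; purely illustrative."""
    # keep only historical fixes matching the current environment style
    candidates = [pos for pos, style in history if style == current_style]
    if not candidates:
        return raw_fix
    # take the k candidates nearest the raw fix and average them
    nearest = sorted(candidates, key=lambda p: dist(p, raw_fix))[:k]
    avg = tuple(sum(c[i] for c in nearest) / len(nearest) for i in (0, 1))
    # blend the raw fix with the historical consensus
    return tuple((r + a) / 2 for r, a in zip(raw_fix, avg))

history = [((10.0, 10.0), "warm"), ((10.2, 9.8), "warm"), ((50.0, 50.0), "cold")]
print(knn_correct((12.0, 12.0), history, "warm", k=2))
```

A production version would work on latitude/longitude with a proper geodesic distance and a learned, rather than hand-labelled, style classification.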
  • Each terminal device separately sends its current geographic location and status information to the server.
  • the server separately determines the first virtual item of each terminal device based on the current geographic location and state information of each terminal device, and the first virtual item corresponds to the first scene.
  • the server can superimpose virtual props on real video images based on AR technology, thereby synthesizing video images with virtual props and realizing the combination of virtual props and real video images, so that the user can view the virtual props through the terminal device he or she holds.
  • FIG. 3 is a schematic structural diagram of an AR system in an embodiment of this application. It can be seen from FIG. 3 that the AR system is composed of a virtual scene generation unit and interactive devices such as a head-mounted display and a helmet.
  • the virtual scene generation unit is responsible for modeling, managing, and drawing the virtual scene and for managing other peripherals;
  • the head-mounted display is responsible for displaying the fused virtual and real signals;
  • the head tracker tracks the changes of the user's line of sight;
  • the interactive devices are used to realize the input and output of sensory signals and environmental control operation signals.
  • the camera and sensors collect video or images of the real scene and pass them to the server for analysis and reconstruction; combined with the head tracker data, the relative position of the virtual scene and the real scene is analyzed, the coordinate systems are aligned, and fusion calculation of the virtual scene is performed;
  • interactive equipment collects external control signals to realize interactive operations of virtual and real scenes.
  • the fused information will be displayed in the head display in real time and displayed in the user's field of vision.
  • each scene will be configured with different virtual props: the first scene corresponds to the first virtual prop, the second scene corresponds to the second virtual prop, and so on; the higher the scene, the higher the level of its virtual props.
  • each level will correspond to different rewards, and the higher the level, the greater the difficulty, so the rewards will be richer.
  • since the current geographic location and status information of the terminal device affect the probability that the user can obtain richer virtual items, after the server obtains the current geographic location and status information of each terminal device in the first scenario, it determines, based on that information, the first virtual item to be allocated to each terminal device.
  • the description of the current environment information mainly indicates the environment in which the terminal device is located when it obtains the geographic location and status information.
  • the status information can include current environment information and user characteristic information, and the server can determine the first virtual item of each terminal device based on this. When the at least one terminal device is specifically multiple terminal devices, step 203 is specifically as follows:
  • the server groups multiple terminal devices based on the current geographic location of each terminal device to obtain at least one terminal device group;
  • the server determines the area corresponding to each terminal device grouping on the map, where the map is pre-divided into multiple areas with corresponding geographic location ranges;
  • the server separately determines the distribution weight of the corresponding terminal device according to the characteristic information of each user;
  • the server determines the first virtual item of each terminal device based on each current environment information and the distribution weight of each terminal device.
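The last of these steps, choosing a first virtual item from the environment information and the distribution weight, might be sketched as a weighted draw over the props configured for the device's area. The prop schema, the tier-times-weight scoring rule, and all names below are illustrative assumptions, not the patent's concrete rule.

```python
import random

def pick_prop(area_props, env, weight, rng=random):
    """Filter the area's props by current environment, then let the
    device's distribution weight bias a random draw toward
    higher-tier props. Illustrative only."""
    eligible = [p for p in area_props if env in p["environments"]]
    if not eligible:
        return None
    # higher weight makes higher-tier props proportionally likelier
    scores = [p["tier"] * weight for p in eligible]
    return rng.choices(eligible, weights=scores, k=1)[0]["name"]

props = [
    {"name": "bronze-coin", "tier": 1, "environments": {"rain", "sun"}},
    {"name": "gold-coin", "tier": 3, "environments": {"sun"}},
]
print(pick_prop(props, "sun", weight=0.8, rng=random.Random(0)))
```

With only one eligible prop (e.g. `env="rain"` here) the draw is deterministic; with several, the seedable `rng` parameter keeps the behaviour reproducible for testing.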
  • the server has divided the activity range of the interactive activity in advance based on certain preset rules, and then displays the divided activity range in a virtual form based on AR technology.
  • the map may be the result of scaling down the surveyed street graphics according to a certain ratio.
  • the embodiment of the present application may set an area size and divide the map into multiple areas according to the set area size, so that each area on the map includes a corresponding geographic location range.
  • FIG. 4 is a schematic diagram of dividing regions on a map in an embodiment of this application.
  • the map can be divided into multiple circular areas, where the radius of each circular area corresponds to the set radius.
  • neither the number of regions nor the radius of the circles described above will be limited in the embodiments of the present application.
  • the map can also be divided into areas of other shapes in practical applications, such as a square with a side length of x, etc., which will not be specifically limited in the embodiments of the present application.
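As an illustrative sketch of this division, the following assumes the map is split into square cells of a fixed side length; the cell size and the coordinate convention are assumptions, since the embodiment fixes neither:

```python
import math

def region_of(lat: float, lon: float, cell_size: float = 0.01) -> tuple:
    """Map a geographic coordinate to the index of the square region
    containing it. `cell_size` (in degrees) is a hypothetical setting;
    the embodiment only requires that each region cover a fixed
    geographic location range."""
    return (math.floor(lat / cell_size), math.floor(lon / cell_size))
```

A circular division as in FIG. 4 would work the same way in principle: a terminal device belongs to a region when its coordinate falls inside that region's geographic location range.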
  • FIG. 5 is a schematic diagram showing the area corresponding to the virtual item on the map in an embodiment of the application. It can be seen from Fig. 5 that for each circular area, at least one virtual prop can be placed in the circular area, where the black dots represent the virtual props.
  • After the server groups the terminal devices, it can determine the area corresponding to each terminal device group on the map; that is, it determines which area on the map each terminal device in each terminal device group is located in.
  • FIG. 6 is a schematic diagram of determining the distribution weight by age in an embodiment of this application. It can be seen from Figure 6 that as the age increases, the distribution weight determined will be higher.
  • the server can separately determine the distribution weight of each corresponding terminal device based on each user's characteristic information, that is, determine the probability of each terminal device acquiring the corresponding virtual item. According to the user's actual situation, this improves the possibility of acquiring different virtual props in the interactive activity and enhances the user's interest in and experience of the interactive activity.
  • the user characteristic information described above may also include other characteristic information in practical applications, which will not be limited here.
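A minimal sketch of an age-based distribution weight as in FIG. 6 might look as follows; the tier boundaries and weight values are assumptions, since the embodiment only states that the weight increases with age:

```python
def distribution_weight(age: int) -> float:
    """Return a distribution weight that increases with age.

    The tiers and values below are illustrative, not taken from the
    patent, which only requires a monotonically increasing weight."""
    if age < 18:
        return 0.2
    if age < 40:
        return 0.5
    if age < 60:
        return 0.8
    return 1.0
```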
  • the server determines the first virtual item of each terminal device based on each current environment information and the distribution weight of each terminal device.
  • When there are virtual props in the area corresponding to a terminal device grouping, different virtual props will be allocated because the current environments differ. The server therefore still needs to screen candidate virtual props out of the prop pool in combination with the current environment information. On this basis, the server determines the first virtual item of each terminal device from the candidate virtual items based on the previously determined distribution weight of each terminal device.
  • For example, when it is currently raining, the server will find all rain-related candidate virtual items from the item pool in the area corresponding to the terminal device grouping, such as a small umbrella, a large umbrella, or a car. The server will then, for example, determine the car as the first virtual item of terminal device 2, the large umbrella as the first virtual item of terminal device 1, and the small umbrella as the first virtual item of terminal device 3. Combining user characteristic information, current environment information, and other state information in this embodiment therefore enriches the user's experience of interactive activities and can fully display the effect experience of the interactive props.
  • the aforementioned current environment information may include, but is not limited to, temperature, weather, or date; temperature includes, but is not limited to, high temperature, low temperature, etc.; weather includes, but is not limited to, sunny, rainy, etc.; and the date may be a holiday, a non-holiday, etc. This will not be specifically limited in the embodiments of this application.
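Under the assumption that each prop carries a list of matching weather conditions and a value rank (hypothetical fields not specified by the patent), the screening-then-weighting step described above can be sketched as:

```python
def assign_items(devices, item_pool, weather):
    """Screen candidate props matching the current weather out of the
    item pool, then hand higher-value candidates to devices with higher
    distribution weights. Field names and the round-robin fallback for
    more devices than candidates are illustrative assumptions."""
    candidates = sorted(
        (p for p in item_pool if weather in p["conditions"]),
        key=lambda p: p["value"], reverse=True)
    ranked = sorted(devices, key=lambda d: d["weight"], reverse=True)
    return {d["id"]: candidates[i % len(candidates)]["name"]
            for i, d in enumerate(ranked)}
```

With a rainy-day pool of a car, a large umbrella, and a small umbrella, the highest-weight device receives the car, matching the example above.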
  • The relative position between any two terminal devices can be determined based on the current geographic location of each terminal device; the server then groups the at least one terminal device based on the relative position between any two terminal devices.
  • the current geographic locations of any two terminal devices may be calculated by the server through a breadth-first algorithm to obtain the relative position between any two terminal devices.
  • FIG. 7 is a schematic diagram of grouping by a breadth-first algorithm in an embodiment of this application.
  • Taking V1 to V8, which respectively represent terminal device 1 to terminal device 8, as an example: first, V1 is added to the queue; then V1 is taken out and marked as true (that is, visited), and its unvisited adjacent points are added to the queue, giving [V2 V3]; next, V2 is taken out and marked as true (that is, visited), and its unvisited adjacent points are added to the queue, giving [V3 V4 V5]; and so on, until all points have been visited.
  • In this way, any two adjacent terminal devices can be divided into the same group.
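The breadth-first grouping of FIG. 7 can be sketched as follows; the distance threshold that decides whether two devices count as adjacent is an assumption, since the patent does not fix one:

```python
from collections import deque

def group_devices(positions, threshold=50.0):
    """Breadth-first grouping: devices whose chain of pairwise
    distances stays within `threshold` end up in the same group.
    `positions` maps device id -> (x, y); the threshold value and the
    planar distance are illustrative assumptions."""
    def near(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= threshold
    ids = list(positions)
    visited, groups = set(), []
    for start in ids:
        if start in visited:
            continue
        group, queue = [], deque([start])
        visited.add(start)  # mark as true, i.e. visited
        while queue:
            v = queue.popleft()
            group.append(v)
            for u in ids:  # enqueue unvisited adjacent points
                if u not in visited and near(positions[v], positions[u]):
                    visited.add(u)
                    queue.append(u)
        groups.append(sorted(group))
    return groups
```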
  • the server determines the geographic location information at which each first virtual item is released.
  • Based on different needs, the organizer of the interactive event will, in advance, randomly place virtual props or designate them to be placed at any location in the activity area.
  • For example, virtual props can be placed according to the population distribution density: generally speaking, the greater the crowd density of an area, the more virtual props will be placed at the corresponding location. In actual applications, other placement methods can also be used, which will not be specifically limited here.
  • the server can generate a correspondence relationship based on the virtual props that are randomly placed or designated to be placed and the geographic location information of the respective placements, and store the correspondence relationship in the database.
  • After the server determines the first virtual item that can be allocated to each terminal device, it can determine the geographic location information at which each first virtual item is released based on the corresponding correspondence.
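The stored correspondence can be as simple as a keyed mapping from each placed prop to its release location; the entries below are illustrative stand-ins for records in the database:

```python
# Correspondence generated when the props are placed; item names and
# locations here are hypothetical examples.
placements = {
    "large umbrella": "corner on the first floor of the shopping mall",
    "car": "entrance of the activity area",
}

def placed_location(item: str) -> str:
    """Look up the geographic location at which an item was released."""
    return placements[item]
```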
  • the server sends the geographic location information where the first virtual item corresponding to the first terminal device is released to the first terminal device, where the first terminal device is any one of the at least one terminal device.
  • The geographic location information at which each first virtual item is released can be used to instruct the respective terminal device to obtain the corresponding first virtual item from the corresponding geographic location. That is to say, the placed geographic location information of each first virtual item indicates the geographic location where that item was placed, for example, a corner on the first floor of a shopping mall. Therefore, after the server obtains the geographic location information at which each first virtual item is released, it can send this information to the corresponding terminal devices; specifically, it can send it to any first terminal device among the at least one terminal device, so that the first terminal device can obtain the corresponding first virtual item from the corresponding geographic location under the indication of the geographic location information at which its first virtual item is released.
  • the server may notify the corresponding first terminal device of the geographic location information of the first virtual item corresponding to the first terminal device through a voice message or a text message.
  • After the server determines the geographic location information at which each first virtual item is placed, it carries that information in a voice message or text message and sends the voice message or text message, so that the use of voice or text improves interactivity.
  • FIG. 8 which is a schematic diagram of feedback of posted geographic location information by voice in an embodiment of this application.
  • FIG. 9 is a schematic diagram of feedback of posted geographic location information in text in an embodiment of this application. It should be understood that in actual applications, besides notifying the geographic location information of the first virtual item corresponding to the first terminal device through a voice message or text message, other notification messages may also be used; this is not specifically limited in this application.
  • the first terminal device obtains the corresponding first virtual item from the corresponding geographic location according to the geographic location information at which the first virtual item corresponding to the first terminal device is placed.
  • If the first terminal device receives, from the server, the geographic location information at which its corresponding first virtual item is placed, it can obtain the first virtual item at the corresponding geographic location by following the indication of that geographic location information.
  • FIG. 10 is a schematic diagram of explicitly or implicitly displaying the first virtual item in an embodiment of the application. It can be seen from FIG. 10 that if the first virtual item corresponding to terminal device 1 appears in an explicitly displayed mode, terminal device 1 can directly obtain the first virtual item. However, if the first virtual item corresponding to terminal device 2 appears in an implicit manner, such as locked or encrypted, then terminal device 2 needs to perform an unlock operation on the first virtual item in the locked or encrypted state, for example, completing a task such as singing required by the unlock operation, so that the first terminal device can obtain the first virtual prop after unlocking succeeds. This fully enhances the user's experience of and interest in the entire interactive activity.
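The explicit-versus-implicit acquisition described above can be sketched as a small state check; the state names and the task flag are illustrative assumptions:

```python
def try_obtain(item):
    """Explicitly displayed items are obtained directly; implicit
    (locked or encrypted) items require the unlock task to have been
    completed first. Field names are hypothetical."""
    if item["state"] == "displayed":
        return True
    if item["state"] in ("locked", "encrypted"):
        return item.get("task_completed", False)
    return False
```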
  • Specifically, the first terminal device may receive the geographic location information of its corresponding first virtual item through a voice message or text message sent by the server. The first terminal device can then play the voice message, or display the text message on the display interface, so that the user corresponding to the first terminal device obtains the first virtual item from the corresponding geographic location under the indication of the voice message, or obtains the first virtual item from the corresponding geographic location under the instruction of the text message. The use of voice or text thus improves interactivity.
  • each terminal device can obtain the voice message or text message of its user, and then send it to the server, so that the server feeds back the corresponding content.
  • FIG. 11 is a voice interaction system provided in an embodiment of this application. It can be seen from Figure 11 that the terminal device collects a digital voice signal and sends it to the server after endpoint detection, noise reduction, and feature extraction. The server then trains the speech database, language database, etc. using speech linguistics knowledge, signal processing technology, data mining technology, and statistical modeling methods to obtain an acoustic model or language model, and decodes the feature-extracted digital voice signal based on the acoustic model or language model to obtain the recognition result, that is, the text information.
  • FIG. 12 is another voice interaction system provided in an embodiment of this application.
  • The voice collected by the terminal device undergoes feature extraction, and then the server recognizes and decodes the feature-extracted voice based on the expectation-maximization (EM) training algorithm, word division, and an acoustic model, so as to obtain the recognition result.
  • If the user has any questions while acquiring the first virtual item, the user can inform the server through voice or text, and the server will feed back the corresponding guidance process; this will not be limited in the embodiment of this application.
  • FIG. 13 is a schematic diagram of an embodiment of the method for allocating virtual items provided by this embodiment.
  • Can include:
  • At least one terminal device obtains respective current geographic location and status information in a first scenario.
  • Each terminal device sends its current geographic location and status information to the server.
  • the server separately determines a first virtual item of each terminal device based on the current geographic location and state information of each terminal device, and the first virtual item corresponds to the first scene.
  • the server determines the geographic location information at which each first virtual item is released.
  • the server sends the geographic location information where the first virtual item corresponding to the first terminal device is released to the first terminal device, where the first terminal device is any one of the at least one terminal device.
  • the first terminal device obtains the corresponding first virtual item from the corresponding geographic location according to the geographic location information at which the first virtual item corresponding to the first terminal device is placed.
  • steps 501-506 are similar to steps 201-206 described in FIG. 2, and the details will not be repeated here.
  • Each terminal device obtains its own handover instruction in the first scenario.
  • After each first terminal device in the at least one terminal device obtains its corresponding first virtual item from the corresponding geographic location, it means that the user has received the corresponding reward in the current first scene. The user will then enter the next scene and continue to obtain the corresponding virtual props there.
  • the switching operation can be triggered by clicking the switching button, inputting voice, etc., so as to obtain the corresponding switching instruction, which will not be specifically limited in the embodiment of the present application.
  • Each terminal device sends a switching instruction to the server respectively.
  • After obtaining the corresponding first virtual item in the first scene, the terminal device sends a switching instruction to the server, so that the server can switch the first scene to the second scene under the indication of the switching instruction. Each terminal device thus enters the second scene and continues to obtain the corresponding second virtual props in the second scene.
  • the server switches the first scene to the second scene according to each switching instruction, where the second scene corresponds to a second virtual item, and the level of the second virtual item is higher than the level of the first virtual item.
  • the interactive activity may include at least one scene, and each scene will be configured with different virtual props.
  • FIG. 14 is a schematic diagram of changing a scene in an embodiment of this application. It can be seen from Figure 14 that as the level of the scene increases, the level of the corresponding virtual item will also increase.
  • For example, the first virtual item in the first scene may be a raincoat, the second virtual item in the second scene may be an umbrella, and the third virtual item in the third scene may be a car, etc.; this will not be specifically limited in this embodiment.
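The scene-to-prop progression of FIG. 14 can be sketched as a simple mapping; the concrete props follow the raincoat/umbrella/car example above, and the configuration itself is an illustrative assumption:

```python
# Illustrative configuration: the prop level rises with the scene level.
SCENE_ITEMS = {1: "raincoat", 2: "umbrella", 3: "car"}

def item_for_scene(level: int) -> str:
    """Return the virtual item configured for a given scene level."""
    return SCENE_ITEMS[level]
```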
  • each terminal device can obtain its current geographic location and status information in the second scene and then send it to the server.
  • the server separately determines the second virtual item of each terminal device in the second scene based on the current geographic location and state information of each terminal device.
  • The server determines, from the stored correspondence between each second virtual item in the second scene and its placed geographic location information, the geographic location information at which each second virtual item is placed, so as to send to the first terminal device the geographic location information at which the second virtual item corresponding to the first terminal device is placed.
  • the first terminal device obtains the corresponding second virtual item from the corresponding geographic location according to the geographic location information at which the second virtual item corresponding to the first terminal device is placed.
  • the embodiments of the present application may divide the device into functional modules according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. It should be noted that the division of modules in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 15 is a schematic diagram of an embodiment of the server 60 provided in the embodiment of the present application.
  • the server 60 includes:
  • the receiving unit 601 is configured to obtain geographic location and status information of at least one terminal device in the first scenario
  • the determining unit 602 is configured to determine a first virtual item corresponding to each terminal device based on the geographic location and state information of each terminal device received by the receiving unit 601, and the first virtual item corresponds to the first scene;
  • the determining unit 602 is configured to determine the geographic location information at which the first virtual props are respectively released;
  • the sending unit 603 is configured to send, to the first terminal device, the geographic location information at which the first virtual item corresponding to the first terminal device is released, so as to instruct it to obtain the corresponding first virtual item; the first terminal device is any one of the at least one terminal device.
  • the status information includes the environment information and user characteristic information where the terminal device is located
  • the at least one terminal device includes multiple terminal devices
  • the determining unit 602 may include:
  • the grouping module is configured to group multiple terminal devices based on the geographic locations of the multiple terminal devices received by the receiving unit 601 to obtain at least one terminal device grouping;
  • the determining module is used to determine the corresponding area on the map of the terminal device grouping obtained by the grouping module, wherein the map is pre-divided into a plurality of areas with corresponding geographic location ranges;
  • the determining module is configured to determine the distribution weight of the corresponding terminal device according to the user characteristic information received by the receiving unit 601;
  • the determining module is used to determine the first virtual item of the corresponding terminal device based on the environment information and the distribution weight of the terminal device when there are virtual items in the area corresponding to the terminal device grouping.
  • the grouping module may include:
  • the determining sub-module is used to determine the relative position between any two terminal devices based on the geographic locations of multiple terminal devices;
  • the grouping sub-module is used to group multiple terminal devices based on the relative position between any two terminal devices determined by the determining sub-module.
  • the determining sub-module is used to perform breadth-first calculation on the geographic locations of any two terminal devices to obtain the relative position between any two terminal devices.
  • the sending unit 603 may include:
  • the sending module is configured to send a voice message or a text message to the first terminal device, where the voice message or the text message carries the geographic location information of the first virtual item corresponding to the first terminal device.
  • FIG. 16 is a schematic diagram of another embodiment of the server 60 provided in this embodiment of the present application.
  • the server 60 may further include:
  • the receiving unit 601 is also configured to receive a switching instruction sent by a terminal device
  • the switching unit 604 is configured to switch the first scene where the terminal device is located to the second scene according to the switching instruction received by the receiving unit 601, wherein the second scene corresponds to a second virtual item, and the level of the second virtual item is higher than the level of the first virtual item.
  • FIG. 17 is a schematic diagram of an embodiment of a terminal device 70 provided in an embodiment of the application.
  • the terminal device 70 may include:
  • the acquiring unit 701 is configured to acquire geographic location and state information in the first scenario
  • the sending unit 702 is configured to send the geographic location and status information to the server, so that the server can determine the first virtual item corresponding to each terminal device based on the geographic location and status information of each terminal device, the first virtual item corresponding to the first scene;
  • the receiving unit 703 is configured to receive the geographic location information of the first virtual item that is sent by the server;
  • the obtaining unit 701 is configured to obtain the corresponding first virtual item from the corresponding geographic location according to the geographic location information, received by the receiving unit 703, at which the first virtual item is released.
  • the terminal device 70 further includes:
  • the collection unit is used to collect the first surrounding environment information through the configured camera
  • the first correction unit is used to correct the geographic location based on the corresponding Internet Protocol IP address and the first surrounding environment information.
  • the terminal device 70 further includes:
  • the acquiring unit is configured to acquire multiple historical geographic location information, and collect second surrounding environment information through the configured camera;
  • the second correction unit is configured to correct the geographic location based on the multiple historical geographic location information and the second surrounding environment information obtained by the acquiring unit.
  • the receiving unit 703 may include:
  • the receiving module is configured to receive a voice message or text message sent by the server, where the voice message or text message carries the geographic location information of the first virtual item corresponding to the first terminal device.
  • the obtaining unit 701 is further configured to obtain a switching instruction in the first scenario
  • the sending unit 702 is configured to send a switching instruction to the server, so that the server switches the first scene to the second scene, where the second scene corresponds to a second virtual item, and the level of the second virtual item is higher than the level of the first virtual item.
  • FIG. 18 is a schematic diagram of the hardware structure of the communication device in an embodiment of the present application. As shown in FIG. 18, the communication device may include:
  • the communication device includes at least one processor 801, a communication line 807, a memory 803, and at least one communication interface 804.
  • the processor 801 can be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of this application.
  • the communication line 807 may include a path to transmit information between the above-mentioned components.
  • the communication interface 804 uses any device such as a transceiver to communicate with other devices or communication networks, such as Ethernet, radio access network (RAN), wireless local area networks (WLAN), etc. .
  • The memory 803 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions. The memory may exist independently and be connected to the processor through the communication line 807, or the memory may be integrated with the processor.
  • the memory 803 is used to store computer-executable instructions for executing the solution of the present application, and the processor 801 controls the execution.
  • the processor 801 is configured to execute computer-executable instructions stored in the memory 803, so as to implement the method for allocating virtual props provided in the foregoing embodiments of the present application.
  • the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
  • the communication device may include multiple processors, such as the processor 801 and the processor 802 in FIG. 18.
  • each of these processors can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
  • the communication apparatus may further include an output device 805 and an input device 806.
  • the output device 805 communicates with the processor 801 and can display information in a variety of ways.
  • the input device 806 communicates with the processor 801, and can receive user input in a variety of ways.
  • the input device 806 may be a mouse, a touch screen device, or a sensing device.
  • the aforementioned communication device may be a general-purpose device or a dedicated device.
  • the communication device may be a desktop computer, a portable computer, a network server, a wireless terminal device, an embedded device, or a device with a similar structure in FIG. 18.
  • the embodiment of the present application does not limit the type of the communication device.
  • the above-mentioned receiving unit 601, acquiring unit 701, and receiving unit 703 can all be realized by the input device 806; the sending unit 603 and the sending unit 702 can be realized by the output device 805; and the determining unit 602 and the switching unit 604 can be realized by the processor 801 or the processor 802.
  • an embodiment of the present application also provides a storage medium, where the storage medium is used to store a computer program, and the computer program is used to execute the method provided in the foregoing embodiment.
  • the embodiments of the present application also provide a computer program product including instructions, which when run on a computer, cause the computer to execute the method provided in the above-mentioned embodiments.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative. For example, the division of units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to make a computer device (which can be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.


Abstract

A virtual prop allocation method and related apparatuses, which greatly enrich users' experience and enjoyment in interactive activities and present a complete effect experience in combination with interactive props. The method includes: acquiring the geographic location and status information of at least one terminal device in a first scene; determining, based on the geographic location and status information of each terminal device, a first virtual prop corresponding to each terminal device, the first virtual prop corresponding to the first scene; determining the geographic location information at which each first virtual prop is dropped; and sending, to a first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped, so as to instruct it to acquire the corresponding first virtual prop, the first terminal device being any one of the at least one terminal device.

Description

Virtual prop allocation method and related apparatus
This application claims priority to Chinese patent application No. 202010010741.3, entitled "Virtual prop allocation method, server, and terminal device", filed with the China National Intellectual Property Administration on January 6, 2020, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of augmented reality (AR) technologies, and in particular to virtual prop allocation.
Background
Traditional offline marketing promotion is often constrained by costs such as venue, foot traffic, and budget, and is usually limited to simple offline interactive promotion activities. With the development of the Internet and information technology, Internet technology also plays an important role in publicity and promotion; after Internet technology was integrated into promotional interactive activities, a series of online interactive activities such as prize-wheel redemptions and prize-wheel lucky draws gradually emerged.
Summary
Embodiments of this application provide a virtual prop allocation method and related apparatuses, which greatly enrich users' experience and enjoyment in interactive activities and present a complete effect experience in combination with interactive props.
In one aspect, an embodiment of this application provides a virtual prop allocation method, which may include:
acquiring the geographic location and status information of at least one terminal device in a first scene;
determining, based on the geographic location and status information of each terminal device, a first virtual prop corresponding to each terminal device, where the first virtual prop corresponds to the first scene;
determining the geographic location information at which each first virtual prop is dropped; and
sending, to a first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped, so as to instruct the first terminal device to acquire the corresponding first virtual prop, where the first terminal device is any one of the at least one terminal device.
In another aspect, an embodiment of this application provides a virtual prop allocation method, which may include:
acquiring the geographic location and status information of the terminal device in a first scene;
sending the geographic location and status information to a server, so that the server determines, based on the geographic location and status information of each terminal device, a first virtual prop corresponding to each terminal device, where the first virtual prop corresponds to the first scene;
receiving, from the server, the geographic location information at which the first virtual prop is dropped; and
acquiring the terminal device's own first virtual prop from the corresponding geographic location according to the geographic location information at which the first virtual prop is dropped.
In another aspect, an embodiment of this application provides a server, which may include:
a receiving unit, configured to acquire the geographic location and status information of at least one terminal device in a first scene;
a determining unit, configured to determine, based on the geographic location and status information of each terminal device received by the receiving unit, a first virtual prop corresponding to each terminal device, where the first virtual prop corresponds to the first scene;
the determining unit being further configured to determine the geographic location information at which each first virtual prop is dropped; and
a sending unit, configured to send, to a first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped, so as to instruct the first terminal device to acquire the corresponding first virtual prop, where the first terminal device is any one of the at least one terminal device.
Optionally, the status information includes environment information of the terminal device and user feature information, the at least one terminal device includes multiple terminal devices, and the determining unit includes:
a grouping module, configured to group the multiple terminal devices based on their geographic locations received by the receiving unit, to obtain at least one terminal device group;
a determining module, configured to determine the region on a map corresponding to each terminal device group obtained by the grouping module, where the map is pre-divided into multiple regions each having a corresponding geographic location range;
the determining module being further configured to determine the allocation weight of each corresponding terminal device according to the user feature information received by the receiving unit; and
the determining module being further configured to: when virtual props exist in the region corresponding to a terminal device group, determine the first virtual prop of each corresponding terminal device based on the environment information and the allocation weight of the terminal device.
Optionally, the grouping module may include:
a determining submodule, configured to determine the relative position between any two terminal devices based on the geographic locations of the multiple terminal devices; and
a grouping submodule, configured to group the multiple terminal devices based on the relative position between any two terminal devices determined by the determining submodule.
Optionally, the determining submodule computes the geographic locations of any two terminal devices by a breadth-first algorithm, to obtain the relative position between the two terminal devices.
Optionally, the sending unit may include:
a sending module, configured to send a voice message or a text message to the first terminal device, where the voice message or text message carries the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped.
Optionally, the server further includes:
the receiving unit, further configured to receive a switching instruction sent by the terminal device; and
a switching unit, configured to switch, according to the switching instruction received by the receiving unit, the first scene in which the terminal device is located to a second scene, where the second scene corresponds to a second virtual prop, and the level of the second virtual prop is higher than that of the first virtual prop.
In another aspect, an embodiment of this application provides a terminal device, which may include:
an acquiring unit, configured to acquire the geographic location and status information of the terminal device in a first scene;
a sending unit, configured to send the geographic location and status information to a server, so that the server determines, based on the geographic location and status information of each terminal device, a first virtual prop corresponding to each terminal device, where the first virtual prop corresponds to the first scene;
a receiving unit, configured to receive the geographic location information, sent by the server, at which the first virtual prop is dropped; and
the acquiring unit being further configured to acquire the terminal device's own first virtual prop from the corresponding geographic location according to the geographic location information received by the receiving unit.
Optionally, the terminal device may further include:
a collecting unit, configured to collect first surrounding environment information through a configured camera; and
a first correction unit, configured to correct the geographic location based on the corresponding Internet Protocol (IP) address and the first surrounding environment information.
Optionally, the terminal device may further include:
the acquiring unit, further configured to acquire multiple pieces of historical geographic location information and to collect second surrounding environment information through the configured camera; and
a second correction unit, configured to correct the geographic location based on the multiple pieces of historical geographic location information obtained by the acquiring unit and the second surrounding environment information.
Optionally, the receiving unit may include:
a receiving module, configured to receive a voice message or a text message sent by the server, where the voice message or text message carries the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped.
Optionally, the acquiring unit is further configured to acquire a switching instruction in the first scene; and
the sending unit is further configured to send the switching instruction to the server, so that the server switches the first scene to a second scene, where the second scene corresponds to a second virtual prop, and the level of the second virtual prop is higher than that of the first virtual prop.
In another aspect, an embodiment of this application provides a server, including a processor and a memory. The memory is configured to store program instructions; when the server runs, the processor executes the program instructions stored in the memory, so that the server performs the method of the aspects described above.
In another aspect, an embodiment of this application provides a terminal device, including a processor and a memory. The memory is configured to store program instructions; when the terminal device runs, the processor executes the program instructions stored in the memory, so that the terminal device performs the method of the aspects described above.
In another aspect, an embodiment of this application provides a computer-readable storage medium configured to store a computer program, the computer program being used to perform the method of the aspects described above.
In another aspect, an embodiment of this application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method of the aspects described above.
As can be seen from the above technical solutions, the embodiments of this application have the following advantages:
In the embodiments of this application, after the geographic location and status information of at least one terminal device in a first scene are acquired, the first virtual prop corresponding to each terminal device is determined based on the geographic location and status information of each terminal device, and the geographic location information at which each corresponding first virtual prop is dropped is sent to each terminal device, so that each terminal device can acquire its corresponding first virtual prop from the corresponding geographic location according to the acquired geographic location information. Determining the virtual props that each terminal device can be allocated in different scenes by combining geographic location with status information greatly enriches users' experience and enjoyment in interactive activities, and presents a complete effect experience in combination with interactive props.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings described below show merely some embodiments of this application.
FIG. 1 is a schematic diagram of the system architecture for virtual prop allocation in an embodiment of this application;
FIG. 2 is a schematic diagram of an embodiment of the virtual prop allocation method in an embodiment of this application;
FIG. 3 is a schematic structural diagram of the AR system in an embodiment of this application;
FIG. 4 is a schematic diagram of dividing a map into regions in an embodiment of this application;
FIG. 5 is a schematic diagram showing, on a map, the regions corresponding to virtual props in an embodiment of this application;
FIG. 6 is a schematic diagram of determining allocation weights by age in an embodiment of this application;
FIG. 7 is a schematic diagram of grouping by a breadth-first algorithm in an embodiment of this application;
FIG. 8 is a schematic diagram of feeding back the dropped geographic location information by voice in an embodiment of this application;
FIG. 9 is a schematic diagram of feeding back the dropped geographic location information by text in an embodiment of this application;
FIG. 10 is a schematic diagram of presenting the first virtual prop explicitly or implicitly in an embodiment of this application;
FIG. 11 is a voice interaction system provided in an embodiment of this application;
FIG. 12 is another voice interaction system provided in an embodiment of this application;
FIG. 13 is a schematic diagram of another embodiment of the virtual prop allocation method in an embodiment of this application;
FIG. 14 is a schematic diagram of changing scenes in an embodiment of this application;
FIG. 15 is a schematic diagram of an embodiment of the server provided in an embodiment of this application;
FIG. 16 is a schematic diagram of another embodiment of the server provided in an embodiment of this application;
FIG. 17 is a schematic diagram of an embodiment of the terminal device provided in an embodiment of this application;
FIG. 18 is a schematic diagram of the hardware structure of the communication apparatus in an embodiment of this application.
Detailed Description
Embodiments of this application provide a virtual prop allocation method, a server, and a terminal device, which greatly enrich users' experience and enjoyment in interactive activities and present a complete effect experience in combination with interactive props.
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The terms "first", "second", "third", "fourth", and the like (if any) in the specification, claims, and accompanying drawings of this application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device. The naming or numbering of steps in this application does not mean that the steps in a method flow must be performed in the temporal or logical order indicated by the naming or numbering; named or numbered steps may be performed in a different order, as long as the same or a similar technical effect can be achieved.
Traditional online promotional interactive activities always take the form of prize redemptions, lucky draws, and the like, and each draw targets only one user, so they cannot bring strong interactive fun to more users. In addition, the traditional approach simply uses map positioning to implement a virtual redemption or draw and then informs the user of the prize won; because factors such as the user's frequent movement affect prize claiming, the user's actual situation cannot be perceived. As a result, the user interaction experience brought by existing interaction methods is poor, and it is difficult to present a complete effect experience in combination with interactive props.
Therefore, to solve the above problems, an embodiment of this application provides a virtual prop allocation method, which can be applied to the system architecture shown in FIG. 1. Referring to FIG. 1, a schematic diagram of the system architecture for virtual prop allocation in an embodiment of this application, the system includes at least one terminal device and a server. Multiple users can join the same interactive activity together, and each user can hold one terminal device, for example, terminal device 1, terminal device 2, terminal device 3, and terminal device 4. Each terminal device acquires its own geographic location and status information, where the status information may include but is not limited to the user's current environment and user feature information related to the user. The terminal device then sends the geographic location and status information to the server, so that after acquiring each user's geographic location and status information in real time, the server determines the virtual prop allocated to each terminal device; different scenes correspond to different virtual props, that is, different draw rewards. The server then determines the drop location of each virtual prop according to configured rules and sends it to the corresponding terminal device through presentation means such as voice broadcast or text display, so that the terminal device guides the corresponding user, according to the drop location, to go and obtain the virtual prop.
The server can be regarded as integrating at least AR processing capabilities, LBS (location-based service) capabilities, and speech processing capabilities.
The terminal device described above integrates at least a camera, camera sensors, and the like. In practical applications, the terminal device includes but is not limited to a mobile phone, a mobile terminal, a tablet computer, or a laptop computer, or a wearable smart device with communication functions, such as a smart watch or a smart band; this is not specifically limited in the embodiments of this application.
To facilitate a better understanding of the solutions proposed in the embodiments of this application, the specific procedure is introduced below. Referring to FIG. 2, an embodiment of the virtual prop allocation method in an embodiment of this application includes the following steps.
201. At least one terminal device acquires its current geographic location and status information in a first scene.
It should be noted that, for ease of description, in the following embodiments the term "current geographic location" mainly refers to the geographic location at which a terminal device is located when it acquires the geographic location and status information.
In this embodiment, after at least one user joins the same interactive activity, the rewards a user can obtain in the activity are always affected by the user's location, environmental factors, and the like; therefore, the terminal device held by each user acquires the geographic location and status information.
It should be noted that the scenes described above may be similar to levels in a treasure-hunt interactive game, and the first scene may be any level in that game; the first scene is not specifically limited in the embodiments of this application.
It is further noted that the current geographic location described includes but is not limited to any of the following: wifiMac (the physical MAC address of Wi-Fi), cellID (operator base station), IP address, or latitude and longitude obtained through positioning (e.g., GPS). The status information described includes but is not limited to any of the following: current environment information and user feature information. The current environment information may include but is not limited to the current temperature, weather, or date, and the user feature information may include but is not limited to the user's age, consumption behavior, and the like. It should be understood that, in addition to the cases described above, the current geographic location, status information, current environment information, and user feature information may also be other information in practical applications; this is not specifically limited in the embodiments of this application.
Optionally, in some other embodiments, the accuracy of the current geographic location can be corrected using information such as the user's current surroundings, so that the current geographic location provided to the server is as accurate as possible and best matches the user's actual position. The correction can mainly be performed in the following two ways.
Way 1: each terminal device collects its first surrounding environment information through its configured camera, and then corrects its current geographic location based on its corresponding Internet Protocol (IP) address and the first surrounding environment information.
That is, the first surrounding environment information may be the surroundings of the terminal device's current position, such as a nearby building, residential compound, or road, which is not specifically limited in the embodiments of this application. The terminal device can obtain its IP information from a base station covering a certain area, so the terminal device can determine a relatively broad position range based on the IP information; combined with the first surrounding information, the current geographic location can then be corrected, so that the corrected current geographic location best matches the user's actual position, greatly improving the user's activity experience.
Way 2: each terminal device acquires multiple pieces of historical geographic location information and collects its second surrounding environment information through the configured camera; then each terminal device corrects its current geographic location based on the multiple pieces of historical geographic location information and the second surrounding environment information.
In this embodiment, the historical geographic location information can be obtained from a location search server, for example, Google search or a search map. After obtaining the multiple pieces of historical geographic location information, each terminal device trains and classifies them based on the k-nearest-neighbors (kNN) classification method, so that the historical geographic location information included in each class has a similar style; a style matching the second surrounding environment information, for example, a "warm" style, is then selected to correct the current geographic location.
It should be noted that, besides Way 1 and Way 2, other ways of correcting the current geographic location may also be used in practical applications; this is not limited in the embodiments of this application.
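A correction of this kind can be sketched in a few lines of Python. This is a minimal stand-in under stated assumptions: fixes are plain (x, y) pairs, the kNN-style classification of historical fixes is replaced by the centroid of the k nearest historical fixes, and the 50/50 blending factor is illustrative, not taken from the patent:

```python
def correct_position(current, history, k=3):
    """Pull a raw position fix toward the k nearest historical fixes.

    `current` is an (x, y) pair; `history` is a list of (x, y) pairs.
    Returns the blend of the raw fix and the centroid of its k nearest
    historical neighbors.
    """
    def dist2(p, q):
        # squared Euclidean distance is enough for ranking neighbors
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    nearest = sorted(history, key=lambda h: dist2(current, h))[:k]
    cx = sum(h[0] for h in nearest) / len(nearest)
    cy = sum(h[1] for h in nearest) / len(nearest)
    # blend the raw reading with the historical centroid
    return ((current[0] + cx) / 2, (current[1] + cy) / 2)
```

In a real deployment the historical fixes would come from a location search service and the blend would be weighted by the confidence of each source, but the shape of the computation is the same.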
202. Each terminal device sends its current geographic location and status information to the server.
203. The server determines, based on the current geographic location and status information of each terminal device, the first virtual prop of each terminal device, where the first virtual prop corresponds to the first scene.
In this embodiment, based on AR technology, the server can superimpose virtual props on real video images to synthesize video images containing virtual props, combining virtual props with real video images; users then search for virtual props through the video images displayed on their terminal devices. Referring to FIG. 3, a schematic structural diagram of the AR system in an embodiment of this application, the AR system consists of a virtual scene generation unit and interaction devices such as a head-mounted display and a helmet. The virtual scene generation unit is responsible for modeling, managing, and rendering the virtual scene and for managing other peripherals; the head-mounted display is responsible for displaying the signal after virtual-real fusion; the head tracker tracks changes in the user's gaze; and the interaction devices are used for inputting and outputting sensory signals and environment-control operation signals. First, the camera and sensors collect video or images of the real scene and pass them to the server for analysis and reconstruction; combined with the head tracker's data, the relative positions of the virtual scene and the real scene are analyzed, the coordinate systems are aligned, and the fusion computation of the virtual scene is performed. The interaction devices collect external control signals to enable interactive operation of the virtual-real fused scene. The fused information is displayed in real time on the head-mounted display and presented in the user's field of view.
In addition, each scene is configured with different virtual props; for example, the first scene corresponds to a first virtual prop, the second scene corresponds to a second virtual prop, and so on. Notably, the higher the level of a scene, the higher the level of its virtual props. For example, similar to levels in a treasure-hunt interactive game, each level corresponds to different rewards; the higher the level, the greater the difficulty, and therefore the richer the rewards.
Since the current geographic location and status information of a terminal device further affect the probability of the user obtaining richer virtual props, after obtaining the current geographic location and status information of each terminal device in the first scene, the server determines, based on that information, the first virtual prop to be allocated to each terminal device.
It should be noted that, for ease of description, in the following embodiments the term "current environment information" mainly refers to the environment in which a terminal device is located when it acquires the geographic location and status information. Optionally, in some other embodiments, since the status information may include current environment information and user feature information, the server can determine the first virtual prop of each terminal device based on these. When the at least one terminal device is specifically multiple terminal devices, step 203 is specifically as follows:
the server groups the multiple terminal devices based on the current geographic location of each terminal device, to obtain at least one terminal device group;
the server determines the region on a map corresponding to each terminal device group, where the map is pre-divided into multiple regions each having a corresponding geographic location range;
the server determines the allocation weight of each corresponding terminal device according to each piece of user feature information;
when virtual props exist in the region corresponding to each terminal device group, the server determines each terminal device's first virtual prop based on each piece of current environment information and each terminal device's allocation weight.
This can be understood as follows: when a massive number of users participate in the same interactive activity, no user can perceive the situation of all the other users, so the users need to be grouped by region. Specifically, the server groups the at least one terminal device based on the current geographic location of each terminal device, thereby obtaining at least one terminal device group.
The server has divided the activity area of the interactive activity in advance based on certain preset rules and then presents the divided activity area as a virtual map based on AR technology. The map may be a graphic obtained by scaling down surveyed street graphics at a certain ratio. In this embodiment of this application, a region size can be set, and the map is divided into multiple regions according to the set region size, so that the size of each region on the map corresponds to the set region size and each region includes a corresponding geographic location range. Referring to FIG. 4, a schematic diagram of dividing a map into regions in an embodiment of this application: taking circular regions of radius a as an example, after the radius of the circular region is set, the map can be divided into multiple circular regions, each with a radius corresponding to the set radius. It should be noted that neither the number of regions nor the circle radius described above is limited in the embodiments of this application. Besides circular regions, the map can also be divided into regions of other shapes in practical applications, for example, squares with side length x; this is not specifically limited in the embodiments of this application.
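Mapping a raw coordinate to one of the pre-divided regions can be sketched as follows. Square cells of a fixed side length are an assumption here for simplicity; the embodiment's own example uses circles of radius a, and any tiling in which each region has a known geographic location range works the same way:

```python
def region_of(x, y, cell_size):
    """Return the index of the pre-divided map region containing (x, y).

    The map is tiled with square cells of side `cell_size`; the pair of
    integer cell indices identifies one region and, implicitly, its
    geographic location range.
    """
    return (int(x // cell_size), int(y // cell_size))
```

Devices whose coordinates fall into the same index belong to the same region, so the server can look up in constant time whether the region a group occupies has virtual props placed in it.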
The organizer of the interactive activity, based on different needs, places virtual props in advance, randomly or at designated positions, anywhere in the activity area. For example, virtual props can be placed according to crowd density; generally, the denser the crowd in a region, the more virtual props are placed at the corresponding positions. Other placement methods can also be used in practical applications, which are not specifically limited here. Referring to FIG. 5, a schematic diagram showing, on a map, the regions corresponding to virtual props in an embodiment of this application: for each circular region, at least one virtual prop can be placed in it, where the black dots represent virtual props.
Therefore, after grouping the terminal devices, the server can determine the region on the map corresponding to each terminal device group, that is, determine in which region of the map each terminal device in each group is located.
Different user feature information further affects the probability that a user can obtain the corresponding virtual props, for example, the user's consumption behavior or age. Generally, users with higher consumption can always be allocated richer virtual props; and if the interactive activity promotes an emerging electronic product, younger users can also be allocated richer virtual props. For example, referring to FIG. 6, a schematic diagram of determining allocation weights by age in an embodiment of this application: as age increases, the determined allocation weight becomes higher. Specifically, the weight can be determined from the following prediction model (reconstructed here from the described coefficients as a regression over age; the original formula images are not reproduced in this text):

z = β0 + β1 · age

where age is the age, β0 = -26.52 represents the value of the linear term when the age is 0, and β1 = 0.78 means that when the age increases by one unit, the corresponding term increases by 0.78. Therefore, the final allocation weight formula is:

weight = 1 / (1 + e^(-(β0 + β1 · age)))

It should be noted that β0 = -26.52 and β1 = 0.78 are determined based on the actual ages of multiple users; they are used here only for illustration, and actual applications should depend on the circumstances.
Therefore, the server can determine the allocation weight of each corresponding terminal device based on each piece of user feature information, that is, determine the probability of each terminal device obtaining the corresponding virtual props, so that the possibility of obtaining virtual props in different interactive activities can be increased according to the user's actual situation, enhancing the user's fun and experience in the interactive activity. It should also be noted that, besides the aforementioned consumption behavior and age, the user feature information described above may include other feature information in practical applications, which is not limited here.
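A minimal sketch of the age-based allocation weight, assuming a logistic link over the stated coefficients (the link function is an assumption made here so the weight stays in (0, 1) and grows with age as in FIG. 6; only β0 = -26.52 and β1 = 0.78 come from the text):

```python
import math

# Coefficients stated in the text; the logistic link is an assumption.
BETA0 = -26.52
BETA1 = 0.78

def allocation_weight(age):
    """Allocation weight in (0, 1) that increases with age."""
    z = BETA0 + BETA1 * age          # linear predictor over age
    return 1.0 / (1.0 + math.exp(-z))
```

With these coefficients the linear predictor crosses zero at age 34, so the weight passes 0.5 there and keeps rising for older users.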
Therefore, when virtual props exist in the region corresponding to each terminal device group, the server determines each terminal device's first virtual prop based on each piece of current environment information and each terminal device's allocation weight.
That is, when virtual props exist in the region corresponding to each terminal device group, since different current environments lead to different virtual props being allocated, the server still needs to screen candidate virtual props from the prop pool in combination with the current environment information. On this basis, the server determines each terminal device's first virtual prop from the candidate virtual props based on the previously determined allocation weight of each terminal device.
For example, suppose the current environment information indicates rain, and the allocation weight of terminal device 1 is 0.3, that of terminal device 2 is 0.5, and that of terminal device 3 is 0.2. The server then finds, from the prop pool of the region corresponding to that terminal device group, all candidate virtual props related to rain, for example, a small umbrella, a big umbrella, or a car. Clearly, the server will determine the car as the first virtual prop of terminal device 2, the big umbrella as the first virtual prop of terminal device 1, and the small umbrella as the first virtual prop of terminal device 3. Therefore, in this embodiment, combining status information such as user feature information and current environment information enriches the user's experience in the interactive activity and fully presents the effect experience of the interactive props.
It should be noted that the aforementioned current environment information may include but is not limited to temperature, weather, or date; temperature includes but is not limited to high and low temperature, weather includes but is not limited to sunny and rainy days, and the date may be a holiday or a non-holiday; this is not specifically limited in the embodiments of this application.
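The rainy-day example can be sketched as follows. The prop pool structure, the tag sets, and the prop values are illustrative assumptions; the weight-ordered assignment mirrors the example, with the highest-weight device receiving the richest matching prop:

```python
def assign_props(weights, prop_pool, environment):
    """Assign environment-matching props to devices by allocation weight.

    weights:    dict mapping device id -> allocation weight
    prop_pool:  list of (name, value, tags) tuples; `tags` is a set of
                environment labels the prop is associated with
    Returns a dict mapping device id -> prop name.
    """
    # screen candidates by the current environment, richest first
    candidates = sorted((p for p in prop_pool if environment in p[2]),
                        key=lambda p: p[1], reverse=True)
    # devices ranked by allocation weight, highest first
    ranked = sorted(weights, key=weights.get, reverse=True)
    return {dev: candidates[i][0]
            for i, dev in enumerate(ranked) if i < len(candidates)}
```

Running it on the example from the text (weights 0.5, 0.3, 0.2 and a rain-tagged pool of car, big umbrella, small umbrella) reproduces the stated assignment.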
Optionally, in some other embodiments, for the aforementioned grouping method, the relative position between any two terminal devices can be determined based on the current geographic location of each terminal device, and the server groups the at least one terminal device based on the relative position between any two terminal devices.
Optionally, in some other embodiments, the server can compute the current geographic locations of any two terminal devices by a breadth-first algorithm, to obtain the relative position between the two. Specifically, referring to FIG. 7, a schematic diagram of grouping by a breadth-first algorithm in an embodiment of this application, with V1 to V8 representing terminal device 1 to terminal device 8: add V1 to the region, take out V1 and mark it true (i.e., visited), and add its adjacent nodes to the region, giving <—[V2 V3]; take out V2, mark it true, and add its unvisited adjacent nodes, giving <—[V3 V4 V5]; take out V3, mark it true, and add its unvisited adjacent nodes, giving <—[V4 V5 V6 V7]; take out V4, mark it true, and add its unvisited adjacent nodes, giving <—[V5 V6 V7 V8]; take out V5 and mark it true (its adjacent nodes have already been added), giving <—[V6 V7 V8]; take out V6, mark it true, and add its unvisited adjacent nodes, giving <—[V7 V8]; take out V7, mark it true, and add its unvisited adjacent nodes, giving <—[V8]; take out V8, mark it true, and add its unvisited adjacent nodes, giving <—[]. In this way, the relative position between the terminal devices can be determined by traversing downward layer by layer, and the grouping is determined based on the relative positions.
It should be noted that if the aforementioned relative position is within a preset range, the two terminal devices can be placed in the same group.
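The layer-by-layer traversal over V1 to V8 is a standard breadth-first search over a proximity graph, which can be sketched as follows. The Euclidean adjacency test and the `max_dist` threshold are assumptions standing in for the "preset range" the text mentions:

```python
from collections import deque

def group_devices(positions, max_dist):
    """Group devices by BFS: two devices are adjacent when their
    Euclidean distance is within max_dist.

    positions: dict mapping device id -> (x, y)
    Returns a list of groups (each a list of device ids).
    """
    ids = list(positions)

    def near(a, b):
        (x1, y1), (x2, y2) = positions[a], positions[b]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_dist

    visited, groups = set(), []
    for start in ids:
        if start in visited:
            continue
        queue, group = deque([start]), []
        visited.add(start)                 # mark true, i.e. visited
        while queue:
            dev = queue.popleft()          # take out the next device
            group.append(dev)
            for other in ids:              # add unvisited adjacent nodes
                if other not in visited and near(dev, other):
                    visited.add(other)
                    queue.append(other)
        groups.append(group)
    return groups
```

Each connected component of the proximity graph becomes one terminal device group, exactly as in the V1 to V8 walkthrough.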
204. The server determines the geographic location information at which each first virtual prop is dropped.
In this embodiment, the organizer of the interactive activity, based on different needs, places virtual props in advance, randomly or at designated positions, anywhere in the activity area. For example, virtual props can be placed according to crowd density; generally, the denser the crowd in a region, the more virtual props are placed at the corresponding positions. Other placement methods can also be used in practical applications, which are not specifically limited here. The server can then generate a correspondence between the randomly or designatedly placed virtual props and the geographic location information at which each is placed, and store the correspondence in a database.
In this way, after determining the first virtual prop that can be allocated to each terminal device, the server can determine, based on the corresponding correspondence, the geographic location information at which each first virtual prop is dropped.
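The stored correspondence amounts to a lookup table from prop to drop location. A minimal in-memory sketch, with a plain dict standing in for the database the text mentions and with illustrative identifiers and coordinates:

```python
# prop id -> geographic location at which it was dropped
drop_locations = {}

def record_drop(prop_id, geo):
    """Record where a prop was placed (randomly or at a designated spot)."""
    drop_locations[prop_id] = geo

def drop_location_of(prop_id):
    """Look up the drop location of an allocated prop, or None if unknown."""
    return drop_locations.get(prop_id)
```

After allocation, the server resolves each device's first virtual prop through this mapping and sends the resulting location onward in step 205.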
205. The server sends, to a first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped, where the first terminal device is any one of the at least one terminal device.
In this embodiment, the geographic location information at which each first virtual prop is dropped can be used to instruct the corresponding terminal device to acquire the corresponding first virtual prop from the corresponding geographic location; that is, it indicates the geographic location at which the first virtual prop is dropped, for example, a corner on the first floor of a shopping mall. Therefore, after obtaining the geographic location information at which each first virtual prop is dropped, the server can send it to the corresponding terminal devices; specifically, it can send it to any first terminal device among the at least one terminal device, so that the first terminal device can, under the instruction of the geographic location information at which its own first virtual prop is dropped, acquire the corresponding first virtual prop from the corresponding geographic location.
Optionally, in some other embodiments, for step 205, the server can inform the corresponding first terminal device of the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped through a voice message or a text message.
That is, after determining the geographic location information at which each first virtual prop is dropped, the server carries that information in a voice message or a text message and sends the voice or text message to the corresponding first terminal device, so that the use of voice or text improves the interactivity of the interaction. Referring to FIG. 8, a schematic diagram of feeding back the dropped geographic location information by voice in an embodiment of this application; similarly, referring to FIG. 9, a schematic diagram of feeding back the dropped geographic location information by text. It should be understood that, in practical applications, besides voice or text messages, other notification messages can also be used to inform the first terminal device of the dropped geographic location information of its first virtual prop; this is not specifically limited in this application.
206. The first terminal device acquires the corresponding first virtual prop from the corresponding geographic location according to the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped.
In this embodiment, after receiving from the server the geographic location information at which its corresponding first virtual prop is dropped, the first terminal device can acquire the first virtual prop from the corresponding geographic location under the instruction of that information.
Further, after arriving at the drop location according to the geographic location information, referring to FIG. 10, a schematic diagram of presenting the first virtual prop explicitly or implicitly in an embodiment of this application: if the first virtual prop corresponding to terminal device 1 appears explicitly, terminal device 1 can directly acquire it. However, if the first virtual prop corresponding to terminal device 2 appears implicitly, for example, locked or encrypted, terminal device 2 needs to perform an unlocking operation on the locked or encrypted first virtual prop, for example, performing a task specified by the unlocking operation such as singing; after successful unlocking, the first terminal device can acquire the first virtual prop, which fully improves the user's experience and enjoyment throughout the interactive activity.
Optionally, in some other embodiments, the first terminal device can receive the geographic location information at which its first virtual prop is dropped by receiving a voice message or a text message sent by the server; the first terminal device can then play the voice message or display the text message on the display interface, so that the user of the first terminal device can acquire the first virtual prop from the corresponding geographic location under the instruction of the dropped geographic location information played in the voice message, or under the instruction of the text message, so that the use of voice or text improves the interactivity of the interaction. It should be understood that, in practical applications, besides voice or text messages, other notification messages can also be used to obtain the dropped geographic location information of the first virtual prop; this is not specifically limited in this application.
In addition, it should be noted that each terminal device can collect its user's voice or text message and send it to the server, so that the server feeds back corresponding content. Referring to FIG. 11, a voice interaction system provided in an embodiment of this application: the terminal device collects a digital speech signal and sends it to the server after endpoint detection, noise reduction, and feature extraction; the server then trains acoustic models or language models on speech and language databases using phonetic-linguistic knowledge, signal processing technology, data mining technology, and statistical modeling methods, and then decodes the feature-extracted digital speech signal based on the acoustic or language model to obtain the recognition result, that is, text information.
Alternatively, referring to FIG. 12, another voice interaction system provided in an embodiment of this application: after the terminal device collects speech, feature extraction is performed, and the server performs recognition network decoding on the feature-extracted speech based on the expectation-maximization (EM) training algorithm, word segmentation, and acoustic models to obtain the recognition result. It should be understood that other voice interaction systems may also be used in practical applications; this is not limited in the embodiments of this application.
For example, when a user has questions while acquiring the first virtual prop, the user can inform the server by voice or text, and the server will feed back corresponding guidance procedures; this is not specifically limited in the embodiments of this application.
To facilitate a better understanding of the solutions proposed in the embodiments of this application, the specific procedure of this embodiment is introduced below. Referring to FIG. 13, a schematic diagram of another embodiment of the virtual prop allocation method provided in this embodiment, the method may include the following steps.
501. At least one terminal device acquires its current geographic location and status information in a first scene.
502. Each terminal device sends its current geographic location and status information to the server.
503. The server determines, based on the current geographic location and status information of each terminal device, the first virtual prop of each terminal device, where the first virtual prop corresponds to the first scene.
504. The server determines the geographic location information at which each first virtual prop is dropped.
505. The server sends, to a first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped, where the first terminal device is any one of the at least one terminal device.
506. The first terminal device acquires the corresponding first virtual prop from the corresponding geographic location according to the geographic location information at which its first virtual prop is dropped.
In this embodiment, steps 501-506 are similar to steps 201-206 described above with reference to FIG. 2, and details are not repeated here.
507. Each terminal device acquires its switching instruction in the first scene.
In this embodiment, after each first terminal device among the at least one terminal device has acquired its corresponding first virtual prop from the corresponding geographic location, the user has obtained the corresponding reward in the current first scene. The activity then enters the next scene, where the corresponding virtual props of that scene continue to be acquired. It should be understood that the switching operation can be triggered by clicking a switch button, inputting voice, or the like, to obtain the corresponding switching instruction; this is not specifically limited in the embodiments of this application.
508. Each terminal device sends the switching instruction to the server.
In this embodiment, after acquiring the corresponding first virtual prop in the first scene, the terminal device sends the switching instruction to the server, so that the server can switch the first scene to the second scene under the instruction of the switching instruction, further enabling each terminal device to enter the second scene and continue to acquire the corresponding second virtual props in the second scene.
509. The server switches the first scene to the second scene according to each switching instruction, where the second scene corresponds to a second virtual prop, and the level of the second virtual prop is higher than that of the first virtual prop.
In this embodiment, an interactive activity can include at least one scene, and each scene is configured with different virtual props. Referring to FIG. 14, a schematic diagram of changing scenes in an embodiment of this application: as the scene level increases, the level of the corresponding virtual props also increases. For example, the first virtual prop in the first scene may be a raincoat, the second virtual prop in the second scene may be an umbrella, and the third virtual prop in the third scene may be a car; this is not specifically limited in this embodiment.
Therefore, after the server switches the first scene to the second scene, each terminal device can acquire its current geographic location and status information in the second scene and send them to the server.
The server then determines, based on the current geographic location and status information of each terminal device, each terminal device's second virtual prop in the second scene; the server determines the geographic location information at which each second virtual prop is dropped based on the stored correspondence between the second virtual props and the dropped geographic location information in the second scene, and sends to the first terminal device the geographic location information at which the second virtual prop corresponding to the first terminal device is dropped. In this way, the first terminal device acquires the corresponding second virtual prop from the corresponding geographic location according to that information. For details, refer to the descriptions of steps 201-206 with reference to FIG. 2, which are not repeated here.
It should be understood that different scenes can all be understood with reference to steps 501-509; the switch from the first scene to the second scene is used here only as an example, and the first and second scenes are not specifically limited in the embodiments of this application.
In the embodiments of this application, combining switching between different scenes makes the whole interactive activity more interactive and fun.
The solutions provided in the embodiments of this application have been introduced above mainly from the perspective of methods. It can be understood that, to implement the above functions, corresponding hardware structures and/or software modules for performing each function are included. A person skilled in the art should easily realize that, in combination with the modules and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
In the embodiments of this application, the apparatus can be divided into functional modules according to the above method examples; for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module. The above integrated module can be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is illustrative and is merely a logical function division; there may be other division methods in actual implementation.
The server 60 in the embodiments of this application is described in detail below. Referring to FIG. 15, a schematic diagram of an embodiment of the server 60 provided in an embodiment of this application, the server 60 includes:
a receiving unit 601, configured to acquire the geographic location and status information of at least one terminal device in a first scene;
a determining unit 602, configured to determine, based on the geographic location and status information of each terminal device received by the receiving unit 601, a first virtual prop corresponding to each terminal device, where the first virtual prop corresponds to the first scene;
the determining unit 602 being further configured to determine the geographic location information at which each first virtual prop is dropped; and
a sending unit 603, configured to send, to a first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped, so as to instruct the first terminal device to acquire the corresponding first virtual prop, where the first terminal device is any one of the at least one terminal device.
Optionally, on the basis of the embodiment corresponding to FIG. 15, in another embodiment of the server 60 provided in this embodiment of this application, the status information includes environment information of the terminal device and user feature information, the at least one terminal device includes multiple terminal devices, and the determining unit 602 may include:
a grouping module, configured to group the multiple terminal devices based on their geographic locations received by the receiving unit 601, to obtain at least one terminal device group;
a determining module, configured to determine the region on a map corresponding to each terminal device group obtained by the grouping module, where the map is pre-divided into multiple regions each having a corresponding geographic location range;
the determining module being further configured to determine the allocation weight of each corresponding terminal device according to the user feature information received by the receiving unit 601; and
the determining module being further configured to: when virtual props exist in the region corresponding to a terminal device group, determine the first virtual prop of each corresponding terminal device based on the environment information and the allocation weight of the terminal device.
Optionally, on the basis of the optional embodiment of FIG. 15, in another embodiment of the server 60 provided in this embodiment of this application, the grouping module may include:
a determining submodule, configured to determine the relative position between any two terminal devices based on the geographic locations of the multiple terminal devices; and
a grouping submodule, configured to group the multiple terminal devices based on the relative position between any two terminal devices determined by the determining submodule.
Optionally, on the basis of the embodiment corresponding to FIG. 15, in another embodiment of the server 60 provided in this embodiment of this application, the determining submodule computes the geographic locations of any two terminal devices by a breadth-first algorithm, to obtain the relative position between the two terminal devices.
Optionally, on the basis of FIG. 15 and the optional embodiments corresponding to FIG. 15, in another embodiment of the server 60 provided in this embodiment of this application, the sending unit 603 may include:
a sending module, configured to send a voice message or a text message to the first terminal device, where the voice message or text message carries the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped.
Optionally, on the basis of FIG. 15 and the optional embodiments corresponding to FIG. 15, referring to FIG. 16, a schematic diagram of another embodiment of the server 60 provided in an embodiment of this application, the server 60 may further include:
the receiving unit 601, further configured to receive a switching instruction sent by the terminal device; and
a switching unit 604, configured to switch, according to the switching instruction received by the receiving unit 601, the first scene in which the terminal device is located to a second scene, where the second scene corresponds to a second virtual prop, and the level of the second virtual prop is higher than that of the first virtual prop.
The server 60 in the embodiments of this application has been described above from the perspective of modular functional entities; the terminal device 70 in the embodiments of this application is described below from a modular perspective. Referring to FIG. 17, a schematic diagram of an embodiment of the terminal device 70 provided in an embodiment of this application, the terminal device 70 may include:
an acquiring unit 701, configured to acquire the geographic location and status information of the terminal device in a first scene;
a sending unit 702, configured to send the geographic location and status information to a server, so that the server determines, based on the geographic location and status information of each terminal device, a first virtual prop corresponding to each terminal device, where the first virtual prop corresponds to the first scene;
a receiving unit 703, configured to receive the geographic location information, sent by the server, at which the first virtual prop is dropped; and
the acquiring unit 701 being further configured to acquire the terminal device's own first virtual prop from the corresponding geographic location according to the geographic location information received by the receiving unit 703.
Optionally, on the basis of the optional embodiment corresponding to FIG. 17, in another embodiment of the terminal device 70 provided in this embodiment of this application, the terminal device 70 further includes:
a collecting unit, configured to collect first surrounding environment information through the configured camera; and
a first correction unit, configured to correct the geographic location based on the corresponding Internet Protocol (IP) address and the first surrounding environment information.
Optionally, on the basis of the optional embodiment corresponding to FIG. 17, in another embodiment of the terminal device 70 provided in this embodiment of this application, the terminal device 70 further includes:
the acquiring unit, configured to acquire multiple pieces of historical geographic location information and to collect second surrounding environment information through the configured camera; and
a second correction unit, configured to correct the geographic location based on the multiple pieces of historical geographic location information obtained by the acquiring unit and the second surrounding environment information.
Optionally, on the basis of FIG. 17 and the optional embodiments corresponding to FIG. 17, in another embodiment of the terminal device 70 provided in this embodiment of this application, the receiving unit 703 may include:
a receiving module, configured to receive a voice message or a text message sent by the server, where the voice message or text message carries the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped.
Optionally, on the basis of FIG. 17 and the optional embodiments corresponding to FIG. 17, in another embodiment of the terminal device 70 provided in this embodiment of this application,
the acquiring unit 701 is further configured to acquire a switching instruction in the first scene; and
the sending unit 702 is configured to send the switching instruction to the server, so that the server switches the first scene to a second scene, where the second scene corresponds to a second virtual prop, and the level of the second virtual prop is higher than that of the first virtual prop.
The server 60 and the terminal device 70 in the embodiments of this application have been described above from the perspective of modular functional entities; they are described below from the perspective of hardware processing. FIG. 18 is a schematic diagram of the hardware structure of the communication apparatus in an embodiment of this application. As shown in FIG. 18, the communication apparatus may include:
at least one processor 801, a communication line 807, a memory 803, and at least one communication interface 804.
The processor 801 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling execution of the programs of the solutions of this application.
The communication line 807 may include a path for transferring information between the above components.
The communication interface 804, which uses any transceiver-like apparatus, is configured to communicate with other apparatuses or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 803 may be a read-only memory (ROM) or another type of static storage apparatus that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage apparatus that can store information and instructions. The memory may exist independently and be connected to the processor through the communication line 807, or may be integrated with the processor.
The memory 803 is configured to store computer-executable instructions for executing the solutions of this application, and execution is controlled by the processor 801. The processor 801 is configured to execute the computer-executable instructions stored in the memory 803, thereby implementing the virtual prop allocation method provided in the above embodiments of this application.
Optionally, the computer-executable instructions in the embodiments of this application may also be called application program code, which is not specifically limited in the embodiments of this application.
In specific implementation, as an embodiment, the communication apparatus may include multiple processors, for example, the processor 801 and the processor 802 in FIG. 18. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more apparatuses, circuits, and/or processing cores for processing data (for example, computer program instructions).
In specific implementation, as an embodiment, the communication apparatus may further include an output device 805 and an input device 806. The output device 805 communicates with the processor 801 and can display information in multiple ways. The input device 806 communicates with the processor 801 and can receive user input in multiple ways; for example, the input device 806 may be a mouse, a touchscreen apparatus, or a sensing apparatus.
The communication apparatus described above may be a general-purpose apparatus or a dedicated apparatus. In specific implementation, the communication apparatus may be a desktop computer, a portable computer, a network server, a wireless terminal apparatus, an embedded apparatus, or an apparatus with a structure similar to that in FIG. 18. The type of the communication apparatus is not limited in the embodiments of this application.
The receiving unit 601, the acquiring unit 701, and the receiving unit 703 can all be implemented by the input device 806; the sending unit 603 and the sending unit 702 can be implemented by the output device 805; and the determining unit 602 and the switching unit 604 can be implemented by the processor 801 or the processor 802.
In addition, an embodiment of this application further provides a storage medium configured to store a computer program, the computer program being used to perform the methods provided in the above embodiments.
An embodiment of this application further provides a computer program product including instructions that, when run on a computer, cause the computer to perform the methods provided in the above embodiments.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When software is used for implementation, implementation may be wholly or partly in the form of a computer program product.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference can be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of units is merely a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some technical features thereof; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (16)

  1. A virtual prop allocation method, the method being performed by a server and comprising:
    acquiring the geographic location and status information of at least one terminal device in a first scene;
    determining, based on the geographic location and status information of each terminal device, a first virtual prop corresponding to each terminal device, the first virtual prop corresponding to the first scene;
    determining the geographic location information at which each first virtual prop is dropped; and
    sending, to a first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped, so as to instruct the first terminal device to acquire the corresponding first virtual prop, the first terminal device being any one of the at least one terminal device.
  2. The method according to claim 1, wherein the status information comprises environment information of the terminal device and user feature information, the at least one terminal device comprises multiple terminal devices, and determining, based on the geographic location and status information of each terminal device, a first virtual prop corresponding to each terminal device comprises:
    grouping the multiple terminal devices based on their geographic locations, to obtain at least one terminal device group;
    determining the region on a map corresponding to the terminal device group, wherein the map is pre-divided into multiple regions each having a corresponding geographic location range;
    determining the allocation weight of each corresponding terminal device according to the user feature information; and
    when virtual props exist in the region corresponding to the terminal device group, determining the first virtual prop of each corresponding terminal device based on the environment information and the allocation weight of the terminal device.
  3. The method according to claim 2, wherein grouping the multiple terminal devices based on their geographic locations comprises:
    determining the relative position between any two terminal devices based on the geographic locations of the multiple terminal devices; and
    grouping the multiple terminal devices based on the relative position between any two terminal devices.
  4. The method according to claim 3, wherein determining the relative position between any two terminal devices based on the geographic locations of the multiple terminal devices comprises:
    computing the geographic locations of any two terminal devices by a breadth-first algorithm, to obtain the relative position between the two terminal devices.
  5. The method according to any one of claims 1 to 4, wherein sending, to the first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped so as to instruct it to acquire the corresponding first virtual prop comprises:
    sending a voice message or a text message to the first terminal device, wherein the voice message or text message carries the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped.
  6. The method according to any one of claims 1 to 5, wherein, after sending to the first terminal device the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped so as to instruct it to acquire the corresponding first virtual prop, the method further comprises:
    receiving a switching instruction sent by the terminal device; and
    switching, according to the switching instruction, the first scene in which the terminal device is located to a second scene, wherein the second scene corresponds to a second virtual prop, and the level of the second virtual prop is higher than that of the first virtual prop.
  7. A virtual prop allocation method, the method being performed by a terminal device and comprising:
    acquiring the geographic location and status information of the terminal device in a first scene;
    sending the geographic location and status information to a server, so that the server determines, based on the geographic location and status information of each terminal device, a first virtual prop corresponding to each terminal device, the first virtual prop corresponding to the first scene;
    receiving, from the server, the geographic location information at which the first virtual prop is dropped; and
    acquiring the terminal device's own first virtual prop from the corresponding geographic location according to the geographic location information at which the first virtual prop is dropped.
  8. The method according to claim 7, further comprising:
    collecting first surrounding environment information through a configured camera; and
    correcting the geographic location based on the corresponding Internet Protocol (IP) information and the first surrounding environment information.
  9. The method according to claim 7, further comprising:
    acquiring multiple pieces of historical geographic location information, and collecting second surrounding environment information through a configured camera; and
    correcting the geographic location based on the multiple pieces of historical geographic location information and the second surrounding environment information.
  10. The method according to any one of claims 7 to 9, wherein receiving, from the server, the geographic location information at which the first virtual prop is dropped comprises:
    receiving a voice message or a text message sent by the server, wherein the voice message or text message carries the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped.
  11. The method according to any one of claims 7 to 10, wherein, after acquiring the terminal device's own first virtual prop from the corresponding geographic location according to the geographic location information at which the first virtual prop is dropped, the method further comprises:
    acquiring a switching instruction in the first scene; and
    sending the switching instruction to the server, so that the server switches the first scene to a second scene, wherein the second scene corresponds to a second virtual prop, and the level of the second virtual prop is higher than that of the first virtual prop.
  12. A server, comprising:
    a receiving unit, configured to acquire the geographic location and status information of at least one terminal device in a first scene;
    a determining unit, configured to determine, based on the geographic location and status information of each terminal device received by the receiving unit, a first virtual prop corresponding to each terminal device, the first virtual prop corresponding to the first scene;
    the determining unit being further configured to determine the geographic location information at which each first virtual prop is dropped; and
    a sending unit, configured to send, to a first terminal device, the geographic location information at which the first virtual prop corresponding to the first terminal device is dropped, so as to instruct the first terminal device to acquire the corresponding first virtual prop, the first terminal device being any one of the at least one terminal device.
  13. A server, comprising:
    an input/output (I/O) interface, a processor, and a memory,
    the memory storing program instructions;
    the processor being configured to execute the program instructions stored in the memory, to perform the method according to any one of claims 1 to 6.
  14. A terminal device, comprising:
    an input/output (I/O) interface, a processor, and a memory,
    the memory storing program instructions;
    the processor being configured to execute the program instructions stored in the memory, to perform the method according to any one of claims 7 to 11.
  15. A computer-readable storage medium, configured to store a computer program, the computer program being used to perform the method according to any one of claims 1 to 6, or the method according to any one of claims 7 to 11.
  16. A computer program product comprising instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 6, or the method according to any one of claims 7 to 11.
PCT/CN2020/124292 2020-01-06 2020-10-28 Virtual prop allocation method and related apparatus WO2021139328A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP20911764.7A EP3995935A4 (en) 2020-01-06 2020-10-28 METHOD FOR ALLOCATING VIRTUAL ACCESSORIES AND ASSOCIATED APPARATUS
JP2022517927A JP7408785B2 (ja) 2020-01-06 2020-10-28 Virtual prop allocation method and related apparatus
KR1020227005601A KR20220032629A (ko) 2020-01-06 2020-10-28 Virtual prop allocation method and related apparatus
US17/581,502 US20220148231A1 (en) 2020-01-06 2022-01-21 Virtual prop allocation method and related apparatuses

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010010741.3 2020-01-06
CN202010010741.3A CN111221416B (zh) Virtual prop allocation method, server, and terminal device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/581,502 Continuation US20220148231A1 (en) 2020-01-06 2022-01-21 Virtual prop allocation method and related apparatuses

Publications (1)

Publication Number Publication Date
WO2021139328A1 true WO2021139328A1 (zh) 2021-07-15

Family

ID=70831261

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124292 WO2021139328A1 (zh) 2020-01-06 2020-10-28 一种虚拟道具分配的方法和相关装置

Country Status (6)

Country Link
US (1) US20220148231A1 (zh)
EP (1) EP3995935A4 (zh)
JP (1) JP7408785B2 (zh)
KR (1) KR20220032629A (zh)
CN (1) CN111221416B (zh)
WO (1) WO2021139328A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111221416B (zh) 2020-01-06 2021-12-07 腾讯科技(深圳)有限公司 Virtual prop allocation method, server, and terminal device
CN111672123A (zh) 2020-06-10 2020-09-18 腾讯科技(深圳)有限公司 Control method and apparatus for virtual operation object, storage medium, and electronic device
CN112416494B (zh) 2020-11-20 2022-02-15 腾讯科技(深圳)有限公司 Virtual resource processing method and apparatus, electronic device, and storage medium
CN113101648B (zh) 2021-04-14 2023-10-24 北京字跳网络技术有限公司 Map-based interaction method, device, and storage medium
CN117472184A (zh) 2023-11-02 2024-01-30 广州保呗科技有限公司 Data acquisition apparatus and method for a virtual site

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537582A * 2016-08-24 2018-09-14 阿里巴巴集团控股有限公司 Data processing method and apparatus
CN109274977A * 2017-07-18 2019-01-25 腾讯科技(深圳)有限公司 Virtual prop allocation method, server, and client
CN109284714A * 2018-09-21 2019-01-29 上海掌门科技有限公司 Method for allocating, distributing, and claiming virtual articles
CN109829703A * 2019-01-29 2019-05-31 腾讯科技(深圳)有限公司 Virtual article distribution method and apparatus
CN111221416A * 2020-01-06 2020-06-02 腾讯科技(深圳)有限公司 Virtual prop allocation method, server, and terminal device

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002356555A1 (en) * 2001-10-09 2003-04-22 Sirf Technologies, Inc. Method and system for sending location coded images over a wireless network
US8103445B2 (en) * 2005-04-21 2012-01-24 Microsoft Corporation Dynamic map rendering as a function of a user parameter
CN102202256A (zh) * 2010-03-25 2011-09-28 陈冠岭 Location-based mobile virtual pet system and method
JP2012027746A (ja) * 2010-07-25 2012-02-09 Nhn Corp Content system, server apparatus, and operating method of server apparatus
US8510658B2 (en) * 2010-08-11 2013-08-13 Apple Inc. Population segmentation
CN102013073A (zh) * 2010-12-03 2011-04-13 蒋君伟 Information fixed-point processing method
CN106964150B (zh) * 2011-02-11 2021-03-02 漳州市爵晟电子科技有限公司 Motion anchor-point control system and wearable fixed-point control device
US9782668B1 (en) * 2012-07-31 2017-10-10 Niantic, Inc. Placement of virtual elements in a virtual world associated with a location-based parallel reality game
US20140129342A1 (en) * 2012-11-06 2014-05-08 Apple Inc. Dynamically adjusting invitational content placement opportunities in interactive environments
US9582516B2 (en) * 2013-10-17 2017-02-28 Nant Holdings Ip, Llc Wide area augmented reality location-based services
EP3097545A4 (en) * 2014-01-22 2017-07-19 Speakeasy, Inc. Systems and methods of socially-driven product offerings
CN104540033B (zh) * 2014-12-17 2018-05-29 广州酷狗计算机科技有限公司 Program anchor display method and apparatus
CN105450736B (zh) * 2015-11-12 2020-03-17 小米科技有限责任公司 Method and apparatus for connecting with virtual reality
CN107135243B (zh) * 2016-02-29 2020-10-16 阿里巴巴集团控股有限公司 Relative position determination method and apparatus
US11132839B1 (en) * 2016-03-01 2021-09-28 Dreamcraft Attractions Ltd. System and method for integrating real props into virtual reality amusement attractions
US10895950B2 (en) * 2016-12-09 2021-01-19 International Business Machines Corporation Method and system for generating a holographic image having simulated physical properties
CN106920079B (zh) * 2016-12-13 2020-06-30 阿里巴巴集团控股有限公司 Augmented-reality-based virtual object allocation method and apparatus
US10403050B1 (en) * 2017-04-10 2019-09-03 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
CN107203902A (zh) * 2017-05-12 2017-09-26 杭州纸箱哥文化传播有限公司 Virtual article distribution apparatus
US11113885B1 (en) * 2017-09-13 2021-09-07 Lucasfilm Entertainment Company Ltd. Real-time views of mixed-reality environments responsive to motion-capture data
CN111742560B (zh) * 2017-09-29 2022-06-24 华纳兄弟娱乐公司 Method and apparatus for providing film and television content to users
JP7182862B2 (ja) * 2017-10-30 2022-12-05 株式会社コーエーテクモゲームス Game program, recording medium, and game processing method
US20190221031A1 (en) * 2018-01-17 2019-07-18 Unchartedvr Inc. Virtual experience control mechanism
US10679412B2 (en) * 2018-01-17 2020-06-09 Unchartedvr Inc. Virtual experience monitoring mechanism
CN108733427B (zh) * 2018-03-13 2020-04-21 Oppo广东移动通信有限公司 Configuration method and apparatus for input component, terminal, and storage medium
US11170465B1 (en) * 2019-01-28 2021-11-09 Uncle Monkey Media Inc. Virtual location management computing system and methods thereof
US20200250430A1 (en) * 2019-01-31 2020-08-06 Dell Products, Lp System and Method for Constructing an Interactive Datacenter Map using Augmented Reality and Available Sensor Data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537582A * 2016-08-24 2018-09-14 阿里巴巴集团控股有限公司 Data processing method and apparatus
CN109274977A * 2017-07-18 2019-01-25 腾讯科技(深圳)有限公司 Virtual prop allocation method, server, and client
CN109284714A * 2018-09-21 2019-01-29 上海掌门科技有限公司 Method for allocating, distributing, and claiming virtual articles
CN109829703A * 2019-01-29 2019-05-31 腾讯科技(深圳)有限公司 Virtual article distribution method and apparatus
CN111221416A * 2020-01-06 2020-06-02 腾讯科技(深圳)有限公司 Virtual prop allocation method, server, and terminal device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3995935A4

Also Published As

Publication number Publication date
US20220148231A1 (en) 2022-05-12
CN111221416B (zh) 2021-12-07
JP7408785B2 (ja) 2024-01-05
CN111221416A (zh) 2020-06-02
JP2022550702A (ja) 2022-12-05
EP3995935A4 (en) 2022-10-05
EP3995935A1 (en) 2022-05-11
KR20220032629A (ko) 2022-03-15

Similar Documents

Publication Publication Date Title
WO2021139328A1 (zh) Virtual prop allocation method and related apparatus
US9204259B2 (en) Indoor localization of mobile devices
US9258681B2 (en) Indoor localization of mobile devices
CN109962939B (zh) Location recommendation method and apparatus, server, terminal, and storage medium
US10467311B2 (en) Communication system and method of generating geographic social networks in virtual space
JP2019061698A (ja) System and method for determining experiential experts and routing questions
US8386422B1 (en) Using constructed paths to supplement map data
CN107024221B (zh) Navigation route formulation method and apparatus
WO2013055980A1 (en) Method, system, and computer program product for obtaining images to enhance imagery coverage
CN106570799A (zh) Smart tourism method with QR-code-based intelligent audio and video guide
CN112784002B (zh) Virtual scene generation method, apparatus, device, and storage medium
CN110672089A (zh) Method and device for navigation in an indoor environment
CN109409612A (zh) Path planning method, server, and computer storage medium
CN107025251A (zh) Data pushing method and apparatus
CN109059934A (zh) Path planning method and apparatus, terminal, and storage medium
CN108917766A (zh) Navigation method and mobile terminal
JP5697704B2 (ja) Search server, search method, and search program
CN110196951A (zh) User matching method and device
CN110087185A (zh) Business district fence generation method, apparatus, device, and computer-readable storage medium
KR100710045B1 (ko) Server and method for providing video route information
CN112539752B (zh) Indoor positioning method and indoor positioning apparatus
CN111954874A (zh) Identifying functional zones within a geographic region
CN108595650B (zh) Construction method, system, device, and storage medium for a virtual badminton court
Khan et al. Indoor navigation systems using annotated maps in mobile augmented reality
CN114372213B (zh) Information processing method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20911764

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020911764

Country of ref document: EP

Effective date: 20220201

ENP Entry into the national phase

Ref document number: 20227005601

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2022517927

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE