WO2022166551A1 - Interaction method, apparatus, electronic device, and storage medium - Google Patents

Interaction method, apparatus, electronic device, and storage medium

Info

Publication number
WO2022166551A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
user
game resource
drawn
drawing area
Application number
PCT/CN2022/071705
Other languages
English (en)
French (fr)
Inventor
黄亦辰
吴彬
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2022166551A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface

Definitions

  • the present disclosure relates to the field of computer technology, and in particular, to an interaction method, apparatus, electronic device, and storage medium.
  • the card drawing method is a common way of extracting virtual resources in card games, and different game resources have different draw probabilities. In existing card drawing methods, the probability of a user drawing a given game resource is usually fixed, and the probability of drawing a high-level game resource is usually low, which degrades the user's game experience; moreover, card drawing is implemented in a single way, which is not conducive to increasing game interactivity.
  • the embodiments of the present disclosure provide an interaction method, apparatus, electronic device, and storage medium.
  • an embodiment of the present disclosure provides an interaction method, including:
  • in response to a user's game resource extraction request, a game resource extraction scene is displayed; the game resource extraction scene includes a drawing area;
  • based on an image drawn by the user in the drawing area, a target game resource matching the image is acquired as the user's game resource extraction result, and the target game resource is displayed.
  • an embodiment of the present disclosure further provides an interaction device, including:
  • the resource extraction scene display module is used to display the game resource extraction scene in response to the user's game resource extraction request; the game resource extraction scene includes a drawing area;
  • a drawing image receiving module for receiving the image drawn by the user in the drawing area
  • a target game resource acquisition module configured to acquire a target game resource matching the image based on the drawn image, as the user's game resource extraction result, and display the target game resource.
  • embodiments of the present disclosure further provide an electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the electronic device is caused to implement any of the interaction methods provided in the embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a computing device, the computing device is caused to implement any of the interaction methods provided in the embodiments of the present disclosure.
  • in the embodiments of the present disclosure, for resource extraction games, the image drawn by the user in the drawing area is acquired, and the target game resource is matched based on the image as the user's resource extraction result. This solves the problems in existing solutions that the user's probability of drawing a specific game resource is low and that resource extraction is implemented in a single way; it improves the probability of users drawing specific game resources, enriches the extraction implementations of resource extraction games, and improves the fun and interactivity of games. For example, for a high-level resource in some games, the probability of the user drawing it is usually very low; in the embodiments of the present disclosure, the user can draw an image related to the high-level resource and be matched to that resource based on the image, thereby improving the efficiency with which users draw high-level game resources.
  • FIG. 1 is a flowchart of an interaction method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a game resource extraction scenario provided by an embodiment of the present disclosure
  • FIG. 3 is a flowchart of another interaction method provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of another game resource extraction scenario provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a display effect of a user avatar or a specific avatar according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a display effect of another user avatar or a specific avatar according to an embodiment of the present disclosure
  • FIG. 7 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of an interaction method provided by an embodiment of the present disclosure, which can be applied to the situation of how to implement resource extraction for a resource extraction type game, and the interaction method can be implemented by an interaction device.
  • the interaction apparatus can be implemented by software and/or hardware, and can be integrated on any electronic device with computing capabilities, such as terminals such as smart phones, tablet computers, notebook computers, and desktop computers.
  • the interaction method provided by the embodiment of the present disclosure may include:
  • the user can trigger a game resource extraction request by touching a resource extraction control or a resource extraction prop icon on the game interface; the electronic device responds to the request and displays the game resource extraction scene, which summarizes the interface display effect when the user extracts resources. The scene may include, but is not limited to, a drawing area, which serves as the response area in which the user draws an image.
  • the specific layout effect of the game resource extraction scene can be designed according to game development requirements, which is not specifically limited in the embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram of a game resource extraction scenario provided by an embodiment of the present disclosure, which is used to illustrate the embodiment of the present disclosure, but should not be construed as a specific limitation to the embodiment of the present disclosure.
  • the drawing area can be displayed in the middle area of the game resource extraction scene. After the user completes the drawing operation in the drawing area, the user can also submit the drawn image by touching the "Submit" control (not shown in FIG. 2 ) in the drawing area.
  • the drawing operations that the user can perform may include, but are not limited to, drawing graphics, adding colors, etc.; in the game resource extraction scene, the user may be provided with various optional drawing tools, such as brushes with different line thicknesses, erasers, etc.
  • S102 Receive an image drawn by the user in the drawing area.
  • the electronic device may determine the image drawn by the user in the drawing area in response to the user's drawing image submission operation, and then may determine the target game resource matching the image in the game resource library.
  • receiving the image drawn by the user in the drawing area may include:
  • the image drawn by the user in the drawing area is determined based on the sliding track.
  • the user can slide in the drawing area with a fingertip or any other available touch tool (such as a stylus) to generate a sliding track. If the touch object (i.e., the user's fingertip or touch tool) leaves the drawing area, for example if it moves out of the drawing area or leaves the game interface, the image drawn by the user can be determined based on the generated sliding track. Alternatively, if no new sliding operation is detected within a preset time interval after a sliding operation terminates (for example, x seconds, which can be set according to actual needs), the user's sliding operation is considered to be over, and the image drawn by the user is determined based on the generated sliding track.
  • the sliding track may include a track for determining the outline of a graphic, a sliding track for filling a color, and the like. That is, in the process of determining the image drawn by the user based on the sliding track, the color of the image may also be determined based on the user's color filling operation, thereby determining the final effect of the drawn image.
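The end-of-stroke logic described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class name, callbacks, and the idle-timeout mechanism are assumptions, standing in for whatever event handling the electronic device actually uses. A stroke is finalized either when the touch object leaves the drawing area or when no new sliding operation arrives within the preset time interval.

```python
import time

class DrawingAreaTracker:
    """Hypothetical tracker for a user's sliding operation in the drawing area."""

    def __init__(self, idle_timeout=2.0):
        self.idle_timeout = idle_timeout   # preset time interval, settable per game design
        self.track = []                    # list of (x, y) points forming the sliding track
        self.last_event_time = None

    def on_slide(self, x, y, now=None):
        # Record one point of the sliding track.
        now = time.monotonic() if now is None else now
        self.track.append((x, y))
        self.last_event_time = now

    def on_touch_left_area(self):
        # Touch object moved out of the drawing area: finalize immediately.
        return self._finalize()

    def poll(self, now=None):
        # Called periodically: finalize if no new sliding operation was
        # detected within the preset time interval.
        now = time.monotonic() if now is None else now
        if self.last_event_time is not None and now - self.last_event_time >= self.idle_timeout:
            return self._finalize()
        return None

    def _finalize(self):
        # Hand back the accumulated track as the drawn image's trajectory.
        drawn, self.track, self.last_event_time = self.track, [], None
        return drawn
```

In a real client, `poll` would be driven by the UI event loop; here it takes an explicit timestamp so the behavior is testable.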
  • the similarity between the drawn image and each game resource in the game resource library can be calculated, and the target game resource is determined according to the similarity; for example, the game resource corresponding to a similarity exceeding a preset threshold (the value can be set adaptively) is identified as the target game resource.
  • the similarity calculation can be implemented by any available image similarity calculation method, which is not specifically limited in the embodiments of the present disclosure, such as similarity calculation based on image feature vectors, or similarity calculation based on image edge detection and comparison.
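As one concrete instance of the feature-vector approach mentioned above, cosine similarity between flattened pixel arrays is a deliberately simple stand-in; the disclosure leaves the actual method open, and the function names and threshold value here are illustrative assumptions.

```python
import numpy as np

def image_similarity(a, b):
    """Cosine similarity between two equally-sized grayscale images,
    treated as flat feature vectors."""
    va = np.asarray(a, dtype=float).ravel()
    vb = np.asarray(b, dtype=float).ravel()
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

def match_resources(drawn, resource_library, threshold=0.8):
    """Return {resource_id: similarity} for resources whose similarity to the
    drawn image exceeds the preset threshold (candidate resources)."""
    scores = {rid: image_similarity(drawn, img) for rid, img in resource_library.items()}
    return {rid: s for rid, s in scores.items() if s > threshold}
```

A production system would more likely compare learned feature embeddings or edge maps, but the thresholding step would look the same.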
  • the image drawn by the user in the drawing area includes an image associated with a virtual character or an image associated with a virtual prop. That is, the embodiments of the present disclosure can be applied to the situation of extracting virtual characters in the game, and also applicable to the situation of extracting virtual props in the game.
  • the virtual character can be any character in the game, which is differentiated according to the game type.
  • the virtual item can be any item related to the virtual character in the game, such as weapons, armor, decorations, etc.
  • the embodiment of the present disclosure supports the user to draw any image related to the virtual character or virtual prop, so as to realize the extraction of game assets.
  • the image associated with the virtual character includes at least one of a character image, a face image, an expression image, an action image, and an accessory (eg, decoration) image of the virtual character.
  • the electronic device can determine the matching virtual character according to the character image, face image, expression image, action image, or accessory image drawn by the user. For example, for an expression image drawn by the user, similarity matching is first performed in an expression portrait library to determine the matching target expression portrait, and then, based on the correspondence between the target expression portrait and the target virtual character, the target virtual character is determined as the user's extraction result.
  • Target game resources include cards associated with virtual characters or virtual props.
  • cards associated with virtual characters include but are not limited to image display cards of the virtual characters
  • cards associated with virtual props include but are not limited to style display cards of virtual props.
  • the image display card can also be implemented in the form of image display fragments
  • the style display card can also be implemented in the form of style display fragments.
  • the game can also be designed so that, after multiple image display fragments are obtained, they can be exchanged for a complete virtual character, and after multiple style display fragments are obtained, they can be exchanged for a complete virtual prop.
  • acquiring the target game resource matching the image includes:
  • the similarity between the image and each game resource in the to-be-extracted game resource library is calculated; for the calculation of the similarity, any available similarity calculation method can be adopted, which is not specifically limited in the embodiment of the present disclosure;
  • the target game resource is determined from at least one candidate resource based on a preset rule.
  • the preset rule is used to define how to determine the target game resource from multiple candidate resources.
  • the preset rules include: determining the resource with the highest similarity among the candidate resources as the target game resource; or determining the resource with the highest initial draw probability among the candidate resources as the target game resource; or determining the target game resource based on both the similarity and the initial draw probability of the candidate resources.
  • for example, a candidate resource whose initial draw probability is greater than a first threshold and whose similarity is greater than a second threshold is determined as the target game resource.
  • the initial draw probability is the draw hit rate preset for each game resource in the game development process. The greater the initial draw probability, the greater the probability that the corresponding game resource will be drawn by the user.
  • the value of each threshold can be set adaptively.
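The two-threshold rule above can be sketched as follows. This is an illustrative reading of one preset rule, with assumed data layout and a tie-breaking choice (highest similarity among eligible candidates) that the source does not mandate.

```python
def pick_target(candidates, prob_threshold=0.05, sim_threshold=0.8):
    """Among candidate resources, keep those whose initial draw probability
    exceeds the first threshold and whose similarity exceeds the second,
    then return the id of the most similar one (or None if none qualify).
    Each candidate is a dict with 'id', 'similarity', and 'initial_prob'."""
    eligible = [c for c in candidates
                if c["initial_prob"] > prob_threshold and c["similarity"] > sim_threshold]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c["similarity"])["id"]
```

Both threshold values would be set adaptively per game, as the text notes.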
  • the method provided by the embodiment of the present disclosure further includes:
  • the initial draw probability of the candidate resources is increased, so that the target game resource is determined, based on a preset rule, from at least one candidate resource whose probability has been increased. That is, after the candidate resources are determined based on the similarity calculation, the initial draw probability of each candidate resource can be adjusted: for example, each probability is increased by a preset proportion (e.g., by xx% on the basis of the initial draw probability), or the increase proportion is determined according to the similarity between each candidate resource and the user-drawn image, and the final draw probability of each candidate resource is then determined based on that proportion; for example, the greater the similarity of a candidate resource, the larger its probability increase.
  • after the probability increase, the candidate resource with the highest draw probability can be determined as the target game resource; of course, the target game resource can also be determined based on both the similarity of the candidate resources and the increased draw probability.
  • in this way, the probability of the user drawing a specific game resource is improved.
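The similarity-proportional probability increase can be sketched like this. The `base_boost` constant and the linear boost formula are assumptions; the source only requires that a more similar candidate receives a larger increase, after which the target is drawn from the adjusted probabilities.

```python
import random

def boosted_probabilities(candidates, base_boost=0.2):
    """Raise each candidate's initial draw probability in proportion to its
    similarity to the drawn image, then renormalize so the final draw
    probabilities sum to 1."""
    weights = [c["initial_prob"] * (1.0 + base_boost * c["similarity"])
               for c in candidates]
    total = sum(weights)
    return [w / total for w in weights]

def draw_target(candidates, base_boost=0.2, rng=None):
    """Draw the target game resource id from the boosted probabilities."""
    rng = rng or random.Random()
    probs = boosted_probabilities(candidates, base_boost)
    return rng.choices([c["id"] for c in candidates], weights=probs, k=1)[0]
```

Renormalizing keeps the adjusted values a valid distribution even after per-candidate boosts of different sizes.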
  • acquiring the target game resource matching the image includes:
  • based on the drawn image, the similarity between the image and each game resource in the to-be-extracted game resource library is calculated, and the game resource corresponding to the maximum similarity is determined as the target game resource.
  • the game resource corresponding to the maximum similarity can be directly determined as the target game resource, so as to improve the efficiency of the user in extracting specific game resources.
  • in the embodiments of the present disclosure, for resource extraction games, the image drawn by the user in the drawing area is acquired, and the target game resource is matched based on the image as the user's resource extraction result. This solves the problems in existing solutions that the user's probability of drawing a specific game resource is low and that resource extraction is implemented in a single way; it improves the probability of users drawing specific game resources, enriches the extraction implementations of resource extraction games, and improves the fun and interactivity of games. For example, for a high-level resource in some games, the probability of the user drawing it is usually very low; in the embodiments of the present disclosure, the user can draw an image related to the high-level resource and be matched to that resource based on the image, thereby improving the efficiency with which users draw high-level game resources.
  • FIG. 3 is a flowchart of another interaction method provided by an embodiment of the present disclosure, which is further optimized and expanded based on the foregoing technical solution, and may be combined with the foregoing optional implementation manners.
  • the game resource extraction scene may also include a template element, and the template element may be used as a basic component element for a user to draw an image.
  • the template elements may include character image material, expression material, face material, action material, accessory material, virtual prop material, etc. related to game resources.
  • the interaction method provided by the embodiment of the present disclosure may include:
  • the game resource extraction scene includes a drawing area and a template element.
  • FIG. 4 is a schematic diagram of another game resource extraction scenario provided by an embodiment of the present disclosure, which is used to illustrate the embodiment of the present disclosure, but should not be construed as a specific limitation to the embodiment of the present disclosure.
  • the specific display positions of the drawing area and the template elements in the game resource extraction scene can be flexibly set according to the layout of the game interface.
  • the template element is displayed in the right area of the game resource extraction scene.
  • each template element shown in FIG. 4 is only used as an example of an element, and specific template elements can be preset according to requirements.
  • the user can perform a click operation on a template element, and the electronic device determines the target template element selected by the user according to the click operation and displays it in the drawing area; the user can also perform a drag operation on a template element and drag it into the drawing area, in which case that template element is likewise the target template element selected by the user.
  • the user can combine the target template elements, for example by moving and splicing them, to obtain a target stitched image, which can be used directly as the image drawn by the user. The user's drawing operation and the target stitched image can also be combined to obtain the final drawn image: the drawing operation can create a new image independent of the target stitched image, or redraw (i.e., edit) the target stitched image on its basis, for example through line editing or color editing. Of course, the image drawn by the user can also be determined directly from the user's drawing operation alone.
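The moving-and-splicing combination of template elements can be sketched as pasting element bitmaps onto a blank canvas. The function name, the (row, col) placement convention, and the last-writer-wins overlap rule are all illustrative assumptions.

```python
import numpy as np

def stitch_elements(canvas_shape, placements):
    """Combine selected target template elements into a target stitched image.
    `placements` is a list of (element_array, (row, col)) pairs; later
    placements overwrite earlier ones where they overlap."""
    canvas = np.zeros(canvas_shape)
    for element, (r, c) in placements:
        element = np.asarray(element)
        h, w = element.shape
        canvas[r:r + h, c:c + w] = element
    return canvas
```

The user's subsequent drawing operations (line or color edits) would then modify this canvas in place before it is submitted for matching.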
  • receiving an image drawn by the user in the drawing area includes:
  • timing the drawing duration; the timing starts when the game resource extraction scene begins to be displayed or when the user triggers the drawing area;
  • according to the user's drawing operation within the preset drawing duration, the image drawn by the user in the drawing area is determined; the preset drawing duration can be adaptively determined according to the game design.
  • timing information can also be displayed in the game resource extraction scene (the specific display position can be determined according to the interface layout) to remind the user of the currently available drawing time. This not only improves the user's game experience, but also ensures that the user's effective drawn image is determined within a limited time; it further controls the time consumed by resource extraction, avoiding long extraction times caused by overly long drawing and thus avoiding reduced extraction efficiency.
  • the method provided by the embodiment of the present disclosure further includes:
  • the user virtual character corresponding to the user can be the virtual character played or controlled by the user in the game, and the specific virtual character can be an interactive character designed for the resource extraction scenario in the game;
  • the method further includes:
  • the expressions and/or actions of the user avatar or the specific avatar are determined, and the user avatar or the specific avatar is displayed based on the determined expressions and/or actions.
  • the electronic device can determine the expressions and/or actions to be displayed by the user virtual character or the specific virtual character according to preset interactive feedback logic; for example, after the user finishes drawing the image, the expressions and/or actions to be displayed are determined according to how the image matches each game resource in the game resource library.
  • if a target game resource matching the image drawn by the user is found in the game resource library, it can be determined that the expression to be displayed by the user virtual character or the specific virtual character is happiness, and the action to be displayed is likewise an action related to happiness, such as dancing; if no target game resource matching the drawn image exists in the game resource library, it can be determined that the expression to be displayed is disappointment or sadness, and the action to be displayed is likewise an action related to disappointment or sadness.
  • the electronic device can also evaluate the image according to preset image evaluation rules and determine the expressions and/or actions to be displayed by the user virtual character or the specific virtual character according to the evaluation result: if the result is high, the expressions and/or actions are related to happiness; if the result is low, they are related to disappointment or sadness.
  • FIG. 5 is a schematic diagram of a display effect of a user avatar or a specific avatar according to an embodiment of the present disclosure, showing that the expression to be displayed by the user avatar or the specific avatar is happy, and the action to be displayed is also related to happiness
  • FIG. 6 is a schematic diagram of another display effect of a user avatar or a specific avatar according to an embodiment of the present disclosure, showing that the expression to be displayed by the user avatar or the specific avatar is disappointment, and the action to be displayed is also related to disappointment
  • FIG. 5 and FIG. 6 are only used as an example, and should not be construed as a specific limitation to the embodiments of the present disclosure.
  • the fun of resource extraction can be improved, the interactivity of the game can be increased, and the user's game experience can be improved.
  • in the embodiments of the present disclosure, the image drawn by the user can be determined based on a combination of template elements and/or the user's drawing operation, which enriches the ways of determining the drawn image. Then, based on the image drawn by the user, the target game resource is matched as the user's resource extraction result, which solves the problems in existing solutions that the user's probability of drawing a specific game resource is low and that resource extraction is implemented in a single way; this improves the probability of users drawing specific game resources, enriches the extraction implementations of resource extraction games, improves the fun and interactivity of the game, and improves the user's game experience.
  • FIG. 7 is a schematic structural diagram of an interaction device provided by an embodiment of the present disclosure.
  • the apparatus can be implemented by software and/or hardware, and can be integrated on any electronic device with computing capabilities, such as terminals including smart phones, tablet computers, notebook computers, and desktop computers.
  • the interaction apparatus 500 may include a resource extraction scene display module 501, a drawing image receiving module 502, and a target game resource acquisition module 503, wherein:
  • the resource extraction scene display module 501 is used to display the game resource extraction scene in response to the user's game resource extraction request; the game resource extraction scene includes a drawing area;
  • a drawing image receiving module 502 configured to receive an image drawn by the user in the drawing area
  • the target game resource obtaining module 503 is configured to obtain the target game resource matching the image based on the drawn image, as the user's game resource extraction result, and display the target game resource.
  • the game resource extraction scene further includes template elements;
  • the drawing image receiving module 502 includes:
  • a target template element determining unit configured to determine at least one target template element selected by the user in response to the user's selection operation on the template element
  • a first drawing image determining unit configured to determine a drawn image based on a combination of at least one target template element and/or a drawing operation performed by a user in the drawing area.
  • the drawing image receiving module 502 includes:
  • a sliding trajectory acquisition unit used to monitor the user's sliding operation in the drawing area, and obtain the sliding trajectory of the sliding operation
  • the second drawing image determining unit is configured to determine the image drawn by the user in the drawing area based on the sliding track, in response to the touch object used to generate the sliding operation leaving the drawing area, or in response to no new sliding operation being detected within a preset time interval after the sliding operation terminates.
  • the drawing image receiving module 502 includes:
  • the timing unit is used to time the drawing duration; the timing starts when the game resource extraction scene begins to be displayed or when the user triggers the drawing area;
  • the third drawing image determining unit is configured to determine the image drawn by the user in the drawing area according to the drawing operation of the user within the preset time.
  • the target game resource acquisition module 503 includes:
  • a similarity calculation unit used for calculating the similarity between the image and each game resource in the to-be-extracted game resource library based on the drawn image
  • a candidate resource determination unit configured to determine a game resource corresponding to a similarity exceeding a preset threshold as a candidate resource
  • the first target game resource determination unit is configured to determine a target game resource from at least one candidate resource based on a preset rule.
  • the target game resource acquisition module 503 further includes:
  • the probability increasing unit is used to increase the initial drawing probability of the candidate resource, so as to determine the target game resource from at least one candidate resource after the probability increase based on a preset rule.
  • the target game resource acquisition module 503 includes:
  • a similarity calculation unit used for calculating the similarity between the image and each game resource in the to-be-extracted game resource library based on the drawn image
  • the second target game resource determination unit is configured to determine the game resource corresponding to the maximum similarity as the target game resource.
  • the apparatus 500 provided in this embodiment of the present disclosure further includes:
  • the guide animation display module is used to display the game resource extraction guide animation of the user avatar or the specific avatar corresponding to the user;
  • the virtual character display module is used for determining the expression and/or action of the user's virtual character or a specific virtual character based on the drawn image, and displaying the user's virtual character or the specific virtual character based on the determined expression and/or action.
  • the drawn image includes an image associated with a virtual character or an image associated with a virtual prop;
  • the target game resource includes a card associated with the virtual character or virtual prop.
  • the image associated with the virtual character includes at least one of a character image, a face portrait, an expression portrait, an action portrait and an accessory portrait of the virtual character;
  • the card associated with the virtual character includes the image display card of the virtual character
  • Cards associated with virtual props include style display cards for virtual props.
  • the interaction apparatus provided by the embodiments of the present disclosure can execute any of the interaction methods provided by the embodiments of the present disclosure, and has functional modules and beneficial effects corresponding to the execution methods.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, which is used to exemplarily describe an electronic device that implements the interaction method provided by the embodiment of the present disclosure.
  • the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as stationary terminals such as digital TVs, desktop computers, smart home devices, wearable electronic devices, and servers.
  • the electronic device shown in FIG. 8 is only an example, and should not impose any limitation on the functions and scope of application of the embodiments of the present disclosure.
  • electronic device 600 includes one or more processors 601 and memory 602 .
  • Processor 601 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in electronic device 600 to perform desired functions.
  • Memory 602 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • Volatile memory may include, for example, random access memory (RAM) and/or cache memory, among others.
  • Non-volatile memory may include, for example, read only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 601 may execute the program instructions to implement the interaction method provided by the embodiments of the present disclosure, and may also implement other desired functions.
  • Various contents such as input signals, signal components, noise components, etc. may also be stored in the computer-readable storage medium.
  • the interaction method provided by the embodiments of the present disclosure may include: in response to a user's game resource extraction request, displaying a game resource extraction scene, the game resource extraction scene including a drawing area; receiving an image drawn by the user in the drawing area; and, based on the drawn image, obtaining a target game resource that matches the image as the user's game resource extraction result, and displaying the target game resource.
  • the electronic device 600 may also perform other optional implementations provided by the method embodiments of the present disclosure.
  • the electronic device 600 may also include an input device 603 and an output device 604 interconnected by a bus system and/or other form of connection mechanism (not shown).
  • the input device 603 may also include, for example, a keyboard, a mouse, and the like.
  • the output device 604 can output various information to the outside, including the determined distance information, direction information, and the like.
  • the output device 604 may include, for example, displays, speakers, printers, and communication networks and their connected remote output devices, among others.
  • the electronic device 600 may also include any other suitable components according to the specific application.
  • the embodiments of the present disclosure also provide a computer program product, which includes a computer program or computer program instructions that, when executed by a computing device, enable the computing device to implement any of the interaction methods provided by the embodiments of the present disclosure.
  • the computer program product may write program code for performing the operations of the embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's electronic device, partly on the user's electronic device, as a stand-alone software package, partly on the user's electronic device and partly on a remote electronic device, or entirely on a remote electronic device.
  • embodiments of the present disclosure may further provide a computer-readable storage medium on which computer program instructions are stored; the computer program instructions, when executed by a computing device, cause the computing device to implement any of the interaction methods provided by the embodiments of the present disclosure.
  • the interaction method provided by the embodiments of the present disclosure may include: in response to a user's game resource extraction request, displaying a game resource extraction scene, the game resource extraction scene including a drawing area; receiving an image drawn by the user in the drawing area; and, based on the drawn image, obtaining a target game resource that matches the image as the user's game resource extraction result, and displaying the target game resource.
  • the computing device can also implement other optional implementations provided by the method embodiments of the present disclosure.
  • a computer-readable storage medium can employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may include, for example, but not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure relate to an interaction method and apparatus, an electronic device, and a storage medium. The method includes: in response to a user's game resource extraction request, displaying a game resource extraction scene, the game resource extraction scene including a drawing area; receiving an image drawn by the user in the drawing area; and, based on the drawn image, obtaining a target game resource that matches the image as the user's game resource extraction result, and displaying the target game resource. Embodiments of the present disclosure can increase the probability that a user draws a specific game resource, enrich the extraction mechanisms of resource-extraction games, and improve the fun and interactivity of such games.

Description

Interaction method and apparatus, electronic device, and storage medium
Priority information
This application claims priority to Chinese Patent Application No. 202110152534.6, entitled "Interaction method and apparatus, electronic device, and storage medium" and filed on February 3, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of computer technology, and in particular to an interaction method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer technology, more and more applications provide virtual resources usable within the application, and users unlock these virtual resources for use in the application by means such as card drawing or roulette.
Card drawing is a common way of extracting virtual resources in card games, and different game resources correspond to different draw probabilities. In existing card-drawing schemes, a user's probability of drawing certain game resources is usually fixed, and the probability of drawing a high-level game resource is usually low, which degrades the user's gaming experience; moreover, the single card-drawing mechanism does little to increase game interactivity.
Summary
To solve, or at least partially solve, the above technical problems, embodiments of the present disclosure provide an interaction method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an interaction method, including:
in response to a user's game resource extraction request, displaying a game resource extraction scene, the game resource extraction scene including a drawing area;
receiving an image drawn by the user in the drawing area; and
based on the drawn image, obtaining a target game resource that matches the image as the user's game resource extraction result, and displaying the target game resource.
In a second aspect, an embodiment of the present disclosure further provides an interaction apparatus, including:
a resource extraction scene display module, configured to display a game resource extraction scene in response to a user's game resource extraction request, the game resource extraction scene including a drawing area;
a drawn image receiving module, configured to receive the image drawn by the user in the drawing area; and
a target game resource obtaining module, configured to obtain, based on the drawn image, a target game resource that matches the image as the user's game resource extraction result, and to display the target game resource.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the electronic device to implement any of the interaction methods provided by the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program that, when executed by a computing device, causes the computing device to implement any of the interaction methods provided by the embodiments of the present disclosure.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have at least the following advantages:
In the embodiments of the present disclosure, for resource-extraction games, an image drawn by the user in a drawing area is obtained and a target game resource is matched based on the image as the user's resource extraction result. This solves the problems in existing schemes that the probability of drawing a specific game resource is low and that the extraction mechanism is monotonous; it increases the probability that the user draws a specific game resource, enriches the extraction mechanisms of resource-extraction games, and improves the fun and interactivity of the game. For example, for high-level resources in some games, the probability of a user drawing them is usually extremely low; with the technical solutions provided by the embodiments of the present disclosure, the user can draw an image related to a high-level resource and the high-level resource is matched based on that image, thereby improving the efficiency with which the user obtains high-level game resources.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of an interaction method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a game resource extraction scene according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another interaction method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another game resource extraction scene according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a display effect of a user virtual character or a specific virtual character according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another display effect of a user virtual character or a specific virtual character according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description
To make the above objects, features, and advantages of the present disclosure clearer, the solutions of the present disclosure are further described below. It should be noted that, where no conflict arises, the embodiments of the present disclosure and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in ways other than those described herein; obviously, the embodiments in this specification are only some, not all, of the embodiments of the present disclosure.
FIG. 1 is a flowchart of an interaction method according to an embodiment of the present disclosure. The method is applicable to implementing resource extraction in resource-extraction games and may be performed by an interaction apparatus. The apparatus may be implemented in software and/or hardware and may be integrated into any electronic device with computing capability, such as a smartphone, tablet, laptop, or desktop terminal.
As shown in FIG. 1, the interaction method provided by this embodiment of the present disclosure may include:
S101: In response to a user's game resource extraction request, display a game resource extraction scene; the game resource extraction scene includes a drawing area.
Exemplarily, during gameplay the user may trigger a game resource extraction request by touching a resource extraction control or a resource extraction prop icon on the game interface. In response to the request, the electronic device displays a game resource extraction scene, which summarizes the interface presentation when the user performs resource extraction. The scene may include, but is not limited to, a drawing area, which serves as the response region in which the user draws an image. The specific layout of the game resource extraction scene may be designed according to game development requirements and is not specifically limited in the embodiments of the present disclosure.
FIG. 2 is a schematic diagram of a game resource extraction scene according to an embodiment of the present disclosure, provided for illustration and not to be construed as a specific limitation on the embodiments. As shown in FIG. 2, the drawing area may be displayed in the middle of the game resource extraction scene. After completing a drawing operation in the drawing area, the user may submit the drawn image by touching a "Submit" control in the drawing area (not shown in FIG. 2). The drawing operations available to the user may include, but are not limited to, drawing shapes and adding colors, and the game resource extraction scene provides the user with various selectable drawing tools, such as brushes of different line widths and an eraser.
S102: Receive the image drawn by the user in the drawing area.
Exemplarily, the electronic device may, in response to the user's drawn-image submission operation, determine the image drawn by the user in the drawing area, and then determine a target game resource matching the image in a game resource library.
In an optional implementation, receiving the image drawn by the user in the drawing area may include:
monitoring the user's sliding operation in the drawing area and acquiring the sliding trajectory of the sliding operation; and
in response to the touch object producing the sliding operation leaving the drawing area, or no new sliding operation being detected within a preset time interval after the sliding operation stops, determining the image drawn by the user in the drawing area based on the sliding trajectory.
The user may slide in the drawing area with a fingertip or any other available touch tool (such as a stylus) to produce a sliding trajectory. If the touch object (i.e., the user's fingertip or the touch tool) leaves the drawing area, for example when the touch object moves out of the drawing area or leaves the game interface, the image drawn by the user may be determined based on the trajectory already produced. Alternatively, if no new sliding operation is detected within a preset time interval (e.g., x seconds, settable according to actual requirements) after the user's sliding operation stops, the sliding operation is considered finished, and the drawn image is determined based on the trajectory already produced. In the embodiments of the present disclosure, the sliding trajectory may include trajectories used to determine the outline of a figure as well as trajectories used to fill colors; that is, when determining the user's drawn image based on the sliding trajectory, the color of the drawn image may also be determined based on the user's color-filling operations, so as to determine the final effect of the image drawn by the user.
By monitoring the user's sliding operations, the user's valid drawn image can be determined promptly. This is especially helpful when the user is only granted "one-stroke" drawing permission (i.e., the user must complete the drawing in a single uninterrupted stroke), as it improves the efficiency of determining the user's valid drawn image and thus the efficiency of resource extraction.
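The stroke-completion logic described above (finalizing the drawn image when the touch object leaves the drawing area, or when no new sliding operation arrives within the preset interval) can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; the event names, the `IDLE_TIMEOUT` value, and the `DrawingSession` structure are assumptions.

```python
from dataclasses import dataclass, field

IDLE_TIMEOUT = 3.0  # hypothetical "x seconds" preset interval

@dataclass
class DrawingSession:
    strokes: list = field(default_factory=list)   # completed stroke trajectories
    current: list = field(default_factory=list)   # points of the in-progress stroke
    last_event_time: float = 0.0
    finished: bool = False

    def on_touch_move(self, x, y, t):
        # Accumulate the sliding trajectory while the touch object stays in the area.
        self.current.append((x, y))
        self.last_event_time = t

    def on_touch_up(self, t):
        # Stroke paused; drawing may resume unless the idle timeout elapses.
        if self.current:
            self.strokes.append(self.current)
            self.current = []
        self.last_event_time = t

    def on_leave_area(self, t):
        # Touch object left the drawing area: the drawn image is final.
        self.on_touch_up(t)
        self.finished = True

    def on_tick(self, t):
        # No new sliding operation within the preset interval: finalize.
        if not self.finished and t - self.last_event_time >= IDLE_TIMEOUT:
            self.finished = True

    def drawn_image(self):
        # The image is determined from all accumulated trajectories.
        return self.strokes if self.finished else None
```

A host loop would feed touch events and periodic `on_tick` calls into the session and read `drawn_image()` once it becomes non-empty.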
S103: Based on the drawn image, obtain a target game resource matching the image as the user's game resource extraction result, and display the target game resource.
Exemplarily, the similarity between the drawn image and each game resource in a game resource library may be computed, and the target game resource determined according to that similarity; for example, a game resource whose similarity exceeds a preset threshold (settable as appropriate) may be determined as the target game resource. The similarity computation may be implemented with any available image similarity method, which is not specifically limited in the embodiments of the present disclosure, for example similarity computation based on image feature vectors, or based on image edge detection and comparison.
Optionally, in the embodiments of the present disclosure, the image drawn by the user in the drawing area includes an image associated with a virtual character or an image associated with a virtual prop. That is, the embodiments of the present disclosure are applicable both to extracting virtual characters in a game and to extracting virtual props. A virtual character may be any character in the game, depending on the game type, and a virtual prop may be any prop related to a virtual character in the game, such as weapons, armor, or ornaments. The embodiments of the present disclosure allow the user to draw any image related to a virtual character or virtual prop in order to extract game resources.
Further exemplarily, the image associated with the virtual character includes at least one of a character image, a face portrait, an expression portrait, an action portrait, and an accessory (e.g., ornament) portrait of the virtual character. The electronic device may determine the matching virtual character according to the character image, face portrait, expression portrait, action portrait, or accessory portrait drawn by the user. For example, if the user draws the expression portrait of a certain virtual character, the electronic device may first determine a matching target expression portrait through similarity matching in the game resource library, and then determine the target virtual character, as the user's extraction object, based on the correspondence between the target expression portrait and the target virtual character.
The target game resource includes a card associated with the virtual character or the virtual prop; for example, cards associated with a virtual character include, but are not limited to, image display cards of the virtual character, and cards associated with a virtual prop include, but are not limited to, style display cards of the virtual prop. Specifically, an image display card may also be implemented in the form of image display fragments, and a style display card in the form of style display fragments. In addition, the game may be designed so that multiple collected image display fragments can be redeemed for a complete virtual character, and multiple collected style display fragments can be redeemed for a complete virtual prop.
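The similarity matching described in S103 can be sketched with feature vectors and cosine similarity. This is only one of the metrics the description leaves open (edge-detection comparison is another); the function names, the example threshold, and the dict-based resource library are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two image feature vectors; one of several
    # possible metrics, as the description does not fix the method.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_resource(drawn_vec, resource_library, threshold=0.8):
    # Return the resource whose feature vector is most similar to the
    # drawn image, provided it clears the preset threshold.
    best_name, best_sim = None, 0.0
    for name, vec in resource_library.items():
        sim = cosine_similarity(drawn_vec, vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return (best_name, best_sim) if best_sim > threshold else (None, best_sim)
```

In practice the feature vectors would come from an image encoder; here plain lists stand in for them.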
In an optional implementation, obtaining the target game resource matching the image based on the drawn image includes:
computing, based on the drawn image, the similarity between the image and each game resource in a to-be-extracted game resource library (any available similarity computation method may be used; the embodiments of the present disclosure impose no specific limitation);
determining game resources whose similarity exceeds a preset threshold (settable as appropriate) as candidate resources; and
determining the target game resource from at least one candidate resource based on a preset rule.
The preset rule defines how the target game resource is determined from multiple candidate resources. Exemplarily, the preset rule includes determining the candidate resource with the highest similarity as the target game resource; or determining the candidate resource with the highest initial draw probability as the target game resource; or determining the target game resource based on both the candidate resources' similarities and their initial draw probabilities, for example determining a candidate resource whose initial draw probability is greater than a first threshold and whose similarity is greater than a second threshold as the target game resource. The initial draw probability is the draw hit rate preset for each game resource during game development; the higher the initial draw probability, the more likely the corresponding game resource is to be drawn by the user. Each threshold may be set as appropriate.
Optionally, after determining game resources whose similarity exceeds the preset threshold as candidate resources, the method provided by the embodiments of the present disclosure further includes:
boosting the initial draw probability of the candidate resources, so as to determine the target game resource from the at least one probability-boosted candidate resource based on the preset rule. That is, after the candidate resources are determined through similarity computation, the initial draw probability of each candidate resource may be adjusted, for example by boosting each candidate's initial draw probability by a preset proportion (e.g., increasing it by xx% on top of the initial draw probability), or by determining a boost proportion according to each candidate's similarity to the user's drawn image and then computing each candidate's final draw probability from that boost proportion; for example, the higher a candidate's similarity, the larger its boost proportion. Further, based on the probability-boosted candidates, the candidate with the highest draw probability may be determined as the target game resource; of course, the target game resource may also be determined based on both the candidates' similarities and their boosted draw probabilities.
By boosting the draw probability of the candidate resources before determining the target game resource, the probability that the user draws a specific game resource is increased.
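The candidate-filtering and probability-boosting steps above can be sketched as follows. The similarity-proportional boost formula and the thresholds are illustrative assumptions; the description allows other preset rules (e.g., a fixed percentage boost, or combining similarity with the boosted probability).

```python
def pick_target(similarities, base_probs, sim_threshold=0.6, boost_factor=0.5):
    # similarities / base_probs: dicts keyed by resource name.
    # 1) Keep candidates whose similarity clears the preset threshold.
    candidates = {r: s for r, s in similarities.items() if s > sim_threshold}
    if not candidates:
        return None
    # 2) Boost each candidate's initial draw probability in proportion
    #    to its similarity (one possible rule; the exact boost formula
    #    is an assumption for illustration).
    boosted = {r: base_probs[r] * (1.0 + boost_factor * s)
               for r, s in candidates.items()}
    # 3) Apply the preset rule: here, pick the candidate with the
    #    highest boosted draw probability.
    return max(boosted, key=boosted.get)
```

A drawn image that closely resembles a rare resource thereby raises that resource's effective draw probability relative to its fixed initial value.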
In another optional implementation, obtaining the target game resource matching the image based on the drawn image includes:
computing, based on the drawn image, the similarity between the image and each game resource in the to-be-extracted game resource library; and
determining the game resource with the highest similarity as the target game resource.
That is, after the similarities between the user's drawn image and the game resources in the library are determined, the game resource with the highest similarity may be determined directly as the target game resource, improving the efficiency with which the user extracts a specific game resource.
In the embodiments of the present disclosure, for resource-extraction games, an image drawn by the user in a drawing area is obtained and a target game resource is matched based on the image as the user's resource extraction result. This solves the problems in existing schemes that the probability of drawing a specific game resource is low and that the extraction mechanism is monotonous; it increases the probability that the user draws a specific game resource, enriches the extraction mechanisms of resource-extraction games, and improves the fun and interactivity of the game. For example, for high-level resources in some games, the probability of a user drawing them is usually extremely low; with the technical solutions provided by the embodiments of the present disclosure, the user can draw an image related to a high-level resource and the high-level resource is matched based on that image, thereby improving the efficiency with which the user obtains high-level game resources.
FIG. 3 is a flowchart of another interaction method according to an embodiment of the present disclosure, which further optimizes and extends the above technical solution and may be combined with each of the optional implementations above. In this embodiment, in addition to the drawing area, the game resource extraction scene may further include template elements, which may serve as basic building blocks of the user's drawn image. Template elements may include character image materials, expression materials, face materials, action materials, accessory materials, virtual prop materials, and the like related to game resources. Providing template elements makes drawing more convenient for the user and helps improve drawing efficiency.
As shown in FIG. 3, the interaction method provided by this embodiment of the present disclosure may include:
S301: In response to a user's game resource extraction request, display a game resource extraction scene; the game resource extraction scene includes a drawing area and template elements.
FIG. 4 is a schematic diagram of another game resource extraction scene according to an embodiment of the present disclosure, provided for illustration and not to be construed as a specific limitation; the specific display positions of the drawing area and the template elements in the scene may be flexibly set according to the game interface layout. In the example of FIG. 4, the template elements are displayed in the right-hand region of the game resource extraction scene. The template elements shown in FIG. 4 serve only as examples; the specific template elements may be preset as required.
S302: In response to the user's selection operation on the template elements, determine at least one target template element selected by the user.
Exemplarily, the user may tap a template element, and the electronic device determines the selected target template element according to the tap operation and displays it in the drawing area; the user may also drag a template element into the drawing area, in which case that template element is the target template element selected by the user.
S303: Determine the drawn image based on a combination of the at least one target template element and/or the user's drawing operation in the drawing area.
In the drawing area, the user may combine target elements, for example by moving and splicing them, to obtain a target spliced image, which may directly serve as the user's drawn image. The user's drawing operation may also be combined again with the target spliced image to obtain the final drawn image; the drawing operation may draw a new image independent of the target spliced image, or may redraw (i.e., edit) the target spliced image, for example through line editing or color editing. Of course, the drawn image may also be determined directly from the user's drawing operation.
S304: Based on the drawn image, obtain a target game resource matching the image as the user's game resource extraction result, and display the target game resource.
In the embodiments of the present disclosure, optionally, receiving the image drawn by the user in the drawing area includes:
timing the drawing, where the timing starts when the game resource extraction scene begins to be displayed or when the user first touches the drawing area; and
determining the image drawn by the user in the drawing area according to the user's drawing operations within a preset time, where the preset drawing time may be determined as appropriate according to the game design.
Continuing with FIG. 4, the game resource extraction scene may also display time information (its display position may depend on the interface layout) to inform the user of the drawing time currently available. This not only improves the user's gaming experience and allows the user's valid drawn image to be determined within a limited time, but also bounds the time consumed by resource extraction, avoiding long extraction times caused by long drawing times and thus preventing a drop in extraction efficiency.
On the basis of the above technical solutions, optionally, before displaying the game resource extraction scene in response to the user's game resource extraction request, the method provided by the embodiments of the present disclosure further includes:
displaying a game resource extraction guide animation of a user virtual character corresponding to the user or of a specific virtual character, where the user virtual character corresponding to the user may be the virtual character the user plays or controls in the game, and the specific virtual character may be an interactive character designed in the game for the resource extraction scenario;
correspondingly, after receiving the image drawn by the user in the drawing area, the method further includes:
determining, based on the drawn image, the expression and/or action of the user virtual character or the specific virtual character, and displaying the user virtual character or the specific virtual character based on the determined expression and/or action.
Exemplarily, after receiving the user's drawn image, the electronic device may determine the expression and/or action to be displayed by the user virtual character or the specific virtual character according to preset interaction feedback logic; or, after receiving the drawn image, the electronic device may determine the expression and/or action to be displayed according to how the image matches the game resources in the game resource library. For example, if a target game resource matching the user's drawn image is found in the game resource library, the expression to be displayed may be determined as happy, and the action to be displayed as one associated with happiness, such as dancing with joy; if no matching target game resource is found in the game resource library, the expression to be displayed may be determined as disappointed or sad, and the action to be displayed as one associated with disappointment or sadness. In addition, after receiving the user's drawn image, the electronic device may evaluate the image according to preset image evaluation rules and determine the expression and/or action to be displayed according to the evaluation result: for a high evaluation result, the expression and/or action to be displayed are associated with happiness; for a low one, with disappointment or sadness.
FIG. 5 is a schematic diagram of a display effect of a user virtual character or a specific virtual character according to an embodiment of the present disclosure, showing one display effect when the expression to be displayed is happy and the action to be displayed is associated with happiness.
FIG. 6 is a schematic diagram of another display effect of a user virtual character or a specific virtual character according to an embodiment of the present disclosure, showing one display effect when the expression to be displayed is disappointed and the action to be displayed is associated with disappointment.
It should be understood that FIG. 5 and FIG. 6 are merely examples and should not be construed as specific limitations on the embodiments of the present disclosure.
Displaying a game resource extraction guide animation of the user virtual character or a specific virtual character, and determining the expression and/or action to be displayed based on the user's drawn image, makes resource extraction more fun, increases the game's interactivity, and improves the user's gaming experience.
In the embodiments of the present disclosure, for resource-extraction games, the user's drawn image is first determined based on a combination of at least one target template element selected by the user in the game resource extraction scene and/or the user's drawing operation in the drawing area, enriching the ways in which the user can draw; the target game resource is then matched based on the drawn image as the user's resource extraction result. This solves the problems in existing schemes that the probability of drawing a specific game resource is low and that the extraction mechanism is monotonous, increases the probability that the user draws a specific game resource, enriches the extraction mechanisms of resource-extraction games, improves the fun and interactivity of the game, and improves the user's gaming experience.
FIG. 7 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and may be integrated into any electronic device with computing capability, such as a smartphone, tablet, laptop, or desktop terminal.
As shown in FIG. 7, the interaction apparatus 500 provided by this embodiment of the present disclosure may include a resource extraction scene display module 501, a drawn image receiving module 502, and a target game resource obtaining module 503, wherein:
the resource extraction scene display module 501 is configured to display a game resource extraction scene in response to a user's game resource extraction request, the game resource extraction scene including a drawing area;
the drawn image receiving module 502 is configured to receive the image drawn by the user in the drawing area; and
the target game resource obtaining module 503 is configured to obtain, based on the drawn image, a target game resource matching the image as the user's game resource extraction result, and to display the target game resource.
Optionally, the game resource extraction scene further includes template elements; the drawn image receiving module 502 includes:
a target template element determination unit, configured to determine at least one target template element selected by the user in response to the user's selection operation on the template elements; and
a first drawn image determination unit, configured to determine the drawn image based on a combination of the at least one target template element and/or the user's drawing operation in the drawing area.
Optionally, the drawn image receiving module 502 includes:
a sliding trajectory acquisition unit, configured to monitor the user's sliding operation in the drawing area and acquire the sliding trajectory of the sliding operation; and
a second drawn image determination unit, configured to determine the image drawn by the user in the drawing area based on the sliding trajectory, in response to the touch object producing the sliding operation leaving the drawing area or no new sliding operation being detected within a preset time interval after the sliding operation stops.
Optionally, the drawn image receiving module 502 includes:
a timing unit, configured to time the drawing, where the timing starts when the game resource extraction scene begins to be displayed or when the user first touches the drawing area; and
a third drawn image determination unit, configured to determine the image drawn by the user in the drawing area according to the user's drawing operations within a preset time.
Optionally, the target game resource obtaining module 503 includes:
a similarity computation unit, configured to compute, based on the drawn image, the similarity between the image and each game resource in the to-be-extracted game resource library;
a candidate resource determination unit, configured to determine game resources whose similarity exceeds a preset threshold as candidate resources; and
a first target game resource determination unit, configured to determine the target game resource from at least one candidate resource based on a preset rule.
Optionally, the target game resource obtaining module 503 further includes:
a probability boosting unit, configured to boost the initial draw probability of the candidate resources, so as to determine the target game resource from the at least one probability-boosted candidate resource based on the preset rule.
Optionally, the target game resource obtaining module 503 includes:
a similarity computation unit, configured to compute, based on the drawn image, the similarity between the image and each game resource in the to-be-extracted game resource library; and
a second target game resource determination unit, configured to determine the game resource with the highest similarity as the target game resource.
Optionally, the apparatus 500 provided by this embodiment of the present disclosure further includes:
a guide animation display module, configured to display a game resource extraction guide animation of the user virtual character corresponding to the user or of a specific virtual character; and
a virtual character display module, configured to determine, based on the drawn image, the expression and/or action of the user virtual character or the specific virtual character, and to display the user virtual character or the specific virtual character based on the determined expression and/or action.
Optionally, the drawn image includes an image associated with a virtual character or an image associated with a virtual prop; the target game resource includes a card associated with the virtual character or the virtual prop.
Optionally, the image associated with the virtual character includes at least one of a character image, a face portrait, an expression portrait, an action portrait, and an accessory portrait of the virtual character;
the card associated with the virtual character includes an image display card of the virtual character; and
the card associated with the virtual prop includes a style display card of the virtual prop.
The interaction apparatus provided by the embodiments of the present disclosure can execute any of the interaction methods provided by the embodiments of the present disclosure and has the functional modules and beneficial effects corresponding to the executed methods. For details not described exhaustively in the apparatus embodiments, reference may be made to the description in any method embodiment of the present disclosure.
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, used to exemplarily describe an electronic device implementing the interaction method provided by the embodiments of the present disclosure. Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as stationary terminals such as digital TVs, desktop computers, smart home devices, wearable electronic devices, and servers. The electronic device shown in FIG. 8 is merely an example and should not impose any limitation on the functions and scope of application of the embodiments of the present disclosure.
As shown in FIG. 8, the electronic device 600 includes one or more processors 601 and a memory 602.
The processor 601 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 600 to perform desired functions.
The memory 602 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory may include, for example, read-only memory (ROM), hard disks, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 601 may execute the program instructions to implement the interaction method provided by the embodiments of the present disclosure as well as other desired functions. Various contents such as input signals, signal components, and noise components may also be stored in the computer-readable storage medium.
The interaction method provided by the embodiments of the present disclosure may include: in response to a user's game resource extraction request, displaying a game resource extraction scene, the game resource extraction scene including a drawing area; receiving an image drawn by the user in the drawing area; and, based on the drawn image, obtaining a target game resource matching the image as the user's game resource extraction result, and displaying the target game resource. It should be understood that the electronic device 600 may also perform other optional implementations provided by the method embodiments of the present disclosure.
In one example, the electronic device 600 may further include an input apparatus 603 and an output apparatus 604, interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input apparatus 603 may include, for example, a keyboard, a mouse, and the like.
The output apparatus 604 may output various information to the outside, including determined distance information and direction information, and may include, for example, displays, speakers, printers, and communication networks together with the remote output devices they connect to.
Of course, for simplicity, FIG. 8 shows only some of the components of the electronic device 600 relevant to the present disclosure, omitting components such as buses and input/output interfaces. In addition, the electronic device 600 may include any other suitable components depending on the specific application.
Besides the above method and device, embodiments of the present disclosure also provide a computer program product, including a computer program or computer program instructions that, when executed by a computing device, cause the computing device to implement any of the interaction methods provided by the embodiments of the present disclosure.
The computer program product may write program code for performing the operations of the embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's electronic device, partly on the user's electronic device, as a stand-alone software package, partly on the user's electronic device and partly on a remote electronic device, or entirely on a remote electronic device.
In addition, embodiments of the present disclosure may also provide a computer-readable storage medium on which computer program instructions are stored; the computer program instructions, when executed by a computing device, cause the computing device to implement any of the interaction methods provided by the embodiments of the present disclosure.
The interaction method provided by the embodiments of the present disclosure may include: in response to a user's game resource extraction request, displaying a game resource extraction scene, the game resource extraction scene including a drawing area; receiving an image drawn by the user in the drawing area; and, based on the drawn image, obtaining a target game resource matching the image as the user's game resource extraction result, and displaying the target game resource. It should be understood that the computer program instructions, when executed by a computing device, may also cause the computing device to implement other optional implementations provided by the method embodiments of the present disclosure.
The computer-readable storage medium may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that relational terms such as "first" and "second" are used herein only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between them. Moreover, the terms "comprise," "include," or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device comprising that element.
The above are only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not limited to the embodiments herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

  1. An interaction method, comprising:
    in response to a user's game resource extraction request, displaying a game resource extraction scene, the game resource extraction scene comprising a drawing area;
    receiving an image drawn by the user in the drawing area; and
    based on the drawn image, obtaining a target game resource that matches the image as the user's game resource extraction result, and displaying the target game resource.
  2. The method according to claim 1, wherein the game resource extraction scene further comprises template elements;
    the receiving the image drawn by the user in the drawing area comprises:
    in response to the user's selection operation on the template elements, determining at least one target template element selected by the user; and
    determining the drawn image based on a combination of the at least one target template element and/or the user's drawing operation in the drawing area.
  3. The method according to claim 1, wherein the receiving the image drawn by the user in the drawing area comprises:
    monitoring the user's sliding operation in the drawing area and acquiring a sliding trajectory of the sliding operation; and
    in response to a touch object producing the sliding operation leaving the drawing area, or no new sliding operation being detected within a preset time interval after the sliding operation stops, determining the image drawn by the user in the drawing area based on the sliding trajectory.
  4. The method according to claim 1, wherein the receiving the image drawn by the user in the drawing area comprises:
    timing the drawing, wherein the timing starts when the game resource extraction scene begins to be displayed or when the user touches the drawing area; and
    determining the image drawn by the user in the drawing area according to the user's drawing operations within a preset time.
  5. The method according to claim 1, wherein the obtaining, based on the drawn image, the target game resource matching the image comprises:
    computing, based on the drawn image, a similarity between the image and each game resource in a to-be-extracted game resource library;
    determining game resources whose similarity exceeds a preset threshold as candidate resources; and
    determining the target game resource from at least one of the candidate resources based on a preset rule.
  6. The method according to claim 5, wherein after the determining game resources whose similarity exceeds the preset threshold as candidate resources, the method further comprises:
    boosting an initial draw probability of the candidate resources, so as to determine the target game resource from the at least one probability-boosted candidate resource based on the preset rule.
  7. The method according to claim 1, wherein the obtaining, based on the drawn image, the target game resource matching the image comprises:
    computing, based on the drawn image, a similarity between the image and each game resource in a to-be-extracted game resource library; and
    determining the game resource with the highest similarity as the target game resource.
  8. The method according to claim 1, wherein before the displaying the game resource extraction scene in response to the user's game resource extraction request, the method further comprises:
    displaying a game resource extraction guide animation of a user virtual character corresponding to the user or of a specific virtual character;
    correspondingly, after the receiving the image drawn by the user in the drawing area, the method further comprises:
    determining, based on the drawn image, an expression and/or action of the user virtual character or the specific virtual character, and displaying the user virtual character or the specific virtual character based on the determined expression and/or action.
  9. The method according to any one of claims 1-8, wherein the drawn image comprises an image associated with a virtual character or an image associated with a virtual prop;
    the target game resource comprises a card associated with the virtual character or the virtual prop.
  10. The method according to claim 9, wherein the image associated with the virtual character comprises at least one of a character image, a face portrait, an expression portrait, an action portrait, and an accessory portrait of the virtual character;
    the card associated with the virtual character comprises an image display card of the virtual character; and
    the card associated with the virtual prop comprises a style display card of the virtual prop.
  11. An interaction apparatus, comprising:
    a resource extraction scene display module, configured to display a game resource extraction scene in response to a user's game resource extraction request, the game resource extraction scene comprising a drawing area;
    a drawn image receiving module, configured to receive an image drawn by the user in the drawing area; and
    a target game resource obtaining module, configured to obtain, based on the drawn image, a target game resource matching the image as the user's game resource extraction result, and to display the target game resource.
  12. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the electronic device to implement the interaction method according to any one of claims 1-10.
  13. A computer-readable storage medium, wherein the storage medium stores a computer program that, when executed by a computing device, causes the computing device to implement the interaction method according to any one of claims 1-10.
PCT/CN2022/071705 2021-02-03 2022-01-13 Interaction method and apparatus, electronic device, and storage medium WO2022166551A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110152534.6 2021-02-03
CN202110152534.6A CN112827171A (zh) Interaction method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022166551A1 true WO2022166551A1 (zh) 2022-08-11

Family

ID=75931851

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071705 WO2022166551A1 (zh) 2021-02-03 2022-01-13 交互方法、装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN112827171A (zh)
WO (1) WO2022166551A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578115A (zh) * 2022-09-21 2023-01-06 支付宝(杭州)信息技术有限公司 Resource drawing processing method and apparatus

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112827171A (zh) * 2021-02-03 2021-05-25 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium
CN113304475B (zh) * 2021-06-25 2023-09-22 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9463389B1 (en) * 2012-10-05 2016-10-11 Zynga Inc. Methods and systems relating to obtaining game asset value
CN109529325A (zh) * 2018-11-27 2019-03-29 杭州勺子网络科技有限公司 Reward issuing method and apparatus, game management server, and readable storage medium
CN110393917A (zh) * 2019-08-26 2019-11-01 网易(杭州)网络有限公司 Card drawing method and apparatus in a game
CN110502181A (zh) * 2019-08-26 2019-11-26 网易(杭州)网络有限公司 Method, apparatus, device, and medium for determining card draw probability in a game
CN112827171A (zh) * 2021-02-03 2021-05-25 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389660A (zh) * 2018-09-28 2019-02-26 百度在线网络技术(北京)有限公司 Image generation method and apparatus
CN111389017B (zh) * 2020-04-14 2023-12-29 网易(杭州)网络有限公司 Interaction control method and apparatus in a game, electronic device, and computer medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9463389B1 (en) * 2012-10-05 2016-10-11 Zynga Inc. Methods and systems relating to obtaining game asset value
CN109529325A (zh) * 2018-11-27 2019-03-29 杭州勺子网络科技有限公司 Reward issuing method and apparatus, game management server, and readable storage medium
CN110393917A (zh) * 2019-08-26 2019-11-01 网易(杭州)网络有限公司 Card drawing method and apparatus in a game
CN110502181A (zh) * 2019-08-26 2019-11-26 网易(杭州)网络有限公司 Method, apparatus, device, and medium for determining card draw probability in a game
CN112827171A (zh) * 2021-02-03 2021-05-25 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578115A (zh) * 2022-09-21 2023-01-06 支付宝(杭州)信息技术有限公司 Resource drawing processing method and apparatus
CN115578115B (zh) * 2022-09-21 2023-09-08 支付宝(杭州)信息技术有限公司 Resource drawing processing method and apparatus

Also Published As

Publication number Publication date
CN112827171A (zh) 2021-05-25

Similar Documents

Publication Publication Date Title
WO2022166551A1 (zh) Interaction method and apparatus, electronic device, and storage medium
JP7078808B2 (ja) Management of real-time handwriting recognition
US10599393B2 (en) Multimodal input system
CN109525885B (zh) 信息处理方法、装置、电子设备及计算机可读存储介质
US9542949B2 (en) Satisfying specified intent(s) based on multimodal request(s)
CN110090444B (zh) 游戏中行为记录创建方法、装置、存储介质及电子设备
US11734899B2 (en) Headset-based interface and menu system
CN108874136B (zh) 动态图像生成方法、装置、终端和存储介质
US10386931B2 (en) Toggling between presentation and non-presentation of representations of input
US20220091864A1 (en) Page guiding methods, apparatuses, and electronic devices
CN110377220B (zh) 一种指令响应方法、装置、存储介质及电子设备
CN109388309B (zh) 菜单的显示方法、装置、终端及存储介质
US11209975B2 (en) Enhanced canvas environments
CN105700727A (zh) 与透明层以下的应用层的交互方法
CN113282214A (zh) 笔画渲染方法、装置、存储介质以及终端
CN110215686B (zh) 游戏场景中的显示控制方法及装置、存储介质及电子设备
CN111481923A (zh) 摇杆显示方法及装置、计算机存储介质、电子设备
CN108170338A (zh) Information processing method and apparatus, electronic device, and storage medium
US20150347364A1 (en) Highlighting input area based on user input
US20240029349A1 (en) Method, apparatus, device and storage medium for interacting with a virtual object
US20240017172A1 (en) Method and apparatus for performing an action in a virtual environment
KR20210023434A (ko) Display apparatus and control method thereof
Didehkhorshid et al. Text input in virtual reality using a tracked drawing tablet
US10592104B1 (en) Artificial reality trackpad-based keyboard
KR20150093045A (ko) Sketch retrieval system, user device, service providing device, service method thereof, and recording medium storing a computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22748830

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22748830

Country of ref document: EP

Kind code of ref document: A1