CN116430985A - Interaction method and related device - Google Patents

Interaction method and related device

Info

Publication number
CN116430985A
CN116430985A
Authority
CN
China
Prior art keywords
resource
image
matching result
somatosensory
interactive content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210005745.1A
Other languages
Chinese (zh)
Inventor
杨伟俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210005745.1A priority Critical patent/CN116430985A/en
Publication of CN116430985A publication Critical patent/CN116430985A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiment of the application discloses an interaction method and a related device. When interactive content comprising a plurality of resources is played, a first resource is played first; when the first resource finishes playing, a first image corresponding to the first resource is displayed, where the first image is used to instruct an object to make a corresponding action. A second image of the object is then collected, a matching result of the first image and the second image is acquired, and a second resource included in the interactive content is played according to the matching result. In this way, after the first resource finishes playing, interaction is realized by the object making the action indicated by the first image, and the second resource is played according to the outcome of that interaction, which provides more playability, opens up more interactive scenarios, increases the object's degree of participation, and improves the user's experience.

Description

Interaction method and related device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an interaction method and a related device.
Background
In order to attract more traffic, major platforms provide modes of resource interaction with users, such as interactive video, so that users can participate and continuously interact while watching, intervene in the story being played, and make selections at key plot nodes that influence the plot or the course of the game.
An interactive video is essentially a multi-branch video: one video comprises a plurality of sub-videos, and after the current sub-video finishes playing, the next sub-video to play is determined according to the user's selection. In the related art, the jump between sub-videos is generally implemented through a conventional user interface (UI) interaction. For example, after the current sub-video finishes playing, a plurality of options are presented on the page so that the user completes the selection by clicking with a tool such as a mouse.
However, this method offers little playability; the user's degree of participation is low and the experience is poor.
Disclosure of Invention
In order to solve the technical problems, the application provides an interaction method and a related device for improving the experience of a user.
The embodiment of the application discloses the following technical scheme:
in a first aspect, an embodiment of the present application provides an interaction method, where the method includes:
playing a first resource of a plurality of resources included in the interactive content;
in response to the first resource finishing playing, displaying a first image corresponding to the first resource, where the first image is used to instruct an object to make a corresponding action;
acquiring a second image of the object;
obtaining a matching result of the first image and the second image;
and playing a second resource according to the matching result, wherein the second resource is one resource in the plurality of resources.
In a second aspect, an embodiment of the present application provides an interaction method, where the method includes:
acquiring an authentication request of a terminal device, wherein the authentication request comprises a resource identifier of a first resource and a second image of an object, and the first resource is one of a plurality of resources included in interactive content;
determining a matching result of the second image and a first image, wherein the first image is used for indicating the object to make a corresponding action;
and sending the matching result to the terminal equipment so that the terminal equipment plays a second resource corresponding to the matching result according to the matching result, wherein the second resource is one of a plurality of resources included in the interactive content.
In a third aspect, an embodiment of the present application provides an interaction device, where the interaction device includes: a playing unit, a display unit, a collection unit and an acquisition unit;
the playing unit is used for playing a first resource in a plurality of resources included in the interactive content;
the display unit is used for displaying, in response to the first resource finishing playing, a first image corresponding to the first resource, where the first image is used to instruct an object to make a corresponding action;
the collection unit is used for collecting a second image of the object;
the acquisition unit is used for acquiring a matching result of the first image and the second image;
the playing unit is configured to play a second resource according to the matching result, where the second resource is one resource of the multiple resources.
In a fourth aspect, embodiments of the present application provide an interaction device, where the interaction device includes: the device comprises an acquisition unit, a determination unit and a sending unit;
the acquisition unit is used for acquiring an authentication request of the terminal equipment, wherein the authentication request comprises a resource identifier of a first resource and a second image of an object, and the first resource is one of a plurality of resources included in the interactive content;
the determining unit is used for determining a matching result of the second image and a first image, and the first image is used for indicating the object to make a corresponding action;
the sending unit is configured to send the matching result to the terminal device, so that the terminal device plays a second resource corresponding to the matching result according to the matching result, where the second resource is one of multiple resources included in the interactive content.
In a fifth aspect, an embodiment of the present application provides an interaction system, where the system includes a terminal device and a server:
The terminal device is configured to perform the method described in the first aspect;
the server is configured to perform the method according to the second aspect.
In a sixth aspect, embodiments of the present application provide a computer device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the methods of the first and second aspects according to instructions in the program code.
In a seventh aspect, embodiments of the present application provide a computer readable storage medium storing a computer program for performing the methods of the first and second aspects.
In an eighth aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from a computer-readable storage medium by a processor of a computer device, the computer instructions being executed by the processor to cause the computer device to perform the methods of the first and second aspects.
According to the technical scheme, when interactive content comprising a plurality of resources is played, the first resource is played first; when the first resource finishes playing, a first image corresponding to the first resource is displayed, where the first image is used to instruct the object to make a corresponding action. A second image of the object is then collected, a matching result of the first image and the second image is acquired, and the second resource included in the interactive content is played according to the matching result. In this way, after the first resource finishes playing, interaction is realized by the object making the action indicated by the first image, and the second resource is played according to the outcome of that interaction, which provides more playability, opens up more interactive scenarios, increases the object's degree of participation, and improves the user's experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a video interaction provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an interactive system according to an embodiment of the present disclosure;
fig. 3 is a signaling interaction diagram of an interaction system according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an interaction method according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of an interaction method according to an embodiment of the present application;
fig. 6 is an application scenario schematic diagram of an interaction method provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of an interactive device according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an interactive device according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The terms "first," "second," "third," "fourth" and the like in the description, in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application described herein may, for example, be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
In the process of playing resources such as video, images and text, or music, enabling the user to interact with the resources is of great significance: it can attract more users and improve the user's experience. In one example, the resource is an interactive video of a racing game played with different vehicles, and the interactive video includes a plurality of sub-videos. For convenience of explanation, please refer to fig. 1, which is a schematic diagram of video interaction provided in the embodiments of the present application. As shown in the figure, three controls are displayed on the player interface after the current sub-video finishes playing; different controls correspond to different choices of vehicle, and the user clicks different buttons to continue playing different sub-videos according to the choice made. However, users can only interact through this traditional UI interaction mode, which offers little playability and a poor user experience.
Based on the above, the application provides an interaction method: in the process of interacting with the object, the object is prompted to make corresponding actions according to a first image, and resources continue to be played according to how the actions are completed. This provides more playability for playing the resources, opens up more interactive scenarios, increases the object's degree of participation, and improves the user's experience.
The method is applied to the interactive system shown in fig. 2; as shown in the figure, the interactive system comprises a server and a terminal device. The server involved in the application may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms. The terminal device may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a palm computer, a personal computer, a smart television, a smart watch, a vehicle-mounted device, a wearable device, and the like. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein. The number of servers and terminal devices is not limited either.
In combination with the above description, the interactive system provided by the application will be described below, where the interactive system includes a terminal device and a server, where the terminal device mainly performs functions of playing interactive content, displaying a first image or collecting a second image, and the server mainly performs functions of determining a matching result of the first image and the second image, so that a matching algorithm and the like can be updated in time.
As a possible implementation manner, the functions implemented by the server may also be implemented by the terminal device, which saves network requests and ensures that the interaction can still proceed normally when the server is abnormal. However, when the matching algorithm requires a large amount of computation, it may consume many resources and exceed what the terminal device alone can handle. As a possible implementation manner, a simplified version of the matching algorithm may therefore be built into the terminal device and used as a degraded fallback when server-side identification fails or the server is abnormal.
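As an illustrative sketch (not taken from the application itself), the degraded fallback amounts to: try the server-side matching algorithm first, and fall back to the simplified on-device matcher when the server call fails. `server_match` and `local_match` are hypothetical placeholder callables.

```python
def match_with_fallback(server_match, local_match, first_image_id, second_image):
    """Prefer the server's full matching algorithm; fall back to a simplified
    on-device matcher when the server is abnormal or identification fails.
    `server_match` and `local_match` are hypothetical callables returning a
    boolean matching result."""
    try:
        return server_match(first_image_id, second_image)
    except Exception:
        # Server unreachable or failed: degrade to the built-in simplified matcher.
        return local_match(first_image_id, second_image)
```

A production system would likely distinguish network errors from a definitive "no match" answer; the sketch treats any exception as a reason to degrade.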
For convenience of explanation, the following details will be described in terms of a scheme in which a terminal device interacts with a server. Referring to fig. 3, the diagram is a signaling interaction diagram of an interaction system provided in an embodiment of the present application.
S301: the terminal equipment plays a first resource in a plurality of resources included in the interactive content.
In one or more embodiments, an interaction device deployed at the terminal device plays a first resource of the plurality of resources included in the interactive content. The resources include, but are not limited to, video, audio, or images and text. The interactive content includes a plurality of resources for interaction with the object; for example, six 10-second sub-videos (resources) constitute a 1-minute video (interactive content). The object is the target of the interaction and may be, for example, a user, a thing, or the like.
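The structure described above can be illustrated with a minimal sketch; the field names (`content_id`, `resource_id`, `duration_s`) are assumptions for illustration, not terms from the application:

```python
# Interactive content is an ordered collection of resources (sub-videos here).
interactive_content = {
    "content_id": "demo-content",
    "resources": [
        {"resource_id": f"sub{i}", "duration_s": 10} for i in range(1, 7)
    ],
}

# Six 10-second sub-videos together form one 1-minute piece of interactive content.
total_seconds = sum(r["duration_s"] for r in interactive_content["resources"])
```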
S302: and the terminal equipment responds to the first resource to finish playing and displays a first image corresponding to the first resource.
In one or more embodiments, the interactive content includes at least a first resource and a second resource. After the first resource finishes playing, the interactive content enters the stage of interaction with the object; that is, a first image corresponding to the first resource is displayed, where the first image is used to instruct the object to make a corresponding action. For example, a yoga pose is shown on the first image, so that the object can make the corresponding yoga pose according to the indication of the first image.
It should be noted that the first image may comprise a plurality of sub-images, different sub-images being used to instruct the subject to make different actions, the different actions representing different selections. Thus, the object can make a corresponding selection by imitating the action in one of the sub-images, and further play the corresponding resource according to the selection. For example, when the interactive content is yoga video, a plurality of yoga actions can be displayed on the first image, different yoga actions represent different difficulty levels, so that the yoga level of the object can be determined, and then the yoga video of different levels can be played. Therefore, the selection of the user is enriched through the plurality of sub-images, the participation interest of the user is improved, the immersive experience can be provided for the object, and the experience sense of the user is improved.
It should be noted that, when the object is a user, the first image may instruct the user to perform a corresponding body action using at least two body parts, where the body parts may be main joint parts of the human body such as the top of the head, the facial features, the neck, and the four limbs. In this way, multiple parts of the user's body can participate in the interaction, which improves the user's degree of participation, gives the user a feeling of being personally on the scene, and improves the user's experience.
It should be noted that, when the object is a user, the terminal device used by the user is most often a smart phone. To adapt to the image collection unit of a smart phone (for example, the camera of a smart phone typically only captures objects at a relatively close distance), the first image may instruct the user to make a corresponding gesture action. When the second image of the object is subsequently collected, an image containing only the user's gesture is easier to capture than an image containing the user's whole-body action, which improves the user's experience.
S303: the terminal device acquires a second image of the object.
In one or more embodiments, after the first image is displayed, the object may perform a corresponding action according to the indication of the first image, at which time the interaction device may call an image acquisition unit, such as a camera, to acquire a second image of the object.
S304: the server receives an authentication request of the terminal device.
The terminal device includes the resource identifier of the first resource and the second image in the authentication request, and sends the authentication request to the server so that the server can determine the matching result of the first image and the second image. It can be understood that the first resource played by the terminal device is obtained from the server, and the first image corresponding to the first resource is obtained at the same time as the first resource, so the server can determine the first image according to the resource identifier of the first resource.
It should be noted that, the authentication request may further include a content identifier of the interactive content to which the first resource belongs, so that when the server provides authentication services for multiple terminal devices at the same time, a first image corresponding to each authentication request can be accurately determined.
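A possible shape of such an authentication request is sketched below; the field names are illustrative assumptions only, not terms from the application:

```python
from dataclasses import dataclass

@dataclass
class AuthRequest:
    """Hypothetical payload for S304: the terminal device sends the resource
    identifier of the first resource, the collected second image, and
    optionally the content identifier of the interactive content."""
    resource_id: str       # identifies the first resource (and thus the first image)
    image: bytes           # the second image collected from the object
    content_id: str = ""   # optional: disambiguates concurrent interactive contents
```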
S305: the server determines a matching result of the second image and the first image.
The method for determining the matching result of the first image and the second image, such as the coordinate Euclidean distance method or the cosine similarity method, is not particularly limited in this application and may be chosen by those skilled in the art as needed. The coordinate Euclidean distance method is described below as an example.
When the first image instructs the user to perform a corresponding body action, the body key points of the object in the second image can be identified; the body key points may be, for example, main joint parts of the human body such as the top of the head, the facial features, the neck, and the four limbs. The somatosensory information characteristic value of the second image is extracted according to the body key points, and the matching result of the second image and the first image is determined according to the somatosensory information characteristic value of the second image and the unlocking somatosensory characteristic value corresponding to the first image.
The unlocking motion sensing characteristic value of the first image may be input by an editor of the interactive content or an operator of the resource website, etc. when the interactive content is manufactured, or after the first image is set, the server may identify and obtain the unlocking motion sensing characteristic value by artificial intelligence (Artificial Intelligence, AI) according to the first image, etc., which is not limited in this application.
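As a minimal sketch of the coordinate Euclidean distance method mentioned above (the key-point format, the averaging, and the threshold are all assumptions for illustration, not details from the application):

```python
import math

def euclidean_match(unlock_points, observed_points, threshold=50.0):
    """Both inputs are lists of (x, y) body key-point coordinates in the same
    fixed order (top of head, neck, limbs, ...).  The poses are treated as a
    match when the mean per-point Euclidean distance stays below `threshold`,
    an assumed pixel tolerance."""
    if len(unlock_points) != len(observed_points):
        return False  # different numbers of key points cannot match
    total = sum(math.dist(p, q) for p, q in zip(unlock_points, observed_points))
    return total / len(unlock_points) < threshold

# A pose close to the unlock pose matches; a distant one does not.
unlock_pose = [(100, 40), (100, 80), (60, 120), (140, 120)]
observed_pose = [(102, 43), (98, 79), (63, 118), (141, 124)]
euclidean_match(unlock_pose, observed_pose)  # mean distance is a few pixels, matches
```

A cosine similarity variant would instead compare the angles of limb vectors, which makes the check less sensitive to the user's distance from the camera.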
S306: and the terminal equipment acquires a matching result of the first image and the second image.
The server sends the matching result of the first image and the second image to the terminal device.
S307: and the terminal equipment plays the second resource according to the matching result.
If the matching result obtained by the terminal device indicates that the second image and the first image are successfully matched, the second resource can be obtained according to the resource identifier of the first resource and played. For example, the second resource may be the resource that follows the first resource in the interactive content in chronological order.
If the first image includes a plurality of sub-images, the matching result is refined to a match against a target sub-image in the first image, where the target sub-image is one of the plurality of sub-images included in the first image. In this case, if the matching result obtained by the terminal device indicates that the second image and the target sub-image are successfully matched, then, because the subsequent resources corresponding to different sub-images differ, the second resource can be obtained according to the target sub-image and the resource identifier of the first resource, and then played.
If the matching result obtained by the terminal device is that the second image and the first image fail to match, S303-S306 may be executed again until a matching result that the matching is successful is obtained, and the second resource is played according to the matching result.
In order to make the technical solution provided by the embodiments of the present application clearer, an interaction method provided by the embodiments of the present application is described below with an example in conjunction with fig. 4. When interactive content comprising a plurality of resources is played through the playing interface of the terminal device, only one resource is played at a time, and while a single resource is playing, the playing interface can display a traditional playing progress bar, a pause/play button, and the like. For convenience of explanation, the interactive content includes a first resource and a second resource, and both are videos. Referring to fig. 4, the playing interface 401 of the terminal device 400 plays the interactive content. The playing interface 401 plays the first resource first; when the playing of the first resource is completed, the playing interface displays a first image instructing the user to make a heart-shaped gesture above the head, then starts the camera, collects a second image of the user, and sends the second image and the resource identifier of the first resource to the server in the form of an authentication request, thereby obtaining a matching result. When the first image and the second image are successfully matched, the second resource is played according to the matching result.
According to the technical scheme, when interactive content comprising a plurality of resources is played, the first resource is played first; when the first resource finishes playing, a first image corresponding to the first resource is displayed, where the first image is used to instruct the object to make a corresponding action. A second image of the object is then collected, a matching result of the first image and the second image is acquired, and the second resource included in the interactive content is played according to the matching result. In this way, after the first resource finishes playing, interaction is realized by the object making the action indicated by the first image, and the second resource is played according to the outcome of that interaction, which provides more playability, opens up more interactive scenarios, increases the object's degree of participation, and improves the user's experience.
As a possible implementation manner, before S301, the terminal device may obtain the interactive content from the server, and since the interactive content includes a plurality of resources, two ways are provided according to a manner in which the terminal device obtains the resources, which will be described below respectively.
First kind: the terminal device only obtains one resource at a time from the server.
After the terminal device acquires a triggering operation of the object (for example, a user clicks on the playing interface of the terminal device to play the interactive content), the terminal device sends the content identifier of the interactive content to the server. The server determines the interactive content corresponding to the content identifier and judges the resource type of the first resource in the interactive content: if the first resource is of the somatosensory locking type, the server sends the first resource to the terminal device; if the first resource is of the non-somatosensory locking type, the server acquires a third resource, takes the third resource as the first resource, and judges the resource type of the first resource again.
For example, the interactive content includes three sub-videos: the resource type of the first sub-video is the non-somatosensory locking type, the resource type of the second sub-video is the non-somatosensory locking type, and the resource type of the third sub-video is the somatosensory locking type. After obtaining the content identifier of the interactive content sent by the terminal device, the server judges that the resource type of the first sub-video is the non-somatosensory locking type and then obtains the second sub-video. After judging that the resource type of the second sub-video is also the non-somatosensory locking type, it obtains the third sub-video; upon judging that the resource type of the third sub-video is the somatosensory locking type, it sends the first, second and third sub-videos to the terminal device as the first resource.
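The server-side selection in this first delivery mode can be sketched as a simple loop; the field names and `type` values are illustrative assumptions:

```python
def resources_until_lock(resources):
    """Walk the interactive content in order, collecting resources, and stop
    after (and including) the first resource of the somatosensory locking
    type; the whole batch is then sent to the terminal device as the first
    resource."""
    batch = []
    for res in resources:
        batch.append(res)
        if res["type"] == "somatosensory_lock":
            break
    return batch

content = [
    {"id": "sub1", "type": "non_lock"},
    {"id": "sub2", "type": "non_lock"},
    {"id": "sub3", "type": "somatosensory_lock"},
]
resources_until_lock(content)  # returns all three sub-videos as one batch
```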
Second kind: the terminal equipment obtains all interactive contents at one time.
After the terminal equipment acquires the triggering operation of the object, acquiring the interactive content according to the content identifier of the interactive content carried by the triggering operation, judging the resource type of the first resource in the interactive content, acquiring the third resource if the first resource is of a non-somatosensory locking type, taking the third resource as the first resource, and judging the resource type of the first resource again. If the first resource is of the somatosensory lock type, the step S304 is executed.
As a possible implementation, unlike the foregoing, the first resource is only one resource. Referring to fig. 5, a flowchart of an interaction method according to an embodiment of the present application is shown.
S501: the first resource playing is completed.
S502: judging whether the first resource is of a somatosensory locking type, if not, executing S503; if yes, then S504 is performed.
S503: and playing the second resource.
S504: the second image is acquired by an image acquisition unit.
S505: the resource identification of the first resource and the second image of the object are sent to the server included in the authentication request.
The server determines the matching result of the second image and the first image and sends it to the terminal device. As a possible implementation manner, if the matching result is that the first image and the second image are successfully matched, the server sends the resource identifier of the second resource corresponding to the matching result to the terminal device; if the matching of the first image and the second image fails, the server returns an empty resource identifier to the terminal device.
S506: judging whether the resource identifier is empty, if so, executing S504; if not, S507 is executed.
S507: and playing the second resource according to the resource identification of the second resource.
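The client-side loop of steps S502 to S507 can be sketched as follows; capture_image, authenticate, and play stand in for player interfaces that the application does not name, so their signatures (and the dictionary keys) are assumptions:

```python
def on_resource_finished(resource, capture_image, authenticate, play):
    """Sketch of steps S502-S507 after the first resource finishes playing
    (S501); the three callbacks are hypothetical player interfaces."""
    if resource["lock_type"] != 1:                 # S502: not a somatosensory lock
        play(resource["next_point_id"])            # S503: play the second resource
        return
    while True:
        second_image = capture_image()             # S504: acquire the second image
        next_id = authenticate(resource["point_id"], second_image)  # S505
        if next_id:                                # S506: identifier is not empty
            play(next_id)                          # S507: play the second resource
            return
        # Empty identifier: matching failed, re-acquire the image (back to S504)
```

In this sketch a failed match simply loops back to image acquisition, mirroring the S506 → S504 branch of fig. 5.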
As a possible implementation manner, the content of the matching result may be different according to whether the first image includes a plurality of sub-images, which will be described below.
First kind: the first image does not include a sub-image.
In other words, the first image indicates only one action, and the matching result is either that the matching succeeds or that it fails.
For example, the unlocking somatosensory characteristic value of the first image is expressed as a two-dimensional array such as [[10,20,30],[30,25,20,15],[1,2,3,4]]. The unlocking somatosensory characteristic value of the first image may be matched strictly against the somatosensory information characteristic value of the second image, or a certain tolerance may be allowed; this is not specifically limited in the present application, and the following description assumes a tolerance of 10%.
Matching succeeds when the somatosensory information characteristic value has the same dimensions as the unlocking somatosensory characteristic value and each element is within 90%-110% of the corresponding unlocking element. For example, when the somatosensory information characteristic value is [[9,21,31],[30,25,20,15],[1,2,3,4]], the matching result is that the second image is successfully matched with the first image. By contrast, when the somatosensory information characteristic value is [[10,20,30],[30,25,20,15]], the matching result is that the second image and the first image fail to match (the dimensions differ); when the somatosensory information characteristic value is [[1,2,3],[30,25,20,15],[1,2,3,4]], the matching also fails (the error is too large).
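The 10%-tolerance comparison described above can be sketched as follows, using the example arrays from this paragraph; the function name and the representation of the characteristic values as nested Python lists are assumptions for illustration:

```python
def matches(unlock_value, observed_value, tol=0.10):
    """Return True when the observed somatosensory characteristic value has
    the same dimensions as the unlocking value and every element lies within
    90%-110% of the corresponding unlocking element."""
    if len(unlock_value) != len(observed_value):
        return False                       # different number of rows
    for u_row, o_row in zip(unlock_value, observed_value):
        if len(u_row) != len(o_row):
            return False                   # rows of different lengths
        for u, o in zip(u_row, o_row):
            if not (u * (1 - tol) <= o <= u * (1 + tol)):
                return False               # error larger than the tolerance
    return True
```

Applied to the examples above, [[9,21,31],[30,25,20,15],[1,2,3,4]] matches, while the two failing cases are rejected for dimension mismatch and excessive error respectively.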
Second kind: the first image includes a plurality of sub-images.
In other words, the first image includes a plurality of sub-images, and different sub-images instruct the object to make different actions and thus correspond to different unlocking somatosensory characteristic values. In this case, the matching result may indicate which sub-image in the first image was successfully matched, or that the second image failed to match the first image.
For example, a target sub-image is determined from the plurality of sub-images included in the first image according to the somatosensory information characteristic value of the second image, such that the unlocking somatosensory characteristic value of the target sub-image and the somatosensory information characteristic value of the second image satisfy a preset condition; the matching result of the second image and the first image is then determined according to the somatosensory information characteristic value of the second image and the unlocking somatosensory characteristic value of the target sub-image.
For example, among all sub-images, the unlocking somatosensory characteristic value of the target sub-image is the closest to the somatosensory information characteristic value of the second image. If the closeness is 90%, the second image is successfully matched with the target sub-image in the first image; if the closeness is 50%, the second image fails to match the first image.
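The selection of the target sub-image can be sketched as follows. The application does not fix a concrete closeness metric, so the per-element ratio measure below is purely illustrative, and the function names and the dictionary of sub-images are assumptions:

```python
def closeness(unlock_value, observed_value):
    """Hypothetical closeness measure: mean of per-element min/max ratios,
    0.0 on any dimension mismatch."""
    if len(unlock_value) != len(observed_value):
        return 0.0
    ratios = []
    for u_row, o_row in zip(unlock_value, observed_value):
        if len(u_row) != len(o_row):
            return 0.0
        for u, o in zip(u_row, o_row):
            high, low = max(u, o), min(u, o)
            ratios.append(low / high if high else 1.0)
    return sum(ratios) / len(ratios)

def match_against_sub_images(sub_images, observed_value, threshold=0.9):
    """Pick the sub-image whose unlocking value is closest to the observed
    value; return its identifier on success, or None when the closeness is
    below the threshold (matching failed)."""
    best_id = max(sub_images, key=lambda i: closeness(sub_images[i], observed_value))
    if closeness(sub_images[best_id], observed_value) >= threshold:
        return best_id
    return None
```

Returning the matched sub-image identifier (or None) mirrors the two possible matching results described above.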
In the following, referring to fig. 6 and taking a video resource as an example, an application scenario of the interaction method provided in the embodiment of the present application is described from the operation side, the background side, and the user side.
Operation side: an editor of the video or an operator of the video website can upload the video to the background or edit the video through the terminal device. For example, a media asset editing and recording platform may be provided for the operator's use; it may be a World Wide Web (Web) platform or the like. Through this platform, the editor or operator enters related information of the video, such as the unlocking somatosensory characteristic value, the resource identifier, and the content identifier; see Table 1.
TABLE 1

Information field     Meaning of field
video_id              Content identifier
point_id              Resource identifier
time                  Time point
lock_type             Resource type
pattern               Unlocking somatosensory characteristic value
next_pass_point_id    Resource identifier of the second resource when authentication passes
next_fail_point_id    Resource identifier of the second resource when authentication fails; may be empty
The terminal device can play different resources according to the time point. Resource types are classified into the somatosensory lock type and the non-somatosensory lock type; the somatosensory lock type may be represented by 1 and the non-somatosensory lock type by 0. The unlocking somatosensory characteristic value is used for comparison with the somatosensory information characteristic value to judge whether the second image is successfully matched with the first image.
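The fields of Table 1 can be mirrored in a simple record type; only the field names come from the table, while the Python types and the default value are assumptions:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ResourcePoint:
    """One media-asset record mirroring Table 1 (types assumed)."""
    video_id: str                             # content identifier
    point_id: str                             # resource identifier
    time: float                               # time point within the content
    lock_type: int                            # 1 = somatosensory lock, 0 = non-lock
    pattern: Optional[List[List[float]]]      # unlocking somatosensory characteristic value
    next_pass_point_id: Optional[str]         # second resource if authentication passes
    next_fail_point_id: Optional[str] = None  # may be empty when authentication fails
```

A record with lock_type 0 would carry no pattern, since no somatosensory unlock is required at that time point.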
Background side: the server includes a media resource platform, an authentication platform, and an image recognition platform. The media resource platform stores the interactive content, resource identifiers, and other information fields entered by the operation side through the media asset editing and recording platform, and provides resources to the user side. After obtaining the resource identifier and the second image sent by the user side, the authentication platform acquires the unlocking somatosensory characteristic value of the first image from the media resource platform according to the resource identifier, and sends the second image to the image recognition platform to obtain the somatosensory information characteristic value of the second image, so that a matching result is determined from the unlocking somatosensory characteristic value and the somatosensory information characteristic value and returned to the user side.
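The authentication platform's flow can be sketched as follows; the two platform objects and their method names are assumed interfaces, and the comparison is shown as strict equality for brevity (in practice a tolerance such as the 10% described earlier could apply):

```python
def handle_auth_request(resource_id, second_image, media_platform, recognition_platform):
    """Sketch of the authentication platform: fetch the unlocking value from
    the media resource platform, extract features via the image recognition
    platform, and return the matching result to the user side."""
    unlock_value = media_platform.get_unlock_pattern(resource_id)
    observed_value = recognition_platform.extract_features(second_image)
    return observed_value == unlock_value
```

The return value corresponds to the matching result sent back to the user side's player.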
User side: the terminal device includes a player for playing the interactive content. The user plays the interactive content through the player, and the player obtains playing information of the resources, such as stream information and resource identifiers, from the media resource platform. When the first resource has finished playing, the player displays the first image, calls the camera provided by the terminal device to collect a second image of the user, and sends the second image and the resource identifier to the authentication platform to obtain a matching result.
Aiming at the interaction method provided by the embodiment, the embodiment of the application also provides an interaction device.
Referring to fig. 7, a schematic diagram of an interaction device according to an embodiment of the present application is shown. As shown in fig. 7, the interaction device 700 includes: a playing unit 701, a display unit 702, an acquisition unit 703, and an obtaining unit 704;
the playing unit 701 is configured to play a first resource of a plurality of resources included in the interactive content;
the display unit 702 is configured to display, in response to the first resource completing playing, a first image corresponding to the first resource, where the first image is used to instruct an object to make a corresponding action;
the acquisition unit 703 is configured to acquire a second image of the object;
the obtaining unit 704 is configured to obtain a matching result of the first image and the second image;
the playing unit 701 is configured to play a second resource according to the matching result, where the second resource is one resource of the plurality of resources.
As a possible implementation manner, the apparatus further includes a receiving unit;
the receiving unit is used for receiving triggering operation of the object, wherein the triggering operation comprises a content identifier of the interactive content;
The obtaining unit 704 is configured to obtain a first resource of the interactive content according to the content identifier.
As a possible implementation, the apparatus further comprises a receiving unit, a determining unit, and an execution unit;
the receiving unit is used for receiving triggering operation of the object, wherein the triggering operation comprises a content identifier of the interactive content;
the obtaining unit 704 is configured to acquire the interactive content according to the content identifier;
the determining unit is used for determining the resource type of the first resource of the interactive content;
the execution unit is used for responding to the condition that the resource type of the first resource is a somatosensory locking type and executing the step of acquiring the second image of the object;
and responding to the resource type of the first resource as a non-somatosensory locking type, acquiring a third resource, taking the third resource as the first resource, and executing the step of determining the resource type of the first resource in a plurality of resources included in the interactive content.
As a possible implementation manner, the obtaining unit 704 is configured to:
and acquiring the second resource according to the resource identifier of the first resource.
As a possible implementation manner, the obtaining unit 704 is configured to:
And acquiring the second resource according to a target sub-image and the resource identifier of the first resource, wherein the target sub-image is a sub-image included in the first image.
According to the technical scheme, when the interactive content comprising a plurality of resources is played, the first resource is played first, if the first resource is played, a first image corresponding to the first resource is displayed, the first image is used for indicating the object to make a corresponding action, a second image of the object is acquired, a matching result of the first image and the second image is acquired, and the second resource included in the interactive content is played according to the matching result. Therefore, when the interactive content is played, after the first resource is played, interaction can be realized in a mode that the object makes corresponding actions according to the first image, and the second resource is played according to the interaction condition, so that more playability is provided, more interactive scenes are opened, the participation degree of the object is higher, and the experience of a user is improved.
Referring to fig. 8, a schematic diagram of an interaction device according to an embodiment of the present application is shown. As shown in fig. 8, the interaction device 800 includes: an acquisition unit 801, a determination unit 802, and a transmission unit 803;
The obtaining unit 801 is configured to obtain an authentication request of a terminal device, where the authentication request includes a resource identifier of a first resource and a second image of an object, and the first resource is one of a plurality of resources included in the interactive content;
the determining unit 802 is configured to determine a matching result of the second image and a first image, where the first image is used to instruct the object to make a corresponding action;
the sending unit 803 is configured to send the matching result to the terminal device, so that the terminal device plays, according to the matching result, a second resource corresponding to the matching result, where the second resource is one of multiple resources included in the interactive content.
As a possible implementation, the first image is configured to instruct the subject to make a corresponding body action, and the determining unit 802 is configured to:
identifying human body key points of the object in the second image;
extracting somatosensory information characteristic values of the second image according to the human body key points;
and determining a matching result of the second image and the first image according to the somatosensory information characteristic value of the second image and the unlocking somatosensory characteristic value corresponding to the first image.
As a possible implementation manner, the first image includes a plurality of sub-images, different sub-images correspond to different unlocking somatosensory feature values, and the determining unit 802 is configured to:
determining a target sub-image from the plurality of sub-images according to the somatosensory information characteristic values of the second image, wherein the unlocking somatosensory characteristic values of the target sub-image and the somatosensory information characteristic values of the second image meet preset conditions;
and determining a matching result of the second image and the first image according to the somatosensory information characteristic value of the second image and the unlocking somatosensory characteristic value of the target sub-image.
As a possible implementation manner, the apparatus further includes a receiving unit and an executing unit;
the receiving unit is used for receiving the content identifier sent by the terminal equipment;
the determining unit 802 is configured to determine, according to the content identifier, interactive content corresponding to the content identifier; determining a resource type of a first resource in a plurality of resources included in the interactive content;
the execution unit is used for responding to the condition that the resource type of the first resource is a somatosensory locking type and sending the first resource to the terminal equipment;
And responding to the resource type of the first resource as a non-somatosensory locking type, acquiring a third resource, taking the third resource as the first resource, and executing the step of determining the resource type of the first resource in a plurality of resources included in the interactive content.
According to the technical scheme, when the interactive content comprising a plurality of resources is played, the first resource is played first, if the first resource is played, a first image corresponding to the first resource is displayed, the first image is used for indicating the object to make a corresponding action, a second image of the object is acquired, a matching result of the first image and the second image is acquired, and the second resource included in the interactive content is played according to the matching result. Therefore, when the interactive content is played, after the first resource is played, interaction can be realized in a mode that the object makes corresponding actions according to the first image, and the second resource is played according to the interaction condition, so that more playability is provided, more interactive scenes are opened, the participation degree of the object is higher, and the experience of a user is improved.
The foregoing interactive system may be a computer device; the computer device may be a server or a terminal device, and the foregoing interaction apparatus may be built into the server or the terminal device. The computer device provided in the embodiments of the present application is described below from the perspective of hardware implementation. Fig. 9 is a schematic structural diagram of a server, and fig. 10 is a schematic structural diagram of a terminal device.
Referring to fig. 9, fig. 9 is a schematic diagram of a server structure provided in an embodiment of the present application. The server 1400 may vary considerably in configuration or performance and may include one or more central processing units (Central Processing Units, CPU) 1422 (e.g., one or more processors), memory 1432, and one or more storage media 1430 (e.g., one or more mass storage devices) that store applications 1442 or data 1444. The memory 1432 and storage medium 1430 may be transitory or persistent storage. The program stored in the storage medium 1430 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the CPU 1422 may communicate with the storage medium 1430 to execute, on the server 1400, the series of instruction operations in the storage medium 1430.
Server 1400 may also include one or more power supplies 1426, one or more wired or wireless network interfaces 1450, one or more input/output interfaces 1458, and/or one or more operating systems 1441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 9.
Wherein, the CPU 1422 is configured to perform the following steps:
playing a first resource of a plurality of resources included in the interactive content;
responding to the first resource to finish playing, displaying a first image corresponding to the first resource, wherein the first image is used for indicating an object to make a corresponding action;
acquiring a second image of the object;
obtaining a matching result of the first image and the second image;
and playing a second resource according to the matching result, wherein the second resource is one resource in the plurality of resources.
Or the following steps are executed:
acquiring an authentication request of a terminal device, wherein the authentication request comprises a resource identifier of a first resource and a second image of an object, and the first resource is one of a plurality of resources included in interactive content;
determining a matching result of the second image and a first image, wherein the first image is used for indicating the object to make a corresponding action;
and sending the matching result to the terminal equipment so that the terminal equipment plays a second resource corresponding to the matching result according to the matching result, wherein the second resource is one of a plurality of resources included in the interactive content.
Optionally, the CPU 1422 may further perform method steps of any specific implementation of the interaction method in the embodiments of the present application.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application. Fig. 10 is a block diagram illustrating a part of a structure of a smart phone related to a terminal device provided in an embodiment of the present application, where the smart phone includes: radio Frequency (RF) circuitry 1510, memory 1520, input unit 1530, display unit 1540, sensor 1550, audio circuitry 1560, wireless fidelity (Wireless Fidelity, wiFi) module 1570, processor 1580, and power supply 1590. Those skilled in the art will appreciate that the smartphone structure shown in fig. 10 is not limiting of the smartphone and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes each component of the smart phone in detail with reference to fig. 10:
the RF circuit 1510 may be used for receiving and transmitting signals during a message or a call, and particularly, after receiving downlink information of a base station, the signal is processed by the processor 1580; in addition, the data of the design uplink is sent to the base station. Generally, RF circuitry 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA for short), a duplexer, and the like. In addition, the RF circuitry 1510 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communications (Global System of Mobile communication, GSM for short), general packet radio service (General Packet Radio Service, GPRS for short), code division multiple access (Code Division Multiple Access, CDMA for short), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA for short), long term evolution (Long Term Evolution, LTE for short), email, short message service (Short Messaging Service, SMS for short), and the like.
The memory 1520 may be used to store software programs and modules, and the processor 1580 implements various functional applications and data processing of the smartphone by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, phonebooks, etc.) created according to the use of the smart phone, etc. In addition, memory 1520 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1530 may be used to receive input numerical or character information and generate key signal inputs related to user settings and function control of the smart phone. In particular, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 1531 or thereabout by using any suitable object or accessory such as a finger, a stylus, etc.), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 1531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 1580, and can receive and execute commands sent from the processor 1580. In addition, the touch panel 1531 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 1540 may be used to display information input by a user or information provided to the user and various menus of the smart phone. The display unit 1540 may include a display panel 1541; optionally, the display panel 1541 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1531 may cover the display panel 1541, and when the touch panel 1531 detects a touch operation on or near it, the touch operation is transferred to the processor 1580 to determine the type of touch event; the processor 1580 then provides a corresponding visual output on the display panel 1541 according to the type of touch event. Although in fig. 10 the touch panel 1531 and the display panel 1541 are two separate components to implement the input and output functions of the smart phone, in some embodiments the touch panel 1531 may be integrated with the display panel 1541 to implement the input and output functions of the smart phone.
The smartphone may also include at least one sensor 1550, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 1541 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1541 and/or the backlight when the smartphone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for identifying the application of the gesture of the smart phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration identification related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the smart phone are not described in detail herein.
Audio circuitry 1560, speaker 1561, and microphone 1562 may provide an audio interface between a user and the smart phone. The audio circuit 1560 may transmit the electrical signal converted from received audio data to the speaker 1561, which converts it into a sound signal for output; on the other hand, the microphone 1562 converts collected sound signals into electrical signals, which are received by the audio circuit 1560 and converted into audio data. The audio data are output to the processor 1580 for processing and are then sent, for example, to another smart phone via the RF circuit 1510, or output to the memory 1520 for further processing.
WiFi belongs to a short-distance wireless transmission technology, and a smart phone can help a user to send and receive emails, browse webpages, access streaming media and the like through a WiFi module 1570, so that wireless broadband Internet access is provided for the user. Although fig. 10 shows WiFi module 1570, it is understood that it does not belong to the essential constitution of a smartphone, and can be omitted entirely as desired within the scope of not changing the essence of the invention.
Processor 1580 is a control center of the smartphone, connects various parts of the entire smartphone with various interfaces and lines, performs various functions of the smartphone and processes data by running or executing software programs and/or modules stored in memory 1520, and invoking data stored in memory 1520. In the alternative, processor 1580 may include one or more processing units; preferably, the processor 1580 can integrate an application processor and a modem processor, wherein the application processor primarily processes operating systems, user interfaces, application programs, and the like, and the modem processor primarily processes wireless communications. It is to be appreciated that the modem processor described above may not be integrated into the processor 1580.
The smart phone also includes a power source 1590 (e.g., a battery) for powering the various components. Preferably, the power source may be logically connected to the processor 1580 via a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
Although not shown, the smart phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In an embodiment of the present application, the memory 1520 included in the smart phone may store program codes and transmit the program codes to the processor.
The processor 1580 included in the smart phone may execute the interaction method provided in the foregoing embodiment according to the instructions in the program code.
The embodiment of the application also provides a computer readable storage medium for storing a computer program, where the computer program is used to execute the interaction method provided in the above embodiment.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the interaction methods provided in the various alternative implementations of the above aspects.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by program instructions executed by relevant hardware; the program may be stored in a computer-readable storage medium, and when executed, performs the steps of the above method embodiments. The aforementioned storage medium may be at least one of the following media: Read-Only Memory (ROM), RAM, magnetic disk, optical disk, etc.
It should be noted that, in the present specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments refer to each other, and each embodiment focuses on its differences from the others. In particular, for the apparatus and system embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; refer to the description of the method embodiments for the relevant parts. The apparatus and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The foregoing is merely one specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered in the protection scope of the present application. Further combinations of the present application may be made to provide further implementations based on the implementations provided in the above aspects. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of interaction, the method comprising:
playing a first resource of a plurality of resources included in the interactive content;
responding to the first resource to finish playing, displaying a first image corresponding to the first resource, wherein the first image is used for indicating an object to make a corresponding action;
acquiring a second image of the object;
obtaining a matching result of the first image and the second image;
and playing a second resource according to the matching result, wherein the second resource is one resource in the plurality of resources.
2. The method according to claim 1, wherein the method further comprises:
Receiving a triggering operation of the object, wherein the triggering operation comprises a content identifier of the interactive content;
and acquiring a first resource of the interactive content according to the content identifier.
3. The method according to claim 1, wherein the method further comprises:
receiving a triggering operation of the object, wherein the triggering operation comprises a content identifier of the interactive content;
acquiring the interactive content according to the content identifier;
determining a resource type of a first resource of the interactive content;
in response to the resource type of the first resource being a somatosensory lock type, performing the step of acquiring a second image of the object;
and in response to the resource type of the first resource being a non-somatosensory lock type, acquiring a third resource, taking the third resource as the first resource, and performing the step of determining the resource type of the first resource of the interactive content.
4. The method according to claim 3, wherein the method further comprises:
and acquiring the second resource according to the resource identifier of the first resource.
5. The method according to claim 3, wherein the first image comprises a plurality of sub-images, different sub-images are used to instruct the object to make different actions, and the method further comprises:
acquiring the second resource according to a target sub-image and the resource identifier of the first resource, wherein the target sub-image is a sub-image included in the first image.
6. A method of interaction, the method comprising:
acquiring an authentication request of a terminal device, wherein the authentication request comprises a resource identifier of a first resource and a second image of an object, and the first resource is one of a plurality of resources included in interactive content;
determining a matching result of the second image and a first image, wherein the first image is used for instructing the object to make a corresponding action;
and sending the matching result to the terminal device, so that the terminal device plays, according to the matching result, a second resource corresponding to the matching result, wherein the second resource is one of the plurality of resources included in the interactive content.
7. The method of claim 6, wherein the first image is used to instruct the object to make a corresponding physical action, and wherein the determining a matching result of the second image and the first image comprises:
identifying human body key points of the object in the second image;
extracting a somatosensory feature value of the second image according to the human body key points;
and determining the matching result of the second image and the first image according to the somatosensory feature value of the second image and an unlock somatosensory feature value corresponding to the first image.
8. The method of claim 7, wherein the first image comprises a plurality of sub-images, different sub-images correspond to different unlock somatosensory feature values, and the determining the matching result of the second image and the first image according to the somatosensory feature value of the second image and the unlock somatosensory feature value corresponding to the first image comprises:
determining a target sub-image from the plurality of sub-images according to the somatosensory feature value of the second image, wherein the unlock somatosensory feature value of the target sub-image and the somatosensory feature value of the second image satisfy a preset condition;
and determining the matching result of the second image and the first image according to the somatosensory feature value of the second image and the unlock somatosensory feature value of the target sub-image.
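As one concrete (assumed) realization of the preset condition in claims 7-8, the somatosensory feature could be a keypoint-derived vector compared against each sub-image's unlock feature by cosine similarity, with the best score above a threshold selecting the target sub-image. The vector form, the similarity measure, and the threshold are illustrative choices, not taken from the application.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_target_sub_image(body_feature, unlock_features, threshold=0.9):
    """Pick the sub-image whose unlock feature best matches the captured
    somatosensory feature; return (index, similarity), or (None, best
    similarity) when no sub-image satisfies the preset condition
    (similarity >= threshold)."""
    best_index, best_sim = None, 0.0
    for index, unlock in enumerate(unlock_features):
        sim = cosine_similarity(body_feature, unlock)
        if sim > best_sim:
            best_index, best_sim = index, sim
    if best_sim < threshold:
        return None, best_sim
    return best_index, best_sim
```

In practice the `body_feature` would be built from the human body key points of claim 7 (for example, normalized joint angles), but that preprocessing is outside this sketch.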
9. The method of claim 6, wherein the method further comprises:
receiving a content identifier sent by the terminal device;
determining, according to the content identifier, the interactive content corresponding to the content identifier;
determining a resource type of a first resource among the plurality of resources included in the interactive content;
in response to the resource type of the first resource being a somatosensory lock type, sending the first resource to the terminal device;
and in response to the resource type of the first resource being a non-somatosensory lock type, acquiring a third resource, taking the third resource as the first resource, and performing the step of determining the resource type of the first resource among the plurality of resources included in the interactive content.
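The server-side type check of claim 9 (and its client counterpart in claim 3) amounts to scanning the content's resource list for the first somatosensory-lock resource, with each skipped non-lock resource playing the role of the claim's "third resource". A minimal sketch, with the type tag name assumed:

```python
def first_somatosensory_lock(resources):
    """Return the first resource whose type is the somatosensory lock type,
    skipping non-lock resources (each skipped resource is the claim's
    'third resource', which simply becomes the next candidate).
    Returns None if the content has no lock-type resource."""
    for resource in resources:
        if resource.get("type") == "somatosensory_lock":
            return resource
    return None
```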
10. An interactive apparatus, comprising: a playing unit, a display unit, a capturing unit, and an obtaining unit;
the playing unit is configured to play a first resource among a plurality of resources included in interactive content;
the display unit is configured to, in response to the first resource finishing playing, display a first image corresponding to the first resource, wherein the first image is used for instructing an object to make a corresponding action;
the capturing unit is configured to capture a second image of the object;
the obtaining unit is configured to obtain a matching result of the first image and the second image;
and the playing unit is further configured to play a second resource according to the matching result, wherein the second resource is one of the plurality of resources.
11. An interactive apparatus, comprising: an acquisition unit, a determination unit, and a sending unit;
the acquisition unit is configured to acquire an authentication request of a terminal device, wherein the authentication request comprises a resource identifier of a first resource and a second image of an object, and the first resource is one of a plurality of resources included in interactive content;
the determination unit is configured to determine a matching result of the second image and a first image, wherein the first image is used for instructing the object to make a corresponding action;
and the sending unit is configured to send the matching result to the terminal device, so that the terminal device plays, according to the matching result, a second resource corresponding to the matching result, wherein the second resource is one of the plurality of resources included in the interactive content.
12. An interactive system, wherein the system comprises a terminal device and a server;
the terminal device is configured to perform the method of any one of claims 1-5;
and the server is configured to perform the method of any one of claims 6-9.
13. A computer device, the device comprising a processor and a memory:
the memory is configured to store program code and transmit the program code to the processor;
and the processor is configured to perform, according to instructions in the program code, the method of any one of claims 1-5 or the method of any one of claims 6-9.
14. A computer-readable storage medium, wherein the computer-readable storage medium is used for storing a computer program, and the computer program is used for performing the method of any one of claims 1-5 or the method of any one of claims 6-9.
15. A computer program product, comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, perform the method of any one of claims 1-5 or the method of any one of claims 6-9.
CN202210005745.1A 2022-01-04 2022-01-04 Interaction method and related device Pending CN116430985A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210005745.1A CN116430985A (en) 2022-01-04 2022-01-04 Interaction method and related device


Publications (1)

Publication Number Publication Date
CN116430985A true CN116430985A (en) 2023-07-14

Family

ID=87093125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210005745.1A Pending CN116430985A (en) 2022-01-04 2022-01-04 Interaction method and related device

Country Status (1)

Country Link
CN (1) CN116430985A (en)

Similar Documents

Publication Publication Date Title
WO2018126885A1 (en) Game data processing method
CN108632658B (en) Bullet screen display method and terminal
CN106303733B (en) Method and device for playing live special effect information
CN107730261B (en) Resource transfer method and related equipment
CN108279948B (en) Application program starting method and mobile terminal
CN113018848B (en) Game picture display method, related device, equipment and storage medium
CN107908765B (en) Game resource processing method, mobile terminal and server
CN113810732B (en) Live content display method, device, terminal, storage medium and program product
CN109618218B (en) Video processing method and mobile terminal
CN109495638B (en) Information display method and terminal
CN112774194B (en) Virtual object interaction method and related device
CN108616448A (en) A kind of the path recommendation method and mobile terminal of Information Sharing
CN108521365B (en) Method for adding friends and mobile terminal
CN111158624A (en) Application sharing method, electronic equipment and computer readable storage medium
CN108089935B (en) Application program management method and mobile terminal
CN107566909B (en) Barrage-based video content searching method and user terminal
CN110471895B (en) Sharing method and terminal device
CN109582820B (en) Song playing method, terminal equipment and server
US10419816B2 (en) Video-based check-in method, terminal, server and system
KR102263977B1 (en) Methods, devices, and systems for performing information provision
CN115068941A (en) Game image quality recommendation method and device, computer equipment and storage medium
CN115382201A (en) Game control method and device, computer equipment and storage medium
CN116430985A (en) Interaction method and related device
CN109561424B (en) Data identifier generation method and mobile terminal
CN110889102A (en) Image unlocking method and device, computer readable storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40089553

Country of ref document: HK