CN110703913B - Object interaction method and device, storage medium and electronic device

Object interaction method and device, storage medium and electronic device

Info

Publication number
CN110703913B
CN110703913B (application CN201910927056.4A)
Authority
CN
China
Prior art keywords
client
interaction
track
action
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910927056.4A
Other languages
Chinese (zh)
Other versions
CN110703913A (en)
Inventor
廖中远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910927056.4A
Publication of CN110703913A
Application granted
Publication of CN110703913B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The application discloses an object interaction method and apparatus, a storage medium, and an electronic device. The method includes the following steps: acquiring an object interaction request triggered by a first object in a first client; in response to the object interaction request, displaying on the first client a scene picture of the environment in which the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with a target graphic identifier; when the action track corresponding to a gesture action of the first object is detected to match the virtual prompt track, sending interaction prompt information to at least one second client; and displaying, in the first client, the interaction result returned by the at least one second client. The method and apparatus solve the technical problem that the approaches provided by the related art limit the efficiency of object interaction in live-broadcast scenes.

Description

Object interaction method and device, storage medium and electronic device
Technical Field
The present application relates to the field of computers, and in particular, to an object interaction method and apparatus, a storage medium, and an electronic apparatus.
Background
During live broadcasting, in order to realize real-time interaction between the anchor and the audience, the related art provides a way of recognizing the anchor's gestures: a corresponding image is generated by recognizing the anchor's gesture action and pushed to the audience for display, which enhances the visual impact of the gesture action and attracts more viewers to follow the anchor.
However, in the way provided by the related art, the anchor transmits the image information to the audience in one direction only; the audience can merely watch the image information and cannot engage in two-way interaction with the anchor. That is, this one-way transfer of information limits the efficiency of interaction between the anchor and the viewers in a live-broadcast scene.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present invention provide an object interaction method and apparatus, a storage medium, and an electronic device, which at least solve the technical problem that the approaches provided by the related art limit the efficiency of object interaction in a live-broadcast scene.
According to one aspect of the embodiments of the present invention, an object interaction method is provided, including: acquiring an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with; in response to the object interaction request, displaying on the first client a scene picture of the environment in which the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier; when the action track corresponding to a gesture action of the first object is detected to match the virtual prompt track, sending interaction prompt information to at least one second client; and displaying, in the first client, an interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
According to another aspect of the embodiments of the present invention, an object interaction method is provided, including: acquiring interaction prompt information sent by a first client, where the interaction prompt information includes a group of image sets containing the action track of a gesture action of a first object using the first client; displaying the group of image sets in a second client; inputting, in the second client, a graphic identifier matched with the action track of the gesture action of the first object; and sending the graphic identifier to a server.
According to still another aspect of the embodiments of the present invention, an object interaction apparatus is also provided, including: a first acquisition unit, configured to acquire an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with; a first display unit, configured to respond to the object interaction request by displaying on the first client a scene picture of the environment in which the first object is located and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier; a first sending unit, configured to send interaction prompt information to at least one second client when the action track corresponding to a gesture action of the first object is detected to match the virtual prompt track; and a second display unit, configured to display, in the first client, an interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
According to still another aspect of the embodiments of the present invention, an object interaction apparatus is also provided, including: an acquisition unit, configured to acquire interaction prompt information sent by a first client, where the interaction prompt information includes a group of image sets containing the action track of a gesture action of a first object using the first client; a display unit, configured to display the group of image sets in a second client; an input unit, configured to input, in the second client, a graphic identifier matched with the action track of the gesture action of the first object; and a sending unit, configured to send the graphic identifier to a server.
According to still another aspect of the embodiments of the present invention, an object interaction system is also provided, including a first client, at least one second client, and a server. The first client is configured to acquire an object interaction request triggered by a first object in the first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with; to respond to the object interaction request by displaying on the first client a scene picture of the environment in which the first object is located and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier; and to send interaction prompt information to the at least one second client when the action track corresponding to a gesture action of the first object is detected to match the virtual prompt track. The second client is configured to acquire the interaction prompt information, to acquire a graphic identifier that is input by a second object according to the interaction prompt information and matched with the action track of the gesture action, and to send that graphic identifier to the server. The server is configured to acquire the graphic identifiers returned by the second clients, count them to obtain a statistical result, and send the statistical result to the first client. The first client is further configured to display the interaction result returned by the at least one second client, where the interaction result includes the statistical result.
According to a further aspect of embodiments of the present invention, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above-described object interaction method when run.
According to still another aspect of the embodiments of the present invention, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above-mentioned object interaction method through the computer program.
In the embodiments of the present invention, after an object interaction request triggered by a first object in a first client and carrying the target graphic identifier of a graphic to be interacted with is acquired, in response to the object interaction request, a scene picture of the environment in which the first object is located is displayed in the first client through Augmented Reality (AR) technology, with a virtual prompt track corresponding to the graphic to be interacted with displayed superimposed on it, so as to prompt the first object to complete a gesture action along the virtual prompt track. Further, the first client sends interaction prompt information to at least one second client to prompt the second object using the second client to input the graphic identifier matched with the action track of the gesture action, and the interaction result returned by the second client is displayed in the first client. The first client and the second client in the live-broadcast scene thereby achieve two-way interaction, which expands the interaction channel between them, enriches the interaction modes available during live broadcasting, diversifies the interaction, and overcomes the problem of low interaction efficiency caused by one-way information transfer in the related live-broadcast technology.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic illustration of an application environment of an alternative object interaction method according to an embodiment of the application;
FIG. 2 is a flow chart of an alternative method of object interaction according to an embodiment of the application;
FIG. 3 is a flow chart of another alternative method of object interaction according to an embodiment of the application;
FIG. 4 is a schematic diagram of an alternative method of object interaction according to an embodiment of the application;
FIG. 5 is a schematic diagram of another alternative object interaction method according to an embodiment of the application;
FIG. 6 is a schematic diagram of yet another alternative object interaction method in accordance with an embodiment of the application;
FIG. 7 is a schematic diagram of yet another alternative object interaction method in accordance with an embodiment of the application;
FIG. 8 is a schematic diagram of yet another alternative object interaction method in accordance with an embodiment of the application;
FIG. 9 is a schematic diagram of yet another alternative object interaction method in accordance with an embodiment of the application;
FIG. 10 is a schematic diagram of yet another alternative object interaction method in accordance with an embodiment of the application;
FIG. 11 is a schematic diagram of yet another alternative object interaction method according to an embodiment of the invention;
FIG. 12 is a schematic diagram of yet another alternative object interaction method in accordance with an embodiment of the invention;
FIG. 13 is a flow chart of yet another alternative method of object interaction in accordance with an embodiment of the present invention;
FIG. 14 is a schematic structural view of an alternative object interaction device in accordance with an embodiment of the invention;
FIG. 15 is a schematic structural view of another alternative object interaction device in accordance with an embodiment of the present invention;
FIG. 16 is a schematic diagram of an alternative electronic device according to an embodiment of the invention;
FIG. 17 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of the embodiments of the present invention, an object interaction method is provided. As an optional implementation, the object interaction method may be applied, but is not limited, to the object interaction system in the network environment shown in FIG. 1, where the object interaction system may include, but is not limited to: a terminal device 102, a network 110, a server 112, and a terminal device 120. Further, assume that the terminal device 102 runs a first client under a first account (e.g., ID-1) and the terminal device 120 runs a second client under a second account (e.g., ID-2), where the first client may be the client of the anchor during live broadcasting and the second client may be the client of a viewer following the anchor, the two clients having an association relationship.
The terminal device 102 includes a man-machine interaction screen 104, a processor 106, and a memory 108. The man-machine interaction screen 104 is used for obtaining the object interaction request through a man-machine interaction interface, displaying the scene picture of the environment in which the first object of the first client is located with the virtual prompt track corresponding to the graphic to be interacted with superimposed on it, and displaying the interaction result. The processor 106 is configured to control the display process in response to the object interaction request, and is further configured to control detection of the gesture action of the first object and to send the interaction prompt information to at least one second client when the action track of the gesture action is detected to match the virtual prompt track. The memory 108 is configured to store the scene picture of the environment in which the first object is located, the graphic to be interacted with and the mapping relationship to its graphic identifier, and the interaction prompt information.
The server 112 includes a database 114 and a processing engine 116. The processing engine 116 is configured to obtain the graphic identifiers returned by the second clients and count them to obtain a statistical result; it is further configured to return the statistical result to the terminal device 102 where the first client is located. The database 114 is used to store the statistical result.
The terminal device 120 includes a man-machine interaction screen 122, a processor 124, and a memory 126. The man-machine interaction screen 122 is used for displaying the interaction prompt information. The processor 124 is configured to obtain the graphic identifier input by the second object and send it to the server 112, so that the server 112 sends the statistical result to the terminal device 102. The memory 126 is used for storing the interaction prompt information and the input graphic identifier.
The specific process includes the following steps. In steps S102-S106, the interactive interface of the first client is displayed on the man-machine interaction screen 104 of the terminal device 102, and the object interaction request triggered by an operation is acquired, where the object interaction request carries the target graphic identifier of the graphic to be interacted with (a heart graphic, as shown in FIG. 1). In response to the object interaction request, the scene picture of the environment in which the first object of the first client is located is displayed, with the virtual prompt track matched with the target graphic identifier superimposed on it. Then, when the action track of the gesture action of the first object is detected to match the virtual prompt track, it is determined that the interaction prompt information should be sent to at least one second client. The interaction prompt information is used to prompt the second object of the second client to input the graphic identifier matched with the action track of the gesture action. The second object and the first object are associated objects; for example, the second object follows the first object and is a fan of the first object.
In step S108, the interaction prompt information is sent through the network 110 to the terminal device 120 where at least one second client is located (a terminal device 120 is shown in FIG. 1). Then, as shown in steps S110-S112, the terminal device 120 prompts the second object through the man-machine interaction screen 122 to input the graphic identifier matched with the action track of the gesture action, and obtains the input graphic identifier. In step S114, the input graphic identifier is sent to the server 112 through the network 110.
After the server 112 obtains the graphic identifiers returned by the second clients, step S116 is executed to count the graphic identifiers and obtain a statistical result. Then, in step S118, the interaction result including the statistical result is sent through the network 110 to the terminal device 102 where the first client is located. After the terminal device 102 acquires the interaction result, in step S120, the interaction result is displayed on the man-machine interaction screen 104 (as shown in FIG. 1, the return time of the second object corresponding to each second client is displayed).
It should be noted that, in this embodiment, after an object interaction request triggered by a first object in a first client and carrying the target graphic identifier of a graphic to be interacted with is obtained, in response to the object interaction request, the scene picture of the environment in which the first object is located is displayed in the first client through Augmented Reality (AR) technology, with the virtual prompt track corresponding to the graphic to be interacted with displayed superimposed on it, so as to prompt the first object to complete a gesture action along the virtual prompt track. Further, the first client sends the interaction prompt information to at least one second client to prompt the second object using the second client to input the graphic identifier matched with the action track of the gesture action, and the interaction result returned by the second client is displayed in the first client. The first client and the second client in the live-broadcast scene thereby achieve two-way interaction, which expands the interaction channel between them, enriches the interaction modes available during live broadcasting, diversifies the interaction, and overcomes the problem of low interaction efficiency caused by one-way information transfer in the related live-broadcast technology.
Optionally, in this embodiment, the above object interaction method may be applied, but is not limited, to a terminal device that supports running an application client, such as a mobile phone, a tablet computer, a notebook computer, or a PC. The server and the terminal device may exchange data over a network, which may include, but is not limited to, a wireless network or a wired network, where the wireless network includes Bluetooth, Wi-Fi, and other networks enabling wireless communication, and the wired network may include, but is not limited to, a wide area network, a metropolitan area network, and a local area network. The above is merely an example and is not limited in any way in this embodiment.
As an optional implementation, as shown in FIG. 2, the object interaction method includes:
S202, acquiring an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with;
S204, in response to the object interaction request, displaying on the first client a scene picture of the environment in which the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier;
S206, when the action track corresponding to a gesture action of the first object is detected to match the virtual prompt track, sending interaction prompt information to at least one second client;
S208, displaying, in the first client, the interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
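For concreteness, the following Python sketch outlines how the first-client side of steps S202-S208 might be wired together. It is a minimal illustration only: every name in it (client, graphics_db, viewer, and their methods) is a hypothetical placeholder rather than an interface defined by this application.

```python
# Minimal sketch of steps S202-S208 on the first (anchor) client.
# All objects and methods here are hypothetical placeholders.

def run_interaction_task(client, graphics_db, viewers):
    # S202: obtain the request carrying the target graphic identifier
    request = client.wait_for_interaction_request()
    target_id = request["target_graphic_id"]

    # S204: display the live scene picture and superimpose the virtual
    # hint track matched with the target graphic identifier (AR overlay)
    hint_track = graphics_db.hint_track(target_id)
    client.show_scene_with_overlay(client.camera.capture(), hint_track)

    # S206: once the gesture's action track matches the hint track,
    # send the interaction prompt information to the second clients
    action_track = client.track_gesture_until_match(hint_track)
    for viewer in viewers:
        viewer.send_interaction_prompt(client.images_of(action_track))

    # S208: display the statistical result relayed back by the server
    client.show_result(client.wait_for_interaction_result())
```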
Optionally, in this embodiment, the above object interaction method may be, but is not limited to, applied in a live-broadcast scene, and the live-broadcast scene may be, but is not limited to, provided in the client of at least one of the following applications: a live-broadcast application, an audio playback application, a video playback application, a space-sharing application, and so on. The video playback applications may include, but are not limited to: long-video platform applications (e.g., platform applications providing various composite videos with longer playing durations) and short-video sharing platform applications (e.g., platform applications providing single videos whose playing duration is below a predetermined threshold). That is, the entry to the live-broadcast scene may be, but is not limited to being, provided in the form of a jump link in different existing application clients; this is merely an example and is not limited in any way in this embodiment. In this embodiment, the first client may be, but is not limited to, the anchor client providing the played content, and the second client may be, but is not limited to, a viewer client for watching the played content. In other words, the second object using the second client is a fan of the first object using the first client, and the two have an association relationship.
Optionally, in this embodiment, acquiring the object interaction request triggered by the first object in the first client may include, but is not limited to: displaying a trigger key for the object interaction function in the man-machine interaction interface of the first client; opening the object interaction function in response to an operation executed on the trigger key; randomly determining a group of candidate graphics from a graphics database as a candidate graphic set; displaying the candidate graphic set in the first client; acquiring a target graphic selected from the candidate graphic set as the graphic to be interacted with; and generating the object interaction request using the target graphic identifier of the target graphic, as sketched in the example after this passage.
It should be noted that, in this embodiment, the candidate graphics in the candidate graphic set may be, but are not limited to, pre-configured simple single-stroke drawings, so that the first object can copy a candidate graphic through gesture actions. For example, the graphics may be, but are not limited to: a triangle, a five-pointed star, a heart, and so on. In addition, the candidate graphic set may contain one or more candidate graphics, and the number may be configured flexibly, which is not limited in this embodiment.
In this embodiment, the display of the trigger key for triggering the object interaction function may include, but is not limited to, at least one of the following: 1) displaying the key icon of the trigger key in a function-key floating layer provided by the first client; 2) displaying the key icon of the trigger key directly in the playing interface of the first client. In addition, the object interaction function may also be triggered with the key icon hidden, by executing a shortcut operation on the playing interface of the first client, where the shortcut operation may include, but is not limited to, at least one of the following: double-clicking the playing interface, performing a sliding operation in a preset direction on the playing interface, and so on. The above is merely an example; the way of triggering the object interaction function and of displaying the key icon of its trigger key is not limited in any way in this embodiment.
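As an illustration of the request-assembly steps above, the following sketch draws a random candidate set and builds the request from the selected graphic's identifier. The data shapes and method names (graphics_db, client, and so on) are assumptions made for the example, not part of the claimed implementation.

```python
import random

# Hypothetical sketch: assembling an object interaction request once the
# trigger key has been pressed (all names are illustrative placeholders).

def build_interaction_request(graphics_db, client, pool_size=4):
    # Randomly draw a candidate graphic set (e.g. heart, star, triangle)
    candidates = random.sample(graphics_db.all_graphics(), pool_size)
    client.show_candidates(candidates)

    # The first object selects the graphic to be interacted with
    target = client.await_selection(candidates)

    # The request carries only the target graphic identifier
    return {"type": "object_interaction", "target_graphic_id": target.id}
```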
Optionally, in this embodiment, after the object interaction request is acquired, a scene picture of the environment in which the first object is located may be displayed using, but not limited to, Augmented Reality (AR) technology, with the virtual hint track corresponding to the graphic to be interacted with displayed superimposed on it. AR is a technology that seamlessly integrates real-world information and virtual-world information: virtual information is simulated by a computer and superimposed onto the real world so that it can be perceived by the human senses, with the result that the real environment and virtual objects are superimposed onto the same picture or space in real time and coexist. That is, in the display interface of the first client, while the real scene picture of the environment in which the first object is located is displayed, the virtual hint track corresponding to the graphic to be interacted with can be displayed superimposed on it, realizing the superimposed display of the real environment and the virtual track in the first client and thereby prompting the first object with the track to follow.
In this embodiment, the display of the virtual hint track may include, but is not limited to: 1) a static graphic, e.g., the virtual hint track is displayed statically as a dashed track; 2) a dynamic graphic, e.g., an animation showing the drawing process of the graphic to be interacted with. The above is merely an example and is not limited in any way in this embodiment.
Optionally, in this embodiment, detecting the gesture action of the first object may include, but is not limited to: after the hand of the first object is identified in the acquired images, detecting the key-point positions corresponding to the hand; and tracking changes in those key-point positions to determine the action track of the gesture action of the first object.
It should be noted that the key-point positions may be, but are not limited to, one or more positions set according to the bones of the hand. Tracking changes in the key-point positions determines the action track of the gesture action executed by the first object, so that the gesture action can be recognized. Further, by comparing the action track of the gesture with the virtual hint track, it can be determined whether the first object has completed the graphic to be interacted with as prompted. While the first object executes the gesture action, the first object's completion progress may, but need not, be displayed synchronously on the virtual hint track. For example, when the virtual hint track is a static dashed track, if it is detected that the gesture currently performed by the first object has completed part of the graphic, the completion progress may be displayed synchronously on the virtual hint track, for example by rendering the completed part of the track as a solid line. The above is merely an example and is not limited in any way in this embodiment; a simplified sketch of the tracking and progress computation follows.
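The sketch below illustrates one way the key-point tracking and progress display just described could be computed. It is a minimal, assumption-laden example: the hand_detector interface and the choice of the index fingertip as the representative key point are placeholders, not details fixed by this application.

```python
import math

# Hypothetical sketch: building the action track from per-frame hand
# key points and estimating completion progress against the hint track.

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def track_action_trajectory(frames, hand_detector):
    trajectory = []
    for frame in frames:
        hand = hand_detector.detect(frame)  # placeholder detector
        if hand is None:
            continue  # the hand left the field of view in this frame
        # Use one representative key point (e.g. the index fingertip)
        trajectory.append(hand.keypoint("index_tip"))
    return trajectory

def completion_progress(trajectory, hint_track, tolerance=0.05):
    # Fraction of hint-track points already covered by the gesture;
    # covered points could then be re-rendered as a solid line
    covered = sum(
        1 for p in hint_track
        if any(distance(p, q) < tolerance for q in trajectory)
    )
    return covered / len(hint_track)
```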
Optionally, in this embodiment, if the action track of the gesture action of the first object is detected to match the virtual hint track, it is determined that the drawing of the graphic to be interacted with is complete, and interaction prompt information may be sent to at least one second client, where the interaction prompt information may include, but is not limited to: a group of image sets containing the action track of the gesture action of the first object, and related prompt information, where the virtual hint track is hidden in the group of image sets, and the related prompt information is used to prompt the second object of the second client to input the graphic identifier corresponding to the action track, so as to complete the object interaction task initiated by the first client. The related prompt information may include, but is not limited to, at least one of the following: text prompt information, image prompt information, animated-image prompt information, video prompt information, voice prompt information, and so on.
It should be noted that, in this embodiment, after the interaction prompt information is received, the second object using the second client may directly type the graphic identifier corresponding to the action track in an input window, or may input it by voice, in which case a conversion control in the second client completes the speech-to-text conversion.
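For illustration, the interaction prompt information described above might be serialized roughly as follows; every field name here is an assumption made for the example, not a format defined by the application.

```python
# Hypothetical shape of the interaction prompt information sent to the
# second clients (field names are illustrative, not from the patent).

interaction_prompt = {
    "task_id": "task-001",
    # images of the first object's gesture, with the hint track hidden
    "image_set": ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"],
    # related prompt information: text, image, animation, video or voice
    "prompt": {
        "type": "text",
        "content": "Guess the graphic the anchor drew and type its name!",
    },
}
```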
Further, in this embodiment, after the second client finishes inputting the graphic identifier, the identifier is sent to the server. The server analyzes and counts the graphic identifiers returned by the second clients: it determines whether each returned graphic identifier matches the target graphic identifier provided by the first client; for the matching ones, it sorts the target accounts used to log in to those second clients by each second client's return time to obtain an account sequence; and it pushes the account sequence to the first client as the interaction result for display. The first client thus displays the interaction result intuitively, completing the two-way interaction between the first client and the second clients and achieving the effects of expanding the interaction channel in the live-broadcast scene and enriching the diversity of interaction.
Specifically, the description is given with reference to the example shown in FIG. 3. During live broadcasting, the anchor triggers an object interaction request through the anchor client, where the object interaction request carries the target graphic identifier of the graphic to be interacted with (e.g., identifier "01" for a heart). In response to the object interaction request, a virtual hint track is generated and displayed at the anchor client, and images containing the anchor are then acquired through the camera of the terminal device where the anchor client is located, so that the anchor's gesture action can be recognized from the images. The images are sent to the viewer clients, and the answers returned by the viewers through their clients (the graphic identifier corresponding to the action track of the anchor's gesture action) are acquired. The server then compares the returned answers through intelligent operation and sends the comparison result (e.g., an answer list) to the anchor client, which displays the answer list, thereby completing the object interaction task triggered by the object interaction request.
According to the embodiment provided by the present application, after an object interaction request triggered by a first object in a first client and carrying the target graphic identifier of a graphic to be interacted with is obtained, in response to the object interaction request, a scene picture of the environment in which the first object is located is displayed in the first client through Augmented Reality (AR) technology, with the virtual prompt track corresponding to the graphic to be interacted with displayed superimposed on it, so as to prompt the first object to complete a gesture action along the virtual prompt track. Further, the first client sends the interaction prompt information to at least one second client to prompt the second object using the second client to input the graphic identifier matched with the action track of the gesture action, and the interaction result returned by the second client is displayed in the first client. The first client and the second client in the live-broadcast scene thereby achieve two-way interaction, which expands the interaction channel between them, enriches the interaction modes available during live broadcasting, diversifies the interaction, and overcomes the problem of low interaction efficiency caused by one-way information transfer in the related live-broadcast technology.
As an alternative solution, in response to the object interaction request, displaying a scene picture of the environment where the first object is located on the first client, and displaying, in a superimposed manner, a virtual hint track matched with the target graphic identifier on the scene picture includes:
S1, in response to the object interaction request, calling a camera in the terminal device where the first client is located to acquire a scene picture of the environment in which the first object is currently located;
s2, displaying the acquired scene picture in the first client;
s3, determining the superposition display position of the virtual prompting track on the scene picture in the first client;
and S4, displaying the virtual prompting track on the superposition display position.
It should be noted that displaying, in the first client, the scene picture of the environment in which the first object is currently located as acquired by the camera, together with the virtual hint track corresponding to the graphic to be interacted with, adds a virtual track to the real environment using AR technology, thereby realizing interaction between the real and the virtual.
Further, in this embodiment, the superimposed display position of the virtual hint track may be configured in, but not limited to, the following ways:
1) Configured in advance in the function configuration interface of the first client, before the object interaction request is triggered. That is, the superimposed display position of the virtual hint track corresponding to the graphic to be interacted with is uniformly configured in advance, e.g., the centre of the screen or the lower left of the screen.
2) Configured after the object interaction request is triggered. That is, the superimposed display position of the virtual hint track may be flexibly selected and configured according to the position where the first object actually appears in the scene, so as to prevent the virtual hint track from occluding the first object.
In this embodiment, determining the superimposed display position of the virtual hint track may include, but is not limited to, determining the display coordinates of the virtual hint track, so that the virtual hint track is accurately displayed superimposed on the acquired scene picture; a simple sketch of such a computation follows.
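The following sketch shows one plausible way to pick the overlay coordinates under the second configuration above, placing the track on the half of the frame opposite the detected subject. The bounding-box representation and the placement rule are assumptions made purely for illustration.

```python
# Hypothetical sketch: choosing display coordinates for the virtual hint
# track so it does not occlude the first object (rule is illustrative).

def choose_overlay_position(frame_size, subject_bbox, track_size,
                            preset=None):
    """Return the top-left display coordinates of the hint track."""
    if preset is not None:
        return preset  # position configured in advance (e.g. centre)
    w, h = frame_size
    x0, _, x1, _ = subject_bbox  # (left, top, right, bottom) of subject
    subject_cx = (x0 + x1) / 2
    # Place the track on the half of the frame opposite the subject
    cx = w * 0.25 if subject_cx > w / 2 else w * 0.75
    cy = h / 2
    return (cx - track_size[0] / 2, cy - track_size[1] / 2)

# Example: a 1280x720 frame with the subject on the left half
print(choose_overlay_position((1280, 720), (100, 80, 500, 700), (200, 180)))
```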
The specific description is given with reference to FIG. 4. Suppose the first object logs in to the first client (the anchor client) with the account "Xiaohei" for live broadcasting (assume the current live time is 11:10 a.m.). The second objects following the first object include Xiaobai and Xiaohong; that is, Xiaobai and Xiaohong each watch the live content through their logged-in second clients.
Further, after Xiaohei triggers the object interaction request and the target graphic identifier of the graphic to be interacted with is determined to be "heart", the camera in the terminal device where the first client is located collects a scene picture of the environment in which Xiaohei is currently located, the superimposed display position of the virtual hint track corresponding to the "heart" in the scene picture is determined, and the first client then displays the interface shown in FIG. 4: the virtual hint track corresponding to the "heart" (the dashed track shown in the figure) is displayed in the centre of the scene picture.
According to the embodiment provided by the present application, after the camera in the terminal device where the first client is located is called to collect the scene picture of the environment in which the first object is currently located and the collected scene picture is displayed, the display coordinates of the superimposed display position of the virtual hint track on the scene picture are determined, so that the virtual hint track can be displayed accurately at those coordinates without occluding other important content in the scene picture.
As an alternative, before sending the interaction prompt information to the at least one second client, the method further includes:
s1, calling a camera in terminal equipment where a first client is located to acquire an image sequence corresponding to a first object;
s2, detecting gesture actions of the first object in the image sequence.
In this embodiment, the camera invoked to detect the first object may be, but is not limited to, a depth camera, or two or more ordinary cameras. A depth camera detects depth so as to abstract the skeleton information of the first object's hand and determine the key-point positions corresponding to the hand; the image region containing the hand is then separated from the acquired image, so that the camera can track the movement of the hand's key-point positions, distinguishing the left hand from the right hand and determining the hand's movement track, which makes it convenient to recognize the corresponding gesture action. With two or more cameras, the images acquired at the same moment are compared and their differences are used to calculate depth information, realizing three-dimensional imaging and thereby recognizing the gesture action of the first object; the triangulation relation involved is sketched below.
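With two calibrated cameras, depth follows from the standard pinhole-stereo relation Z = f·B/d (focal length times baseline over disparity). A minimal sketch, with illustrative numbers that are not taken from the application:

```python
# Sketch of recovering depth by triangulation from a stereo pair, as
# used to separate the hand from the background (values illustrative).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole-stereo relation: Z = f * B / d
    if disparity_px <= 0:
        return float("inf")  # zero disparity means a point at infinity
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 6 cm, disparity = 14 px  ->  Z = 3.0 m
print(depth_from_disparity(700.0, 0.06, 14.0))
```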
In addition, in this embodiment, in order to recognize the gesture action of the first object, image recognition may be performed based on, but not limited to, an image sequence acquired by the camera, where the image sequence may contain one image or multiple images. For example, as shown in FIG. 5, a single-pose gesture action (as shown: "like", "follow", "victory", "finger heart") can be recognized directly from the key-point positions. For a more complex gesture action (such as the "heart" shown in FIG. 4), however, recognition has to be based on how the key-point positions change across multiple images acquired in time sequence. The above is merely an example and is not limited in any way in this embodiment.
Optionally, in this embodiment, after detecting the gesture action of the first object in the image sequence, the method further includes:
s21, determining the position of a key point corresponding to the hand of the first object in each image in the image sequence;
s22, tracking the positions of all key points in the image sequence to determine the action track of the gesture action;
S23, determining that the action track of the gesture action matches the virtual hint track when the track similarity between the action track and the virtual hint track is greater than a target threshold.
In this embodiment, while the first object executes the gesture action, the action track of the gesture action is displayed correspondingly in the first client, and when the track similarity between the action track and the virtual hint track is greater than the target threshold, the two are determined to match. The gesture action of the first object may be performed along the virtual hint track, or at another position in the actual environment: the approach provided in this embodiment compares the track similarity of the two and does not constrain how far their display positions overlap. Further, when the display positions of the two do not coincide, the completion progress may still be displayed in the virtual hint track. The display of the completion progress may include, but is not limited to: directly showing the covered portion in the virtual hint track, or displaying a completion percentage. The above is merely an example and is not limited in this embodiment; one possible similarity computation is sketched below.
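The application does not fix a particular similarity measure, so the following is only one plausible construction: both tracks are centred (making the comparison independent of where the gesture was performed, as the text above requires), resampled to the same length, and compared by mean point-wise distance.

```python
import math

# Hypothetical sketch of the S23 similarity test; the centring,
# resampling and distance-to-similarity mapping are all assumptions.

def centered(points):
    # Translate the track so its centroid sits at the origin
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(x - cx, y - cy) for x, y in points]

def resample(points, n):
    # Pick n points spread evenly along the track (index-based; true
    # arc-length resampling would be more faithful)
    step = (len(points) - 1) / (n - 1)
    return [points[round(i * step)] for i in range(n)]

def tracks_match(action_track, hint_track, target_threshold=0.9, n=64):
    a = resample(centered(action_track), n)
    b = resample(centered(hint_track), n)
    mean_dist = sum(math.hypot(px - qx, py - qy)
                    for (px, py), (qx, qy) in zip(a, b)) / n
    similarity = 1.0 / (1.0 + mean_dist)  # map distance into (0, 1]
    return similarity > target_threshold
```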
The following description is given with reference to FIG. 6: after the key-point positions corresponding to the hand of the first object are determined in one image, changes in those key-point positions can be tracked across the subsequent images to determine the action track of the gesture action. As shown in FIG. 6, the dashed line is the virtual hint track corresponding to the "heart" graphic to be interacted with, and the solid line is the action track of the gesture action completed so far by the first object, i.e., the current progress is half of the "heart".
Further, when the track similarity between the action track and the virtual hint track is detected to be greater than the target threshold, that is, when the solid line is detected to have completely covered the dashed line as shown in FIG. 7, it is determined that the action track of the gesture action matches the virtual hint track and that the first object Xiaohei has completed the gesture action for the whole "heart".
According to the embodiment provided by the present application, after the key-point positions corresponding to the hand of the first object are determined in each image of the image sequence, the action track of the gesture action is determined by tracking those key-point positions through the sequence, so that the gesture action is recognized accurately. Further, by comparing the two tracks, the track similarity is used to decide whether the current virtual hint track has been completed, so that sending the interaction prompt information to the second client is triggered automatically once completion is detected, ensuring the objectivity and fairness of the object interaction process.
As an alternative, sending the interaction prompt information to the at least one second client includes:
s1, a group of image sets containing action tracks of gesture actions are sent to at least one second client side, so that a second object inputs graphic identifiers matched with the action tracks of the gesture actions in the second client side, wherein each image in the group of image sets is provided with a default virtual prompt track, and the second object and the first object are associated objects.
It should be noted that, in a live-broadcast application scenario, the association relationship between the second object and the first object may be, but is not limited to, a follow relationship: if the first object is the anchor, the second object is a viewer following the anchor, and the two are thereby associated. The above is merely an example and is not limited in any way in this embodiment.
Specifically, as described in connection with the example shown in FIG. 8, the first client sends a group of image sets containing the action track of the first object's gesture action to the second client for display. Assume the interface shown in FIG. 8 is the live interface presented by the second client logged in by the second object Xiaobai; the action track of the gesture action executed by the first object is presented in this interface, such as the arrowed track shown in FIG. 8, but the virtual hint track presented in the first client is not displayed.
According to the embodiment provided by the present application, after the first client sends the interaction prompt information to the second client, the second client presents a group of image sets containing the action track of the first object's gesture action without presenting the virtual hint track, so that the second object using the second client has to guess the corresponding graphic identifier from the action track presented in the image sets, achieving two-way interaction with the first object using the first client.
As an alternative, before displaying the interaction result returned by at least one second client in the first client, the method includes:
s1, a server acquires a graphic identifier returned by at least one second client;
s2, the server sequentially compares the graphic identifier returned by at least one second client with the target graphic identifier;
S3, the server acquires, according to the comparison result, the target accounts logged in by the second clients whose returned graphic identifiers match the target graphic identifier;
s4, the server sorts the target accounts according to the return time to obtain an account sequence;
s5, the server sends the account number sequence to the first client.
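The server-side steps S1-S5 amount to a filter-and-sort over the viewers' answers. A minimal sketch under assumed data shapes (the response dictionaries below are illustrative, not a format defined by the application):

```python
# Hypothetical sketch of steps S1-S5: keep the answers matching the
# target graphic identifier and rank their accounts by return time.

def build_account_sequence(responses, target_graphic_id):
    # responses: list of dicts like
    #   {"account": "Xiaohong", "graphic_id": "01", "return_time": 0.9}
    matched = [r for r in responses if r["graphic_id"] == target_graphic_id]
    matched.sort(key=lambda r: r["return_time"])  # fastest answer first
    return [r["account"] for r in matched]

# Example mirroring FIG. 10: Xiaohong (0.9 s) ranks ahead of Xiaobai (1.1 s)
print(build_account_sequence(
    [{"account": "Xiaobai", "graphic_id": "01", "return_time": 1.1},
     {"account": "Xiaohong", "graphic_id": "01", "return_time": 0.9}],
    "01",
))  # ['Xiaohong', 'Xiaobai']
```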
Optionally, in this embodiment, displaying, in the first client, the interaction result returned by the at least one second client includes at least one of:
1) Directly displaying an account number sequence in a play window of a first client;
2) Creating a popup window in the first client to display an account sequence;
3) A sub-page is created in the first client to display an account sequence.
The specific description is given with reference to the examples shown in FIGS. 9-10. Assume that the interface shown in FIG. 9 is the live interface presented by the second client logged in by the second object Xiaobai, in which the action track of the gesture action executed by the first object Xiaohei is presented. Further, as shown in FIG. 9, the second object Xiaobai inputs through a dialog box the graphic identifier "heart" corresponding to the observed action track.
Then, in this embodiment, the server may obtain the graphic identifiers returned by the second clients, compare each returned graphic identifier with the target graphic identifier in turn, and determine the target accounts logged in by the successfully matched second clients. Assuming the target accounts include "Xiaobai" and "Xiaohong", the accounts are further sorted by return time to obtain an account sequence, which is sent to the first client for display. As shown in FIG. 10, Xiaohong returned the graphic identifier "heart" corresponding to the correct answer within 0.9 seconds and is ranked first; Xiaobai returned it within 1.1 seconds and is ranked second.
It should be noted that the processes of obtaining and comparing the graphic identifiers, determining the target accounts, and obtaining the account sequence may also be performed in the first client, which is not limited in this embodiment.
In addition, in this embodiment, the account sequence may also be sent to the second clients for synchronous display; the display in the second client may follow the display in the first client and is not described again here.
According to the embodiment provided by the present application, the graphic identifiers returned by the second clients are compared with the target graphic identifier, and the comparison results are counted to obtain the statistical result, thereby generating the interaction result of the object interaction task triggered by the current first object; the interaction result is returned to the first client for intuitive display, achieving the purpose of two-way interaction.
As an alternative, after displaying the account sequence in the first client, the method further includes:
S1, in response to an operation executed on the account sequence, transferring a target resource to a target account in the account sequence.
It should be noted that, in this embodiment, after obtaining the account sequence, the first client may also, but is not limited to, reward the target accounts in the account sequence, for example by transferring certain target resources. In this embodiment, the target resources may include, but are not limited to, virtual resources in a virtual scene, such as virtual currency and virtual gifts. The reward content and the reward manner are not limited in this embodiment.
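As a minimal illustration of such a transfer, assuming a hypothetical wallet interface and arbitrary reward amounts (neither is specified by the application):

```python
# Hypothetical sketch: rewarding the fastest correct viewers with a
# virtual resource (wallet API and amounts are illustrative only).

def reward_accounts(wallet, account_sequence, amounts=(100, 50, 20)):
    # Pair the top-ranked accounts with descending reward amounts
    for account, amount in zip(account_sequence, amounts):
        wallet.transfer(to_account=account, resource="virtual_coin",
                        amount=amount)
```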
As an alternative, acquiring the object interaction request triggered by the first object in the first client includes:
S1, displaying a candidate graph set in a first client;
S2, obtaining the graphic corresponding to the target graphic identifier selected from the candidate graphic set, as the graphic to be interacted with;
and S3, generating an object interaction request by using the target graphic identifier.
This is described in detail with reference to FIGS. 11-12. As shown in FIG. 11, in the playing interface of the first client logged in by the first object Xiaohei, a function-key floating layer is triggered and displayed, in which the key icon of the object interaction function is shown, as in the dashed box in FIG. 11.
Further, in response to the operation performed on the key icon, a group of candidate graphics is randomly determined from the graphics database as the candidate graphic set and displayed in the first client. As shown in FIG. 12, the candidate graphic set includes: a heart, a five-pointed star, a small fish, a house, and so on. The graphic to be interacted with (e.g., the heart) is then selected from the candidate graphic set, and the object interaction request is generated using its target graphic identifier.
According to the embodiment of the present application, candidate graphics randomly drawn from the graphics database are displayed in the first client so that the graphic to be interacted with can be selected conveniently, enabling the object interaction task with the second client to be triggered quickly.
According to another aspect of the embodiment of the invention, an object interaction method applied to the second client is also provided. As shown in fig. 13, the method includes:
S1302: acquiring interaction prompt information sent by a first client, where the interaction prompt information includes a group of images containing the action track of a gesture action performed by a first object using the first client;
S1304: displaying the group of images in the second client;
S1306: inputting, in the second client, a graphic identifier matched with the action track of the gesture action of the first object;
S1308: sending the graphic identifier to the server.
For the object interaction process implemented in the second client provided in this embodiment, reference may be made to the above embodiments, and details are not described here again.
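The second-client flow S1302-S1308 can be summarized as a single handler. The callback-style interface below (display, ask_user, send_to_server) is an assumption made for the sketch; a real client would wire these to its UI and network layers.

```python
def handle_interaction_prompt(images: list, display, ask_user, send_to_server) -> None:
    """S1302/S1304: receive and display the image set of the anchor's gesture
    trajectory; S1306: read the graphic identifier the viewer guesses;
    S1308: return that identifier to the server."""
    for frame in images:          # S1304: play back the gesture images
        display(frame)
    graphic_id = ask_user("Which graphic did the anchor draw?")  # S1306
    send_to_server({"graphic_id": graphic_id})                   # S1308

# Example with stub callbacks standing in for real UI and network code.
handle_interaction_prompt(
    images=["frame_001", "frame_002"],
    display=lambda f: print("showing", f),
    ask_user=lambda prompt: "love heart",
    send_to_server=lambda payload: print("sent", payload),
)
```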
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
According to another aspect of the embodiment of the present invention, there is also provided an object interaction device for implementing the above object interaction method. As shown in fig. 14, the apparatus includes:
1) a first obtaining unit 1402, configured to obtain an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with;
2) a first display unit 1404, configured to respond to the object interaction request by displaying, on the first client, a scene picture of the environment where the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier;
3) a first sending unit 1406, configured to send interaction prompt information to at least one second client when an action track corresponding to a gesture action of the first object is detected and matches the virtual prompt track;
4) a second display unit 1408, configured to display, in the first client, an interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
The object interaction device provided in this embodiment may be applied to, but is not limited to, the first client; for specific examples, reference may be made to the above embodiments, and details are not described here again.
According to another aspect of the embodiment of the present invention, there is also provided an object interaction device for implementing the above object interaction method. As shown in fig. 15, the apparatus includes:
1) an obtaining unit 1502, configured to obtain interaction prompt information sent by a first client, where the interaction prompt information includes a group of images containing the action track of a gesture action performed by a first object using the first client;
2) a display unit 1504, configured to display the group of images in a second client;
3) an input unit 1506, configured to input, in the second client, a graphic identifier matched with the action track of the gesture action of the first object;
4) a sending unit 1508, configured to send the graphic identifier to the server.
The object interaction device provided in this embodiment may be applied to, but is not limited to, the second client; for specific examples, reference may be made to the above embodiments, and details are not described here again.
According to another aspect of the embodiment of the present invention, there is also provided an object interaction system for implementing the above object interaction method. The system includes:
1) a first client, configured to obtain an object interaction request triggered by a first object in the first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with; further configured to respond to the object interaction request by displaying, on the first client, a scene picture of the environment where the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier; and further configured to send interaction prompt information to at least one second client when an action track corresponding to a gesture action of the first object is detected and matches the virtual prompt track;
2) the second client, configured to obtain the interaction prompt information; further configured to obtain, according to the interaction prompt information, a graphic identifier that is input by a second object and matched with the action track of the gesture action; and further configured to send the matched graphic identifier to the server;
3) the server, configured to obtain the graphic identifiers returned by the second clients, perform statistics to obtain a statistical result, and send the statistical result to the first client;
4) the first client, further configured to display an interaction result returned by the at least one second client, where the interaction result includes the statistical result.
The internal structure of the object interaction system provided in this embodiment may be, but is not limited to, that shown in fig. 1; for examples of the specific interaction process, reference may be made to the above embodiments, and details are not described here again.
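The track-matching condition that the first client applies before notifying the second clients (a track similarity above a target threshold, as claimed below) is not tied to any particular metric in this application. The sketch below assumes uniform resampling and an average point-to-point distance purely for illustration; all function names are assumptions.

```python
import math

def resample(points, n=32):
    """Pick n roughly evenly spaced points from a trajectory."""
    step = max(len(points) - 1, 1) / (n - 1)
    return [points[min(round(i * step), len(points) - 1)] for i in range(n)]

def track_similarity(action_track, prompt_track) -> float:
    """Similarity in (0, 1]; 1.0 means the tracks coincide."""
    a, b = resample(action_track), resample(prompt_track)
    mean_dist = sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return 1.0 / (1.0 + mean_dist)

def tracks_match(action_track, prompt_track, threshold=0.8) -> bool:
    """Accept when the similarity exceeds the target threshold."""
    return track_similarity(action_track, prompt_track) > threshold

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(tracks_match(square, square))  # True: identical tracks match
```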
According to a further aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above-described object interaction method, as shown in fig. 16, the electronic device comprising a memory 1602 and a processor 1604, the memory 1602 having stored therein a computer program, the processor 1604 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1: acquiring an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with;
S2: in response to the object interaction request, displaying, on the first client, a scene picture of the environment where the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier;
S3: when an action track corresponding to the gesture action of the first object is detected and matches the virtual prompt track, sending interaction prompt information to at least one second client;
S4: displaying, in the first client, an interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 16 is only schematic, and the electronic device may also be a terminal device such as a smart phone (e.g. an Android phone or an iOS phone), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, and the like. Fig. 16 does not limit the structure of the electronic device. For example, the electronic device may further include more or fewer components (e.g., a network interface) than shown in fig. 16, or have a different configuration from that shown in fig. 16.
The memory 1602 may be used to store software programs and modules, such as the program instructions/modules corresponding to the object interaction method and device in the embodiments of the present invention. The processor 1604 executes the software programs and modules stored in the memory 1602 to perform various functional applications and data processing, i.e., to implement the object interaction method described above. The memory 1602 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1602 may further include memory located remotely from the processor 1604, which may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1602 may be used, but is not limited to, for storing information such as the graphic to be interacted with, graphic identifiers and their correspondences, and the acquired scene pictures. As an example, as shown in fig. 16, the memory 1602 may include, but is not limited to, the first obtaining unit 1402, the first display unit 1404, the first sending unit 1406, and the second display unit 1408 of the object interaction device. It may further include, but is not limited to, other module units of the object interaction device, which are not described in detail in this example.
Optionally, the transmission device 1606 is configured to receive or send data via a network. Specific examples of the network may include wired and wireless networks. In one example, the transmission device 1606 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the internet or a local area network. In one example, the transmission device 1606 is a radio frequency (RF) module, which is configured to communicate with the internet wirelessly.
In addition, the electronic device further includes: a display 1608 for displaying the virtual prompt track, the interaction result, and the like; and a connection bus 1610 for connecting the module components of the above electronic device.
According to a further aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above-mentioned object interaction method, as shown in fig. 17, the electronic device comprising a memory 1702 and a processor 1704, the memory 1702 having stored therein a computer program, the processor 1704 being arranged to execute the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1: acquiring interaction prompt information sent by a first client, where the interaction prompt information includes a group of images containing the action track of a gesture action performed by a first object using the first client;
S2: displaying the group of images in a second client;
S3: inputting, in the second client, a graphic identifier matched with the action track of the gesture action of the first object;
S4: sending the graphic identifier to the server.
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 17 is only schematic, and the electronic device may also be a terminal device such as a smart phone (e.g. an Android phone or an iOS phone), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, and the like. Fig. 17 does not limit the structure of the electronic device. For example, the electronic device may further include more or fewer components (e.g., a network interface) than shown in fig. 17, or have a different configuration from that shown in fig. 17.
The memory 1702 may be used to store software programs and modules, such as the program instructions/modules corresponding to the object interaction method and device in the embodiments of the present invention. The processor 1704 executes the software programs and modules stored in the memory 1702 to perform various functional applications and data processing, i.e., to implement the object interaction method described above. The memory 1702 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1702 may further include memory located remotely from the processor 1704, which may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1702 may be used, but is not limited to, for storing information such as graphic identifiers. As an example, as shown in fig. 17, the memory 1702 may include, but is not limited to, the obtaining unit 1502, the display unit 1504, the input unit 1506, and the sending unit 1508 of the object interaction device. It may further include, but is not limited to, other module units of the object interaction device, which are not described in detail in this example.
Optionally, the transmission device 1706 is configured to receive or send data via a network. Specific examples of the network may include wired and wireless networks. In one example, the transmission device 1706 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the internet or a local area network. In one example, the transmission device 1706 is a radio frequency (RF) module, which is configured to communicate with the internet wirelessly.
In addition, the electronic device further includes: a display 1708 for displaying the interaction prompt information and the like; and a connection bus 1710 for connecting the module components of the above electronic device.
According to a further aspect of embodiments of the present invention, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1: acquiring an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with;
S2: in response to the object interaction request, displaying, on the first client, a scene picture of the environment where the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier;
S3: when an action track corresponding to the gesture action of the first object is detected and matches the virtual prompt track, sending interaction prompt information to at least one second client;
S4: displaying, in the first client, an interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be further configured to store a computer program for performing the steps of:
S1: acquiring interaction prompt information sent by a first client, where the interaction prompt information includes a group of images containing the action track of a gesture action performed by a first object using the first client;
S2: displaying the group of images in a second client;
S3: inputting, in the second client, a graphic identifier matched with the action track of the gesture action of the first object;
S4: sending the graphic identifier to the server.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware of a terminal device, and the program may be stored in a computer-readable storage medium. The storage medium may include: a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and the like.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments, if implemented in the form of software functional units and sold or used as independent products, may be stored in the above computer-readable storage medium. Based on such understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the foregoing embodiments of the present application, each embodiment has its own emphasis; for a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The device embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.

Claims (15)

1. An object interaction method, comprising:
acquiring an object interaction request triggered by a first object in a first client, wherein the object interaction request carries a target graphic identifier of a graphic to be interacted with;
in response to the object interaction request, displaying, on the first client, a scene picture of the environment where the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier;
when it is detected that an action track corresponding to a gesture action of the first object matches the virtual prompt track, sending interaction prompt information to at least one second client;
and displaying, in the first client, an interaction result returned by the at least one second client, wherein the interaction result comprises a statistical result of the graphic identifiers returned by the second client.
2. The method of claim 1, wherein the displaying, on the first client in response to the object interaction request, a scene picture of the environment where the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier comprises:
in response to the object interaction request, invoking a camera of the terminal equipment where the first client is located to acquire the scene picture of the environment where the first object is currently located;
displaying the acquired scene picture in the first client;
determining, in the first client, a superposition display position of the virtual prompt track on the scene picture;
and displaying the virtual prompt track at the superposition display position.
3. The method of claim 1, further comprising, prior to said sending the interaction prompt to the at least one second client:
invoking a camera in terminal equipment where the first client is located to acquire an image sequence corresponding to the first object;
and detecting a gesture action of the first object in the image sequence.
4. A method according to claim 3, further comprising, after detecting a gesture action of the first object in the sequence of images:
determining a key point position corresponding to the hand of the first object in each image in the image sequence;
tracking the positions of all key points in the image sequence to determine the action track of the gesture action;
and when the track similarity between the action track and the virtual prompt track is greater than a target threshold, determining that the action track of the gesture action matches the virtual prompt track.
5. The method of claim 1, wherein the sending the interaction prompt information to the at least one second client comprises:
and sending a group of images containing the action track of the gesture action to the at least one second client, so that a second object inputs, in the second client, a graphic identifier matched with the action track of the gesture action, wherein the virtual prompt track is omitted from each image in the group of images, and the second object is an object associated with the first object.
6. The method of claim 5, comprising, before displaying the interaction result returned by at least one of the second clients in the first client:
the server acquires at least one graphic identifier returned by the second client;
the server sequentially compares the graphic identifier returned by at least one second client with the target graphic identifier;
the server acquires, according to the comparison result, the target account number logged in at the second client whose returned graphic identifier matches the target graphic identifier;
the server sorts the target account numbers according to the return time to obtain an account number sequence;
and the server sends the account number sequence to the first client.
7. The method of claim 6, wherein displaying the interaction results returned by at least one of the second clients in the first client comprises at least one of:
directly displaying the account number sequence in a play window of the first client;
creating a popup window in the first client to display the account number sequence;
creating a sub-page in the first client to display the account number sequence.
8. The method of claim 6, further comprising, after displaying the sequence of accounts in the first client:
and transferring target resources to the target account in the account sequence in response to the operation executed on the account sequence.
9. The method according to any one of claims 1 to 8, wherein the obtaining an object interaction request triggered by the first object in the first client comprises:
displaying a candidate graphic set in the first client;
acquiring the graphic corresponding to the target graphic identifier selected from the candidate graphic set, as the graphic to be interacted with;
and generating the object interaction request by using the target graphic identifier.
10. An object interaction method, comprising:
acquiring interaction prompt information sent by a first client, wherein the interaction prompt information comprises: a group of images containing the action track of a gesture action of a first object of the first client, wherein the action track is the track of the gesture action performed by the first object following a virtual prompt track, the virtual prompt track is displayed superimposed on a scene picture of the environment where the first object is located, and the scene picture is displayed in the first client;
displaying the group of images in a second client;
inputting, in the second client, a graphic identifier matched with the action track of the gesture action of the first object;
and sending the graphic identifier to a server, so that the server sends the graphic identifier to the first client and a statistical result of the graphic identifiers is displayed in the first client.
11. An object interaction device, comprising:
a first obtaining unit, configured to obtain an object interaction request triggered by a first object in a first client, wherein the object interaction request carries a target graphic identifier of a graphic to be interacted with;
a first display unit, configured to respond to the object interaction request by displaying, on the first client, a scene picture of the environment where the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier;
a first sending unit, configured to send interaction prompt information to at least one second client when an action track corresponding to a gesture action of the first object is detected and matches the virtual prompt track;
and a second display unit, configured to display, in the first client, an interaction result returned by the at least one second client, wherein the interaction result comprises a statistical result of the graphic identifiers returned by the second client.
12. An object interaction device, comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring interaction prompt information sent by a first client, and the interaction prompt information comprises: a group of image sets comprising action tracks of gesture actions of a first object of the first client, wherein the action tracks are action tracks of gesture actions of the first object according to virtual prompt tracks, the virtual prompt tracks are displayed in a superimposed manner in a scene picture of an environment where the first object is located, and the scene picture is displayed in the first client;
a display unit for displaying the group of image sets in a second client;
an input unit, configured to input, in the second client, a graphical identifier that matches the action trajectory of the gesture action of the first object;
and the sending unit is used for sending the graphic identification to a server so that the server can send the graphic identification to the first client and display the statistical result of the graphic identification in the first client.
13. An object interaction system, comprising:
a first client, configured to obtain an object interaction request triggered by a first object in the first client, wherein the object interaction request carries a target graphic identifier of a graphic to be interacted with; further configured to respond to the object interaction request by displaying, on the first client, a scene picture of the environment where the first object is located, and displaying, superimposed on the scene picture, a virtual prompt track matched with the target graphic identifier; and further configured to send interaction prompt information to at least one second client when an action track corresponding to a gesture action of the first object is detected and matches the virtual prompt track;
the second client, configured to obtain the interaction prompt information; further configured to obtain, according to the interaction prompt information, a graphic identifier that is input by a second object and matched with the action track of the gesture action; and further configured to send the matched graphic identifier to the server;
the server is used for acquiring the graphic identifier returned by the second client, carrying out statistics to obtain a statistical result, and sending the statistical result to the first client;
The first client is further configured to display an interaction result returned by at least one second client, where the interaction result includes the statistical result.
14. A computer readable storage medium comprising a stored program, wherein the program when run performs the method of any one of claims 1 to 9 or performs the method of claim 10.
15. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of the claims 1 to 9 or to execute the method according to claim 10 by means of the computer program.
CN201910927056.4A 2019-09-27 2019-09-27 Object interaction method and device, storage medium and electronic device Active CN110703913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910927056.4A CN110703913B (en) 2019-09-27 2019-09-27 Object interaction method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN110703913A CN110703913A (en) 2020-01-17
CN110703913B true CN110703913B (en) 2023-09-26

Family

ID=69196977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910927056.4A Active CN110703913B (en) 2019-09-27 2019-09-27 Object interaction method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110703913B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113325948B (en) * 2020-02-28 2023-02-07 华为技术有限公司 Air-isolated gesture adjusting method and terminal
CN111461005B (en) * 2020-03-31 2023-11-28 腾讯科技(深圳)有限公司 Gesture recognition method and device, computer equipment and storage medium
CN111627039A (en) * 2020-05-09 2020-09-04 北京小狗智能机器人技术有限公司 Interaction system and interaction method based on image recognition
CN112286439A (en) * 2020-11-03 2021-01-29 广东科徕尼智能科技有限公司 Terminal interaction method, device and storage medium based on touch track
CN113253901A (en) * 2021-03-15 2021-08-13 北京字跳网络技术有限公司 Interaction method, device, equipment and storage medium in live broadcast room
CN115469735A (en) * 2021-05-28 2022-12-13 北京字节跳动网络技术有限公司 Interaction method and device based on gestures and client
CN113727147A (en) * 2021-08-27 2021-11-30 上海哔哩哔哩科技有限公司 Gift presenting method and device for live broadcast room
CN114115524B (en) * 2021-10-22 2023-08-18 青岛海尔科技有限公司 Interaction method of intelligent water cup, storage medium and electronic device
CN114363685A (en) * 2021-12-20 2022-04-15 咪咕文化科技有限公司 Video interaction method and device, computing equipment and computer storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103501445A (en) * 2013-10-12 2014-01-08 青岛旲天下智能科技有限公司 Gesture-based interaction two-way interactive digital TV box system and implementation method
CN106804007A (en) * 2017-03-20 2017-06-06 合网络技术(北京)有限公司 The method of Auto-matching special efficacy, system and equipment in a kind of network direct broadcasting
CN107124664A (en) * 2017-05-25 2017-09-01 百度在线网络技术(北京)有限公司 Exchange method and device applied to net cast
WO2017198143A1 (en) * 2016-05-18 2017-11-23 中兴通讯股份有限公司 Video processing method, video playback method, set-top box, and vr apparatus
CN108260021A (en) * 2018-03-08 2018-07-06 乐蜜有限公司 Living broadcast interactive method and apparatus
CN109905754A (en) * 2017-12-11 2019-06-18 腾讯科技(深圳)有限公司 Virtual present collection methods, device and storage equipment
CN110139142A (en) * 2019-05-16 2019-08-16 北京达佳互联信息技术有限公司 Virtual objects display methods, device, terminal and storage medium
CN110267051A (en) * 2019-05-16 2019-09-20 北京奇艺世纪科技有限公司 A kind of method and device of data processing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150288927A1 (en) * 2014-04-07 2015-10-08 LI3 Technology Inc. Interactive Two-Way Live Video Communication Platform and Systems and Methods Thereof

Also Published As

Publication number Publication date
CN110703913A (en) 2020-01-17


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40020924)
SE01 Entry into force of request for substantive examination
GR01 Patent grant