CN110703913A - Object interaction method and device, storage medium and electronic device

Info

Publication number: CN110703913A (granted as CN110703913B)
Application number: CN201910927056.4A
Authority: CN (China)
Inventor: 廖中远
Assignee: Tencent Technology Shenzhen Co Ltd
Original language: Chinese (zh)
Legal status: Active (granted)

Classifications

    • G06F3/017: Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • H04N21/2187: Live feed (selective content distribution; source of audio or video content)
    • H04N21/472: End-user interface for requesting content, additional data or services, or for interacting with content, e.g. content reservation, setting reminders, requesting event notification, manipulating displayed content
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting

Abstract

The invention discloses an object interaction method and apparatus, a storage medium, and an electronic apparatus. The method includes: acquiring an object interaction request triggered by a first object in a first client; in response to the object interaction request, displaying, at the first client, a scene picture of the environment in which the first object is located, and superimposing on the scene picture a virtual prompt track matching a target graphic identifier; sending interaction prompt information to at least one second client when an action track corresponding to a gesture action of the first object is detected to match the virtual prompt track; and displaying, in the first client, an interaction result returned by the at least one second client. The invention solves the technical problem that the approach provided by the related art limits the efficiency of object interaction in live-broadcast scenes.

Description

Object interaction method and device, storage medium and electronic device
Technical Field
The present application relates to the field of computers, and in particular, to an object interaction method and apparatus, a storage medium, and an electronic apparatus.
Background
During live broadcast, in order to enable real-time interaction between an anchor and viewers, the related art provides a way of recognizing the anchor's gestures: a corresponding image is generated from the recognized gesture and pushed to the viewers for display, so as to enhance the visual effect of the gesture and attract more viewers to follow the anchor.
However, in the manner provided by the related art, the anchor unidirectionally transfers image information to the viewers; the viewers can only view that information and cannot engage in a bidirectional interaction process with the anchor. That is, this unidirectional transfer of information limits the efficiency of interaction between the anchor and the viewers in a live-broadcast scene.
No effective solution to the above problems has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide an object interaction method and apparatus, a storage medium, and an electronic apparatus, to at least solve the technical problem that the approach provided by the related art limits the efficiency of object interaction in live-broadcast scenes.
According to one aspect of the embodiments of the present invention, an object interaction method is provided, including: acquiring an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with; in response to the object interaction request, displaying, at the first client, a scene picture of the environment in which the first object is located, and superimposing on the scene picture a virtual prompt track matching the target graphic identifier; sending interaction prompt information to at least one second client when an action track corresponding to a gesture action of the first object is detected to match the virtual prompt track; and displaying, in the first client, an interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
According to another aspect of the embodiments of the present invention, an object interaction method is provided, including: acquiring interaction prompt information sent by a first client, where the interaction prompt information includes a set of images containing a motion trajectory of a gesture action of a first object using the first client; displaying the image set in a second client; inputting, in the second client, a graphic identifier matching the motion trajectory of the gesture action of the first object; and sending the graphic identifier to a server.
According to another aspect of the embodiments of the present invention, an object interaction apparatus is provided, including: a first acquisition unit configured to acquire an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with; a first display unit configured to, in response to the object interaction request, display at the first client a scene picture of the environment in which the first object is located, and superimpose on the scene picture a virtual prompt track matching the target graphic identifier; a first sending unit configured to send interaction prompt information to at least one second client when an action track corresponding to a gesture action of the first object is detected to match the virtual prompt track; and a second display unit configured to display, in the first client, an interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
According to another aspect of the embodiments of the present invention, an object interaction apparatus is provided, including: an acquisition unit configured to acquire interaction prompt information sent by a first client, where the interaction prompt information includes a set of images containing a motion trajectory of a gesture action of a first object using the first client; a display unit configured to display the image set in a second client; an input unit configured to input, in the second client, a graphic identifier matching the motion trajectory of the gesture action of the first object; and a sending unit configured to send the graphic identifier to a server.
According to another aspect of the embodiments of the present invention, an object interaction system is provided, including a first client, at least one second client, and a server. The first client is configured to acquire an object interaction request triggered by a first object in the first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with; to respond to the object interaction request by displaying a scene picture of the environment in which the first object is located and superimposing on the scene picture a virtual prompt track matching the target graphic identifier; and to send interaction prompt information to the at least one second client when an action track corresponding to a gesture action of the first object is detected to match the virtual prompt track. The second client is configured to acquire the interaction prompt information, to acquire a graphic identifier, input by a second object according to the interaction prompt information, that matches the action track of the gesture action, and to send that graphic identifier to the server. The server is configured to acquire the graphic identifiers returned by the second clients, compute a statistical result, and send the statistical result to the first client. The first client is further configured to display an interaction result returned by the at least one second client, where the interaction result includes the statistical result.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above object interaction method when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the object interaction method through the computer program.
In the embodiments of the invention, after an object interaction request triggered by a first object in a first client and carrying a target graphic identifier of a graphic to be interacted with is acquired, the request is responded to by displaying, in the first client through Augmented Reality (AR) technology, a scene picture of the environment in which the first object is located, with a virtual prompt track corresponding to the graphic superimposed on it, so as to prompt the first object to complete a gesture action along the virtual prompt track. The first client then sends interaction prompt information to at least one second client, prompting a second object of the second client to input a graphic identifier matching the action track of the gesture action, and displays the interaction result returned by the second client. The first client and the second client in a live-broadcast scene thereby achieve bidirectional interaction, which expands the interaction channels between them, enriches the interaction modes during live broadcast, achieves a diversified interaction effect, and solves the problem of low interaction efficiency caused by unidirectional information transfer in related live-broadcast technologies.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an application environment for an alternative method of object interaction in accordance with an embodiment of the present invention;
FIG. 2 is a flow diagram of an alternative method of object interaction, according to an embodiment of the invention;
FIG. 3 is a flow diagram of an alternative method of object interaction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative method of object interaction, according to an embodiment of the invention;
FIG. 5 is a schematic diagram of another alternative object interaction method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of yet another alternative object interaction method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of yet another alternative object interaction method according to an embodiment of the invention;
FIG. 8 is a schematic diagram of yet another alternative object interaction method in accordance with embodiments of the invention;
FIG. 9 is a schematic diagram of yet another alternative object interaction method in accordance with embodiments of the invention;
FIG. 10 is a schematic diagram of yet another alternative object interaction method in accordance with embodiments of the invention;
FIG. 11 is a schematic diagram of yet another alternative method of object interaction in accordance with embodiments of the invention;
FIG. 12 is a schematic diagram of yet another alternative object interaction method in accordance with embodiments of the invention;
FIG. 13 is a flow chart of yet another alternative method of object interaction in accordance with an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of an alternative object interaction device according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of an alternative object interaction device, according to an embodiment of the present invention;
FIG. 16 is a schematic diagram of an alternative electronic device according to an embodiment of the invention;
FIG. 17 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, an object interaction method is provided. As an optional implementation, the object interaction method may be applied, but is not limited, to the object interaction system in the network environment shown in fig. 1, where the object interaction system may include, but is not limited to: a terminal device 102, a network 110, a server 112, and a terminal device 120. Assume that the terminal device 102 runs a first client logged in with a first account (e.g., ID-1), and the terminal device 120 runs a second client logged in with a second account (e.g., ID-2). The first client may be the client of an anchor during live broadcast, and the second client may be the client of a viewer following that anchor; the first client and the second client have an association relationship.
The terminal device 102 includes a human-computer interaction screen 104, a processor 106, and a memory 108. The human-computer interaction screen 104 is configured to acquire the object interaction request through a human-computer interaction interface, to display a scene picture of the environment in which the first object using the first client is located, to superimpose on the scene picture a virtual prompt track corresponding to the graphic to be interacted with, and to display the interaction result. The processor 106 is configured to control the display process in response to the object interaction request, to detect the gesture action of the first object, and to send interaction prompt information to at least one second client when the action track of the gesture action is detected to match the virtual prompt track. The memory 108 is configured to store the scene picture of the environment in which the first object is located, the mapping between graphics to be interacted with and their graphic identifiers, and the interaction prompt information.
The server 112 includes a database 114 and a processing engine 116, where the processing engine 116 is configured to obtain the graphic identifiers returned by the second clients and compute the statistical result. The database 114 is configured to store the statistical result. The processing engine 116 is further configured to return the statistical result to the terminal device 102 where the first client is located.
The terminal device 120 includes a human-computer interaction screen 122, a processor 124, and a memory 126. The human-computer interaction screen 122 is configured to display the interaction prompt information. The processor 124 is configured to acquire the graphic identifier input by the second object and to send it to the server 112, so that the server 112 can send the statistical result to the terminal device 102. The memory 126 is configured to store the interaction prompt information and the input graphic identifier.
The specific process includes the following steps. In steps S102-S106, an interaction interface of the first client is displayed on the human-computer interaction screen 104 in the terminal device 102, and an object interaction request triggered by an operation is obtained, where the object interaction request carries a target graphic identifier of a graphic to be interacted with (e.g., the "love heart" graphic shown in fig. 1). In response to the object interaction request, a scene picture of the environment in which the first object using the first client is located is displayed, and a virtual prompt track matching the target graphic identifier is superimposed on the scene picture. When the action track of the gesture action of the first object is detected to match the virtual prompt track, it is determined that interaction prompt information is to be sent to at least one second client. The interaction prompt information is used to prompt a second object of the second client to input a graphic identifier matching the action track of the gesture action. The second object is associated with the first object: the second object follows the first object, i.e., is a fan of the first object.
In step S108, the interaction prompt information is sent through the network 110 to at least one terminal device 120 where a second client is located (one terminal device 120 is shown in fig. 1). Then, in steps S110-S112, the terminal device 120 prompts the second object, through the human-computer interaction screen 122, to input a graphic identifier matching the action track of the gesture action, and obtains the input graphic identifier. In step S114, the input graphic identifier is sent to the server 112 through the network 110.
After obtaining the graphic identifiers returned by the second clients, the server 112 performs step S116 to compile statistics on them and obtain a statistical result. Then, in step S118, the interaction result including the statistical result is sent through the network 110 to the terminal device 102 where the first client is located. After the terminal device 102 obtains the interaction result, in step S120, the interaction result is displayed on the human-computer interaction screen 104 (for example, as shown in fig. 1, the return time of the second object corresponding to each second client is displayed).
It should be noted that, in this embodiment, after an object interaction request triggered by a first object in a first client and carrying a target graphic identifier of a graphic to be interacted with is acquired, the request is responded to by displaying, in the first client through Augmented Reality (AR) technology, a scene picture of the environment in which the first object is located, with a virtual prompt track corresponding to the graphic superimposed on it, so as to prompt the first object to complete a gesture action along the virtual prompt track. The first client then sends interaction prompt information to at least one second client, prompting a second object of the second client to input a graphic identifier matching the action track of the gesture action, and displays the interaction result returned by the second client. The first client and the second client in a live-broadcast scene thereby achieve bidirectional interaction, which expands the interaction channels between them, enriches the interaction modes during live broadcast, achieves a diversified interaction effect, and solves the problem of low interaction efficiency caused by unidirectional information transfer in related live-broadcast technologies.
Optionally, in this embodiment, the object interaction method may be applied, but is not limited, to terminal devices such as mobile phones, tablet computers, notebook computers, PCs, and other terminal devices that support running an application client. The server and the terminal device may exchange data through a network, which may include, but is not limited to, a wireless network or a wired network. The wireless network includes Bluetooth, WiFi, and other networks enabling wireless communication; the wired network may include, but is not limited to, wide area networks, metropolitan area networks, and local area networks. The above is merely an example, and this embodiment is not limited in this regard.
As an optional implementation, as shown in fig. 2, the object interaction method includes:
S202, acquiring an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graphic to be interacted with;
S204, in response to the object interaction request, displaying at the first client a scene picture of the environment in which the first object is located, and superimposing on the scene picture a virtual prompt track matching the target graphic identifier;
S206, sending interaction prompt information to at least one second client when an action track corresponding to the gesture action of the first object is detected to match the virtual prompt track;
and S208, displaying, in the first client, an interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifiers returned by the second clients.
Optionally, in this embodiment, the object interaction method may be, but is not limited to being, applied in a live-broadcast scene, where the live-broadcast scene may be provided in the client of at least one of the following applications: a live-streaming application, an audio playing application, a video playing application, a space-sharing application, and the like. The video playing application may include, but is not limited to, long-video platform applications (e.g., platforms providing various composite videos with longer playback durations) and short-video sharing platform applications (e.g., platforms providing single videos with playback durations below a predetermined threshold). That is, the entry to the live-broadcast scene may be, but is not limited to being, set in different existing application clients in the form of a jump link; this is merely an example and is not limited in this embodiment. In this embodiment, the first client may be, but is not limited to, an anchor client providing the played content, and the second client may be, but is not limited to, a viewer client viewing the played content. In other words, the second object using the second client is a fan of the first object using the first client, and the two have an association relationship.
Optionally, in this embodiment, obtaining the object interaction request triggered by the first object in the first client may include, but is not limited to: displaying a trigger key for the object interaction function in a human-computer interaction interface of the first client; starting the object interaction function in response to an operation performed on the trigger key; randomly determining a group of candidate graphics from a graphic database as a candidate graphic set; displaying the candidate graphic set in the first client; acquiring a target graphic selected from the candidate graphic set as the graphic to be interacted with; and generating the object interaction request with the target graphic identifier of the target graphic.
It should be noted that, in this embodiment, the candidate graphics in the candidate graphic set may be, but are not limited to, simple single-stroke line drawings with pre-configured strokes, so that the first object can trace a candidate graphic with gesture actions. For example, the graphics may be, but are not limited to, simple drawings such as a triangle, a five-pointed star, or a love heart. In addition, the candidate graphic set may contain one or more candidate graphics, and the number may be configured flexibly; this embodiment is not limited in this regard.
In this embodiment, the display mode of the trigger key for triggering the object interaction function may include, but is not limited to, at least one of the following: 1) displaying the key icon of the trigger key in a function-key floating layer provided by the first client; 2) displaying the key icon of the trigger key directly in the playing interface of the first client. In addition, the object interaction function may also be triggered with the key icon hidden, by performing a shortcut operation on the playing interface of the first client, where the shortcut operation may include, but is not limited to, at least one of: double-clicking the playing interface, performing a sliding operation in the playing interface along a set direction, and the like. The above are merely examples; this embodiment does not limit the manner of triggering the object interaction function or of displaying the key icon of its trigger key.
Optionally, in this embodiment, after the object interaction request is obtained, Augmented Reality (AR) technology may be used, but is not limited to being used, to display the scene picture of the environment in which the first object is located with the virtual prompt track corresponding to the graphic to be interacted with superimposed on it. AR is a technology that seamlessly integrates real-world information with virtual-world information: virtual information is applied to the real world through computer simulation and perceived by human senses, so that the real environment and virtual objects are superimposed on the same picture or space in real time and coexist. That is, in the display interface of the first client, while the real scene picture of the environment in which the first object is located is displayed, the virtual prompt track corresponding to the graphic to be interacted with can be superimposed on it, so that the real environment and the virtual track are displayed together in the first client, achieving the purpose of giving the first object a track prompt.
In this embodiment, the display mode of the virtual prompt track may include, but is not limited to: 1) a static graphic, e.g., displaying the virtual prompt track statically as a dashed track; 2) a dynamic graphic, e.g., prompting the drawing process of the graphic to be interacted with by an animation. The above is merely an example and is not limited in this embodiment.
Optionally, in this embodiment, detecting the gesture action of the first object may include, but is not limited to: after the hand of the first object is recognized in a captured image, detecting the keypoint positions corresponding to the hand; and tracking changes in the keypoint positions to determine the action track of the gesture action of the first object.
It should be noted that the keypoint positions may be, but are not limited to, one or more positions set according to the bones of the hand. By tracking changes in the keypoint positions, the action track of the gesture action performed by the first object can be determined, and the gesture action thus recognized. Further, by comparing the action track of the gesture action with the virtual prompt track, it can be determined whether the first object has completed the graphic to be interacted with as prompted. While the first object performs the gesture action, its completion progress may also be, but is not limited to being, displayed synchronously on the virtual prompt track. For example, when the virtual prompt track is a static dashed track and the gesture action currently performed by the first object is detected to have completed part of the graphic, the completion progress may be displayed synchronously on the virtual prompt track, e.g., by rendering the completed part of the track as a solid line. The above is merely an example and is not limited in this embodiment.
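As a rough illustration of this tracking step, the sketch below accumulates one reference keypoint per frame into a trajectory. The detector stub, the dict-based frame format, and the fingertip index are all assumptions for illustration; the patent does not prescribe any particular hand-landmark model.

```python
from typing import List, Optional, Sequence, Tuple

Point = Tuple[float, float]

def detect_hand_keypoints(frame: dict) -> Optional[Sequence[Point]]:
    """Stub standing in for any hand-landmark model: return the hand's
    keypoint positions in this frame, or None if no hand is visible."""
    return frame.get("keypoints")

def gesture_trajectory(frames: Sequence[dict], tip_index: int = 8) -> List[Point]:
    """Track one reference keypoint (assumed here to be a fingertip)
    across the image sequence to build the gesture's action track."""
    trajectory: List[Point] = []
    for frame in frames:
        keypoints = detect_hand_keypoints(frame)
        if keypoints is not None and len(keypoints) > tip_index:
            trajectory.append(keypoints[tip_index])
    return trajectory

# Two synthetic frames; a real pipeline would feed camera images instead.
frames = [{"keypoints": [(0.0, 0.0)] * 8 + [(10.0, 20.0)]},
          {"keypoints": [(0.0, 0.0)] * 8 + [(12.0, 24.0)]}]
print(gesture_trajectory(frames))  # [(10.0, 20.0), (12.0, 24.0)]
```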
Optionally, in this embodiment, when the action track of the gesture action of the first object is detected to match the virtual prompt track, the drawing of the graphic to be interacted with is determined to be complete, and interaction prompt information may then be sent to at least one second client, where the interaction prompt information may include, but is not limited to: a set of images containing the action track of the gesture action of the first object, with the virtual prompt track omitted from the image set, and related prompt information used to prompt a second object of the second client to input the graphic identifier corresponding to the action track, so as to complete the object interaction task initiated by the first client. The related prompt information may include, but is not limited to, at least one of: text prompts, image prompts, animated prompts, video prompts, voice prompts, and the like.
It should be noted that, in this embodiment, after receiving the interaction prompt information, the second object using the second client may input the graphic identifier corresponding to the action track directly in an input window, or may input it by voice, with a conversion control in the second client completing the speech-to-text conversion.
In addition, in this embodiment, after the graphic identifier is input at the second client, it is sent to the server. The server analyzes and compiles statistics on the graphic identifiers returned by the second clients: it determines whether each returned graphic identifier matches the target graphic identifier provided by the first client; for those that match, it sorts the target accounts logged in to the corresponding second clients by each second client's return time to obtain an account sequence; and it pushes the account sequence to the first client for display as the interaction result. The first client can thus display the interaction result intuitively, completing the bidirectional interaction between the first client and the second clients, expanding the interaction channels in the live-broadcast scene, and enriching the diversity of interaction.
This is explained with reference to the example shown in fig. 3. In steps S302-S312, the anchor triggers an object interaction request through the anchor client during live broadcast, where the object interaction request carries a target graphic identifier of a graphic to be interacted with (e.g., a love heart identified by "01"). In response to the object interaction request, a virtual prompt track is generated and displayed at the anchor client, and an image containing the anchor is then captured by the camera of the terminal device where the anchor client is located, so that the anchor's gesture action can be recognized from the image. The image is sent to the viewer clients, and the answers returned by the viewers through their viewer clients (the graphic identifier corresponding to the action track of the anchor's gesture action) are obtained. The server then compares the returned answers by an intelligent operation and sends the comparison result (e.g., an answer list) to the anchor client, which displays the answer list, thereby completing the object interaction task triggered by the object interaction request.
According to the embodiments provided in this application, after an object interaction request triggered by a first object in a first client and carrying a target graphic identifier of a graphic to be interacted with is obtained, the request is responded to by displaying, in the first client through Augmented Reality (AR) technology, a scene picture of the environment in which the first object is located, with a virtual prompt track corresponding to the graphic superimposed on it, prompting the first object to complete a gesture action along the virtual prompt track. The first client then sends interaction prompt information to at least one second client, prompting a second object of the second client to input a graphic identifier matching the action track of the gesture action, and displays the interaction result returned by the second client. Bidirectional interaction between the first and second clients in the live-broadcast scene is thereby achieved, expanding their interaction channels, enriching the interaction modes during live broadcast, achieving a diversified interaction effect, and solving the problem of low interaction efficiency caused by unidirectional information transfer in related live-broadcast technologies.
As an optional scheme, responding to the object interaction request by displaying, at the first client, a scene picture of the environment in which the first object is located and superimposing on the scene picture a virtual prompt track matching the target graphic identifier includes:
S1, in response to the object interaction request, invoking a camera in the terminal device where the first client is located to capture a scene picture of the current environment in which the first object is located;
S2, displaying the captured scene picture in the first client;
S3, determining, in the first client, the superimposed display position of the virtual prompt track on the scene picture;
and S4, displaying the virtual prompt track at the superimposed display position.
It should be noted that, while the scene picture of the first object's current environment captured by the camera is displayed in the first client, the virtual prompt track corresponding to the graphic to be interacted with is displayed as well; this display process adds a virtual track to the real environment using AR technology, realizing real-virtual interaction.
In addition, in this embodiment, the superimposed display position of the virtual prompt track may be configured in, but is not limited to, the following ways:
1) Configured in advance in a function configuration interface of the first client, before the object interaction request is triggered. That is, the superimposed display positions of the virtual prompt tracks corresponding to the graphics to be interacted with are configured uniformly in advance, e.g., at the center of the screen or the lower left of the screen.
2) Configured after the object interaction request is triggered. That is, the superimposed display position of the virtual prompt track can be chosen flexibly according to where the first object actually appears in the scene picture, so that the virtual prompt track does not occlude the first object.
In this embodiment, determining the superimposed display position of the virtual prompt track may be, but is not limited to, determining the display coordinates of the virtual prompt track, so that the virtual prompt track is superimposed accurately on the captured scene picture.
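A minimal sketch of what "determining display coordinates" could look like under option 2) above, i.e., keeping the virtual prompt track away from the detected subject. The bounding-box format and the half-screen heuristic are illustrative assumptions, not the patent's method.

```python
def overlay_position(screen_w, screen_h, track_w, track_h, subject_box=None):
    """Return top-left display coordinates for the virtual prompt track:
    screen centre by default, shifted to the horizontal half of the
    screen that the detected subject does not occupy."""
    x = (screen_w - track_w) / 2
    y = (screen_h - track_h) / 2
    if subject_box is not None:
        sx, sy, sw, sh = subject_box  # (left, top, width, height); only the
        subject_centre = sx + sw / 2  # horizontal extent is used here
        x = screen_w * 0.75 - track_w / 2 if subject_centre < screen_w / 2 \
            else screen_w * 0.25 - track_w / 2
    return int(x), int(y)

# Subject on the left third of a 1280x720 frame -> track drawn on the right.
print(overlay_position(1280, 720, 300, 300, subject_box=(100, 200, 250, 400)))  # (810, 210)
```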
The following is explained with reference to fig. 4. Assume that a first object logs in to a first client (the anchor client) with the account "Xiaohei" for a real-time live broadcast (assume the current live-broadcast time is 11:10 am), and that the second objects following the first object include "Xiaobai" and "Xiaohong". That is, Xiaobai and Xiaohong each watch the content Xiaohei is live-broadcasting through their logged-in second clients.
Further, after Xiaohei triggers the object interaction request, the target graphic identifier of the graphic to be interacted with is determined to be "love heart". The camera in the terminal device where the first client is located then captures a scene picture of Xiaohei's current environment, the superimposed display position of the virtual prompt track corresponding to the "love heart" in the scene picture is determined, and the interface shown in fig. 4 is displayed in the first client: the scene picture of Xiaohei's current environment is displayed, with the virtual prompt track corresponding to the "love heart" (shown as a dashed track in the figure) displayed at its center.
According to this embodiment, after the camera in the terminal device where the first client is located is invoked to capture a scene picture of the first object's current environment and the captured scene picture is displayed, the display coordinates of the superimposed display position of the virtual prompt track on the scene picture can be determined, so that the virtual prompt track is displayed accurately at those coordinates and does not occlude other important content in the scene picture.
As an optional scheme, before sending the interaction prompt information to the at least one second client, the method further includes:
S1, invoking a camera in the terminal device where the first client is located to capture an image sequence corresponding to the first object;
and S2, detecting the gesture action of the first object in the image sequence.
It should be noted that, in this embodiment, the camera invoked to detect the first object may be, but is not limited to, a depth camera, or two or more cameras. With a depth camera, depth is measured directly, from which skeleton information of the first object's hand is abstracted and the keypoint positions of the hand determined; the image region containing the hand is then separated from the captured image, so that the camera can track how the hand's keypoint positions move, distinguish the left hand from the right, and determine the hand's motion track, making it convenient to recognize the corresponding gesture action. With two or more cameras, images captured at the same instant are compared, and their differences are used to calculate depth information, realizing three-dimensional imaging and thereby recognizing the gesture action of the first object.
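For the two-camera variant, depth follows from the disparity between the two simultaneously captured views via the standard stereo relation Z = f * B / d. The sketch below only illustrates that relation; the camera parameters are made up for the example.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard pinhole-stereo relation: depth = focal length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("point must be visible in both views with positive disparity")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 800 px focal length, 6 cm baseline, 24 px disparity.
print(depth_from_disparity(800, 0.06, 24))  # 2.0 metres
```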
In addition, in this embodiment, to recognize the gesture action of the first object, image recognition may be performed on an image sequence captured by the camera, where the image sequence may contain one image or multiple images. For example, as shown in fig. 5, a single static gesture (such as the "like", "watch", "win", and "heart" gestures shown) can be recognized directly from the keypoint positions, whereas a more complicated gesture action (such as the "love heart" shown in fig. 4) is recognized from how the keypoint positions change across multiple images captured in time sequence. The above is merely an example and is not limited in this embodiment.
Optionally, in this embodiment, after detecting the gesture motion of the first object in the image sequence, the method further includes:
S21, determining, for each image in the image sequence, the keypoint positions corresponding to the hand of the first object;
S22, tracking the keypoint positions through the image sequence to determine the action track of the gesture action;
and S23, determining that the action track of the gesture action matches the virtual prompt track when the track similarity between the action track and the virtual prompt track is greater than a target threshold.
It should be noted that, in this embodiment, while the first object performs the gesture action, the action track of the gesture action is displayed correspondingly in the first client, and the two tracks are determined to match when the track similarity between the action track and the virtual prompt track exceeds the target threshold. The gesture action of the first object may be completed along the virtual prompt track or at another position in the actual environment: the method provided in this embodiment compares the similarity of the two tracks and does not require their display positions to coincide. Further, when the display positions do not coincide, the completion progress may still be presented on the virtual prompt track, in ways that include, but are not limited to, directly displaying the covered portion in the virtual prompt track or displaying a completion percentage. The above is merely an example and is not limited in this embodiment.
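The patent leaves "track similarity" and the target threshold open. The sketch below is one plausible realization: it normalizes both tracks for position and scale (so the gesture need not be drawn on top of the prompt track, as noted above) and scores the mean point-wise distance after resampling. NumPy and the 0.8 threshold are assumptions.

```python
import numpy as np

def resample(track, n=64):
    """Resample a polyline of (x, y) points to n evenly spaced points."""
    track = np.asarray(track, dtype=float)
    seg = np.linalg.norm(np.diff(track, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, dist[-1], n)
    return np.column_stack([np.interp(targets, dist, track[:, i]) for i in (0, 1)])

def normalize(track):
    """Centre on the centroid and scale to unit size, so the comparison
    ignores where on the screen (and how large) the gesture was drawn."""
    track = track - track.mean(axis=0)
    scale = float(np.abs(track).max()) or 1.0
    return track / scale

def track_similarity(action_track, prompt_track, n=64):
    """1.0 for identical shapes, falling toward 0.0 as they diverge."""
    a = normalize(resample(action_track, n))
    b = normalize(resample(prompt_track, n))
    return 1.0 / (1.0 + float(np.mean(np.linalg.norm(a - b, axis=1))))

TARGET_THRESHOLD = 0.8  # assumed value; the patent does not fix a threshold

theta = np.linspace(0.0, 2.0 * np.pi, 100)
prompt = np.column_stack([np.cos(theta), np.sin(theta)])   # prompt track: a circle
drawn = 1.3 * prompt + [50.0, 80.0]                        # same shape, shifted and scaled
print(track_similarity(drawn, prompt) > TARGET_THRESHOLD)  # True
```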
The description will be made with reference to fig. 6. After the keypoint positions corresponding to the hand of the first object "Xiaohei" are determined in one image, changes in those keypoint positions can be tracked across the subsequent images to determine the action track of the gesture action. As shown in fig. 6, the dashed line is the virtual prompt track corresponding to the "love heart" graphic to be interacted with, and the solid line is the action track of the gesture action completed so far by the first object; that is, the current progress is half of the "love heart".
Further, when the track similarity between the action track and the virtual prompt track is detected to be greater than the target threshold, i.e., when the solid line is detected to cover the dashed line completely, as shown in fig. 7, it is determined that the action track of the gesture action matches the virtual prompt track and that the first object "Xiaohei" has completed the entire "love heart" gesture action.
According to this embodiment, after the keypoint positions corresponding to the hand of the first object are determined in each image of the image sequence, the action track of the gesture action is determined by tracking those keypoint positions through the sequence, so that the gesture action is recognized accurately. Further, comparing the two tracks by their track similarity determines whether the current virtual prompt track has been completed, so that the interaction prompt information sent to the second clients is triggered automatically once completion is detected, ensuring the objectivity and fairness of the object interaction process.
As an optional scheme, sending the interaction prompt message to the at least one second client includes:
S1, sending a set of images containing the action track of the gesture action to at least one second client, so that a second object inputs, in the second client, a graphic identifier matching the action track of the gesture action, where the virtual prompt track is omitted from every image in the image set, and the second object and the first object are associated objects.
It should be noted that, in the scene of a live-streaming application, the association between the second object and the first object may be, but is not limited to, a follow relationship: if the first object is an anchor, the second object is a viewer following that anchor, and the two are associated objects. The above is merely an example and is not limited in this embodiment.
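The patent only specifies that the interaction prompt information carries an image set showing the action track (with the prompt track omitted) plus related prompt content; one possible wire format is sketched below, with all field names being assumptions for illustration.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class InteractionPrompt:
    session_id: str    # hypothetical field identifying the live session
    frames: List[str]  # references to images showing the action track only;
                       # the virtual prompt track is omitted from every frame
    prompt_text: str   # related prompt information for the second object

prompt = InteractionPrompt(
    session_id="live-0042",
    frames=["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"],
    prompt_text="Guess which shape the anchor just drew!",
)
print(json.dumps(asdict(prompt)))
```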
Specifically, referring to the example shown in fig. 8, the first client sends a set of images containing the action track of the first object's gesture action to the second client for display. Assuming the interface shown in fig. 8 is the live interface presented by the second client logged in by the second object "Xiaobai", the action track of the gesture action performed by Xiaohei, such as the arrowed track shown in fig. 8, will be presented in that interface, but the virtual prompt track shown in the first client will not be displayed.
According to this embodiment, after the first client sends the interaction prompt information to the second client, the second client presents the set of images containing the action track of the first object's gesture action without presenting the virtual prompt track, so that the second object using the second client guesses the corresponding graphic identifier from the action track presented in the image set, achieving bidirectional interaction with the first object using the first client.
As an optional scheme, before the interaction result returned by the at least one second client is displayed in the first client, the method includes:
S1, the server obtains the graphic identifiers returned by at least one second client;
S2, the server compares each graphic identifier returned by the at least one second client with the target graphic identifier in turn;
S3, according to the comparison results, the server obtains the target accounts logged in to the second clients whose returned graphic identifiers match the target graphic identifier;
S4, the server sorts the target accounts by return time to obtain an account sequence;
and S5, the server sends the account sequence to the first client.
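A compact sketch of steps S1-S5 on the server, under the assumption that each answer arrives as an (account, graphic identifier, return time) tuple; the sample data mirrors the fig. 10 example, and the third account is invented for illustration.

```python
def build_account_sequence(answers, target_graphic_id):
    """S2-S5: keep only answers whose graphic identifier matches the target,
    sort the matching accounts by return time, and return the account sequence."""
    matched = [(account, t) for account, gid, t in answers if gid == target_graphic_id]
    matched.sort(key=lambda item: item[1])          # earlier return time ranks higher
    return [account for account, _ in matched]

# S1: answers collected from the second clients (illustrative data).
answers = [("Xiaohong", "01", 0.9),   # correct, fastest -> first on the list
           ("Xiaobai", "01", 1.1),    # correct -> second
           ("Xiaolan", "02", 0.7)]    # wrong graphic identifier -> filtered out
print(build_account_sequence(answers, "01"))  # ['Xiaohong', 'Xiaobai']
```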
Optionally, in this embodiment, displaying, in the first client, the interaction result returned by the at least one second client includes at least one of:
1) directly displaying an account number sequence in a playing window of a first client;
2) creating a popup window in a first client to display an account sequence;
3) a sub-page is created in the first client to display the sequence of accounts.
This is described with reference to the examples shown in figs. 9-10. Suppose the interface shown in fig. 9 is the live interface presented by the second client logged in by the second object "Xiaobai"; the action track of the gesture action performed by the first object "Xiaohei" will be presented in that interface. Further, as shown in fig. 9, the second object "Xiaobai" inputs through a dialog box the graphic identifier "love heart" corresponding to the action track being viewed.
Then, in this embodiment, the server may obtain the graphic identifier returned by each second client, compare each returned graphic identifier with the target graphic identifier in turn, and determine the target accounts logged in to the successfully matched second clients. Assuming the target accounts include "Xiaobai" and "Xiaohong", the accounts are then sorted by return time to obtain an account sequence, which is sent to the first client for display. As shown in fig. 10, Xiaohong returned the graphic identifier "love heart" corresponding to the correct answer within 0.9 seconds and ranks first on the list; Xiaobai returned the graphic identifier "love heart" corresponding to the correct answer within 1.1 seconds and ranks second.
It should be noted that the process of obtaining and comparing the graphic identifiers, determining the target accounts, and obtaining the account sequence may also be, but is not limited to being, completed in the first client; this embodiment is not limited in this regard.
In addition, in this embodiment, the account sequence may also be sent to the second clients and displayed there synchronously; the display manner in the second client may follow that of the first client and is not repeated here.
According to this embodiment, the graphic identifier returned by each second client is compared with the target graphic identifier, the comparison results are compiled into a statistical result, and an interaction result for the object interaction task triggered by the current first object is generated and returned to the first client for intuitive display, achieving the purpose of bidirectional interaction.
As an optional scheme, after the account sequence is displayed in the first client, the method further includes:
S1, in response to an operation performed on the account sequence, transferring a target resource to a target account in the account sequence.
It should be noted that, in this embodiment, after the first client acquires the account sequence, it may also, but is not limited to, reward a target account in the account sequence, e.g., by transferring a certain target resource. The target resource may include, but is not limited to, virtual resources in a virtual scene, such as virtual currency or virtual gifts. This embodiment places no limit on the content or manner of the reward.
As an optional scheme, obtaining an object interaction request triggered by a first object in a first client includes:
S1, displaying a candidate graphic set in the first client;
S2, acquiring the graphic corresponding to the target graphic identifier selected from the candidate graphic set as the graphic to be interacted with;
and S3, generating the object interaction request with the target graphic identifier.
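The sketch below walks through S1-S3 with an assumed graphic database and request shape; the identifiers follow the "01" = love heart convention used in the fig. 3 example, and everything else is illustrative.

```python
import random

# Assumed graphic database; "01" = love heart follows the fig. 3 example.
GRAPHIC_DB = {"01": "love heart", "02": "five-pointed star",
              "03": "small fish", "04": "house"}

def make_object_interaction_request(target_graphic_id: str) -> dict:
    """S3: wrap the selected target graphic identifier into the request."""
    return {"type": "object_interaction", "target_graphic_id": target_graphic_id}

candidate_set = random.sample(sorted(GRAPHIC_DB), k=3)  # S1: random candidate graphic set
selected = candidate_set[0]                             # S2: e.g. the anchor taps the first graphic
print(make_object_interaction_request(selected))
```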
This is described with reference to figs. 11-12. As shown in fig. 11, in the playing interface of the first client logged in by the first object "Xiaohei", a function-key floating layer is triggered and displayed, in which the key icon of the object interaction function is shown, such as the icon in the dashed box in fig. 11.
Further, in response to the operation performed on the key icon, a group of candidate graphics is randomly determined from the graphic database as a candidate graphic set, and the candidate graphic set is displayed in the first client. As shown in fig. 12, the candidate graphic set includes a peach heart, a five-pointed star, a small fish, a house, and so on. A graphic to be interacted with (e.g., the "love heart") is then selected from the candidate graphic set, and the object interaction request is generated with that graphic's target graphic identifier.
According to this embodiment, candidate graphics randomly generated from the graphic database are displayed in the first client, making it convenient to select the graphic to be interacted with and to trigger the object interaction task between the first client and the second client quickly.
According to another aspect of the embodiments of the present invention, an object interaction method is also provided. As shown in fig. 13, the method includes:
S1302, acquiring interaction prompt information sent by a first client, where the interaction prompt information includes a set of images containing the action track of a gesture action performed by a first object using the first client;
S1304, displaying the set of images in the second client;
S1306, inputting, in the second client, a graphic identifier matching the action track of the gesture action of the first object;
S1308, sending the graphic identifier to the server.
For the object interaction process implemented in the second client provided by this embodiment, reference may be made to the foregoing embodiments, which are not repeated here.
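Steps S1302-S1308 can be condensed into a sketch like the following, where the display, input, and network hooks are placeholders rather than any real client API:

```python
# Receive the interaction prompt, display the image set, read the second
# object's guess, and forward the graphic identifier to the server.
def handle_interaction_prompt(prompt, display, read_input, send_to_server):
    images = prompt["image_set"]      # frames containing the action track
    for frame in images:              # S1304: display the set of images
        display(frame)
    graphic_id = read_input()         # S1306: second object inputs a guess
    send_to_server({"graphic_id": graphic_id})  # S1308

# Example wiring with trivial stand-ins for the UI and network layers:
handle_interaction_prompt(
    {"image_set": ["frame-1", "frame-2"]},
    display=print,
    read_input=lambda: "love heart",
    send_to_server=print,
)
```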
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combinations of acts, but those skilled in the art will recognize that the present invention is not limited by the order of the acts described, since some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also recognize that the embodiments described in the specification are all preferred embodiments, and that the acts and modules involved are not necessarily required by the present invention.
According to another aspect of the embodiment of the present invention, an object interaction apparatus for implementing the object interaction method is also provided. As shown in fig. 14, the apparatus includes:
1) a first obtaining unit 1402, configured to obtain an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graph to be interacted;
2) a first display unit 1404, configured to respond to the object interaction request, display a scene picture of an environment where the first object is located at the first client, and superimpose and display a virtual prompt track matching the target graphic identifier in the scene picture;
3) a first sending unit 1406, configured to send interaction prompt information to at least one second client when detecting that the action track corresponding to the gesture action of the first object matches the virtual prompt track;
4) a second display unit 1408, configured to display, in the first client, the interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifier returned by the second client.
The object interaction device provided in this embodiment may be applied to, but is not limited to, the first client, and for a specific example, reference may be made to the foregoing embodiment, which is not described herein again.
According to another aspect of the embodiment of the present invention, an object interaction apparatus for implementing the object interaction method is also provided. As shown in fig. 15, the apparatus includes:
1) an obtaining unit 1502, configured to obtain interaction prompt information sent by a first client, where the interaction prompt information includes a set of images containing the action track of a gesture action performed by a first object using the first client;
2) a display unit 1504, configured to display the set of images in the second client;
3) an input unit 1506, configured to input, in the second client, a graphic identifier matching the action track of the gesture action of the first object;
4) a sending unit 1508, configured to send the graphic identifier to the server.
The object interaction device provided in this embodiment may be applied to, but is not limited to, a second client, and for a specific example, reference may be made to the foregoing embodiment, which is not described herein again.
According to another aspect of the embodiments of the present invention, an object interaction system for implementing the object interaction method is also provided. The system includes:
1) the first client, configured to acquire an object interaction request triggered by a first object in the first client, where the object interaction request carries a target graphic identifier of a graph to be interacted; the first client is further configured to respond to the object interaction request by displaying, on the first client, a scene picture of the environment where the first object is located and superimposing, in the scene picture, a virtual prompt track matching the target graphic identifier; the first client is further configured to send interaction prompt information to at least one second client when detecting that the action track corresponding to the gesture action of the first object matches the virtual prompt track;
2) the second client, configured to acquire the interaction prompt information and, according to the interaction prompt information, acquire a graphic identifier that is input by a second object and matches the action track of the gesture action; the second client is further configured to send the matching graphic identifier to the server;
3) the server, configured to acquire the graphic identifiers returned by the second clients, count them to obtain a statistical result, and send the statistical result to the first client;
4) the first client is further configured to display the interaction result returned by the at least one second client, where the interaction result includes the statistical result.
The internal structure of the object interaction system provided in this embodiment may be, but is not limited to, the structure shown in fig. 1; for examples of the specific interaction process, reference may be made to the foregoing embodiments, which are not repeated here.
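The match test between the action track and the virtual prompt track that starts this system flow is only constrained by the embodiments to exceed a target threshold of track similarity; the sketch below uses resampling plus mean point distance as one plausible similarity measure. The metric, point format, and threshold value are assumptions, not the patent's prescribed algorithm.

```python
# Resample both tracks to the same number of points so tracks of different
# lengths can be compared, then turn the mean point distance into a
# similarity score in (0, 1].
import math

def resample(track, n=32):
    """Pick n evenly spaced points from a track of (x, y) tuples."""
    step = (len(track) - 1) / (n - 1)
    return [track[round(i * step)] for i in range(n)]

def track_similarity(action_track, prompt_track):
    a, b = resample(action_track), resample(prompt_track)
    mean_dist = sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return 1.0 / (1.0 + mean_dist)   # higher means more similar

def tracks_match(action_track, prompt_track, target_threshold=0.8):
    return track_similarity(action_track, prompt_track) > target_threshold

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tracks_match(square, square))  # True: identical tracks
```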
According to yet another aspect of the embodiments of the present invention, an electronic device for implementing the above object interaction method is also provided. As shown in fig. 16, the electronic device includes a memory 1602 and a processor 1604; the memory 1602 stores a computer program, and the processor 1604 is configured to perform the steps of any one of the above method embodiments by running the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graph to be interacted;
S2, in response to the object interaction request, displaying, at the first client, a scene picture of the environment where the first object is located, and superimposing, in the scene picture, a virtual prompt track matching the target graphic identifier;
S3, sending interaction prompt information to at least one second client when detecting that the action track corresponding to the gesture action of the first object matches the virtual prompt track;
S4, displaying, in the first client, the interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifier returned by the second client.
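Steps S1-S4 above can be read as the following control flow; every collaborator is a hypothetical placeholder standing in for the camera, AR overlay, gesture detector, and network layers described in the embodiments:

```python
# A compact sketch of steps S1-S4 on the first client. None of the
# parameter names correspond to real APIs from this patent.
def run_first_client(get_request, show_scene_with_track, capture_action_track,
                     tracks_match, send_prompt_to_second_clients, show_result):
    request = get_request()                                             # S1
    prompt_track = show_scene_with_track(request["target_graphic_id"])  # S2
    action_track = capture_action_track()        # key points of the gesture
    if tracks_match(action_track, prompt_track):  # S3: tracks match
        send_prompt_to_second_clients(action_track)
    show_result()                                 # S4: statistical result
```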
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 16 is only illustrative, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, or a mobile Internet device (MID, PAD). Fig. 16 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces) than shown in fig. 16, or have a configuration different from that shown in fig. 16.
The memory 1602 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the object interaction method and apparatus in the embodiments of the present invention; the processor 1604 executes various functional applications and data processing, that is, implements the above object interaction method, by running the software programs and modules stored in the memory 1602. The memory 1602 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1602 may further include memory located remotely from the processor 1604, which may be connected to the terminal over a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1602 may specifically, but not exclusively, store information such as the graph to be interacted, the graphic identifiers and their correspondence, and the collected scene pictures. As an example, as shown in fig. 16, the memory 1602 may include, but is not limited to, the first obtaining unit 1402, the first display unit 1404, the first sending unit 1406, and the second display unit 1408 of the object interaction apparatus, and may further include other module units of the object interaction apparatus, which are not described here again.
Optionally, the transmission device 1606 is configured to receive or send data via a network. Examples of the network may include wired and wireless networks. In one example, the transmission device 1606 includes a network adapter (NIC) that can be connected to other network devices and a router via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 1606 is a radio frequency (RF) module configured to communicate with the Internet wirelessly.
In addition, the electronic device further includes: a display 1608 for displaying a virtual prompt trajectory, an interaction result, and the like; and a connection bus 1610 for connecting respective module components in the above-described electronic apparatus.
According to yet another aspect of the embodiments of the present invention, an electronic device for implementing the above object interaction method is also provided. As shown in fig. 17, the electronic device includes a memory 1702 and a processor 1704; the memory 1702 stores a computer program, and the processor 1704 is configured to perform the steps of any one of the above method embodiments by running the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring interaction prompt information sent by a first client, where the interaction prompt information includes a set of images containing the action track of a gesture action performed by a first object using the first client;
S2, displaying the set of images in the second client;
S3, inputting, in the second client, a graphic identifier matching the action track of the gesture action of the first object;
S4, sending the graphic identifier to the server.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 17 is only illustrative, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, or a mobile Internet device (MID, PAD). Fig. 17 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces) than shown in fig. 17, or have a configuration different from that shown in fig. 17.
The memory 1702 may be used to store software programs and modules, such as program instructions/modules corresponding to the object interaction method and apparatus in the embodiments of the present invention, and the processor 1704 executes various functional applications and data processing by running the software programs and modules stored in the memory 1702, that is, implements the object interaction method described above. The memory 1702 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1702 may further include memory located remotely from the processor 1704, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1702 may be used for storing information such as graphic identifiers, but is not limited thereto. As an example, as shown in fig. 17, the memory 1702 may include, but is not limited to, the obtaining unit 1502, the display unit 1504, the input unit 1506, and the sending unit 1508 of the object interaction apparatus. In addition, other module units in the object interaction apparatus may also be included, but are not limited to, and are not described in this example again.
Optionally, the transmission device 1706 is configured to receive or send data via a network. Examples of the network may include wired and wireless networks. In one example, the transmission device 1706 includes a network adapter (NIC) that can be connected to other network devices and a router via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 1706 is a radio frequency (RF) module configured to communicate with the Internet wirelessly.
In addition, the electronic device further includes: a display 1708 for displaying interactive prompt information and the like; and a connection bus 1710 for connecting the respective module parts in the above-described electronic apparatus.
According to a further aspect of an embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, acquiring an object interaction request triggered by a first object in a first client, where the object interaction request carries a target graphic identifier of a graph to be interacted;
S2, in response to the object interaction request, displaying, at the first client, a scene picture of the environment where the first object is located, and superimposing, in the scene picture, a virtual prompt track matching the target graphic identifier;
S3, sending interaction prompt information to at least one second client when detecting that the action track corresponding to the gesture action of the first object matches the virtual prompt track;
S4, displaying, in the first client, the interaction result returned by the at least one second client, where the interaction result includes a statistical result of the graphic identifier returned by the second client.
Optionally, in this embodiment, the computer-readable storage medium may be further configured to store a computer program for executing the following steps:
S1, acquiring interaction prompt information sent by a first client, where the interaction prompt information includes a set of images containing the action track of a gesture action performed by a first object using the first client;
S2, displaying the set of images in the second client;
S3, inputting, in the second client, a graphic identifier matching the action track of the gesture action of the first object;
S4, sending the graphic identifier to the server.
Alternatively, in this embodiment, those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with a terminal device, and the program may be stored in a computer-readable storage medium, which may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing are only preferred embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (15)

1. An object interaction method, comprising:
acquiring an object interaction request triggered by a first object in a first client, wherein the object interaction request carries a target graphic identifier of a graph to be interacted;
responding to the object interaction request, displaying, at the first client, a scene picture of the environment where the first object is located, and superimposing, in the scene picture, a virtual prompt track matching the target graphic identifier;
sending interaction prompt information to at least one second client under the condition that an action track corresponding to the gesture action of the first object is detected to match the virtual prompt track;
and displaying, in the first client, an interaction result returned by at least one second client, wherein the interaction result comprises a statistical result of the graphic identifier returned by the second client.
2. The method of claim 1, wherein the responding to the object interaction request, displaying, at the first client, a scene picture of the environment where the first object is located, and superimposing, in the scene picture, a virtual prompt track matching the target graphic identifier comprises:
responding to the object interaction request, and calling a camera in terminal equipment where the first client is located to acquire the scene picture of the current environment where the first object is located;
displaying the collected scene picture in the first client;
determining the superposition display position of the virtual prompt track on the scene picture in the first client;
and displaying the virtual prompt track on the superposition display position.
3. The method of claim 1, further comprising, before the sending interaction prompt information to the at least one second client:
calling a camera in the terminal equipment where the first client is located to acquire an image sequence corresponding to the first object;
and detecting the gesture action of the first object in the image sequence.
4. The method of claim 3, further comprising, after detecting the gesture action of the first object in the image sequence:
determining the key point positions corresponding to the hand of the first object within each image in the image sequence;
tracking the position of each key point in the image sequence to determine the action track of the gesture action;
and determining that the action track of the gesture action matches the virtual prompt track under the condition that the track similarity between the action track and the virtual prompt track is greater than a target threshold.
5. The method of claim 1, wherein the sending interaction prompt information to the at least one second client comprises:
sending a set of images containing the action track of the gesture action to the at least one second client, so that a second object inputs, in the second client, a graphic identifier matching the action track of the gesture action, wherein the virtual prompt track is hidden by default in each image of the set of images, and the second object is an object associated with the first object.
6. The method of claim 5, wherein, before the displaying, in the first client, the interaction result returned by at least one second client, the method comprises:
the server acquires the graphic identifier returned by the at least one second client;
the server sequentially compares the graphic identifier returned by the at least one second client with the target graphic identifier;
the server acquires, according to the comparison result, the target account logged in by the second client whose graphic identifier matches the target graphic identifier;
the server sorts the target accounts according to the return time to obtain an account sequence;
and the server sends the account sequence to the first client.
7. The method of claim 6, wherein the displaying, in the first client, the interaction result returned by at least one second client comprises at least one of:
displaying the account sequence directly in a playing window of the first client;
creating a popup in the first client to display the account sequence;
creating a sub-page in the first client to display the account sequence.
8. The method of claim 6, further comprising, after displaying the account sequence in the first client:
transferring, in response to an operation performed on the account sequence, a target resource to a target account in the account sequence.
9. The method according to any one of claims 1 to 8, wherein the obtaining of the object interaction request triggered by the first object in the first client comprises:
displaying a candidate graph set in the first client;
acquiring the graph corresponding to the target graphic identifier selected from the candidate graph set as the graph to be interacted;
and generating the object interaction request by using the target graphic identifier.
10. An object interaction method, comprising:
acquiring interaction prompt information sent by a first client, wherein the interaction prompt information comprises: a set of images containing the action track of a gesture action performed by a first object using the first client;
displaying the set of images in a second client;
inputting, in the second client, a graphic identifier matching the action track of the gesture action of the first object;
and sending the graphic identifier to a server.
11. An object interaction apparatus, comprising:
a first obtaining unit, configured to acquire an object interaction request triggered by a first object in a first client, wherein the object interaction request carries a target graphic identifier of a graph to be interacted;
a first display unit, configured to respond to the object interaction request, display, at the first client, a scene picture of the environment where the first object is located, and superimpose, in the scene picture, a virtual prompt track matching the target graphic identifier;
a first sending unit, configured to send interaction prompt information to at least one second client when detecting that the action track corresponding to the gesture action of the first object matches the virtual prompt track;
and a second display unit, configured to display, in the first client, an interaction result returned by at least one second client, wherein the interaction result comprises a statistical result of the graphic identifier returned by the second client.
12. An object interaction apparatus, comprising:
an obtaining unit, configured to obtain interaction prompt information sent by a first client, wherein the interaction prompt information comprises: a set of images containing the action track of a gesture action performed by a first object using the first client;
a display unit, configured to display the set of images in a second client;
an input unit, configured to input, in the second client, a graphic identifier matching the action track of the gesture action of the first object;
and a sending unit, configured to send the graphic identifier to a server.
13. An object interaction system, comprising:
the first client, configured to acquire an object interaction request triggered by a first object in the first client, wherein the object interaction request carries a target graphic identifier of a graph to be interacted; the first client is further configured to respond to the object interaction request, display, on the first client, a scene picture of the environment where the first object is located, and superimpose, in the scene picture, a virtual prompt track matching the target graphic identifier; the first client is further configured to send interaction prompt information to at least one second client when detecting that the action track corresponding to the gesture action of the first object matches the virtual prompt track;
the second client, configured to acquire the interaction prompt information and, according to the interaction prompt information, acquire a graphic identifier that is input by a second object and matches the action track of the gesture action; the second client is further configured to send the matching graphic identifier to the server;
the server, configured to acquire the graphic identifier returned by the second client, perform statistics to obtain a statistical result, and send the statistical result to the first client;
and the first client is further configured to display an interaction result returned by at least one second client, wherein the interaction result comprises the statistical result.
14. A computer-readable storage medium comprising a stored program, wherein the program, when executed, performs the method of any one of claims 1 to 9 or the method of claim 10.
15. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute, by means of the computer program, the method of any one of claims 1 to 9 or the method of claim 10.