CN113518264A - Interaction method, device, terminal and storage medium - Google Patents


Info

Publication number: CN113518264A
Application number: CN202011184274.2A
Authority: CN (China)
Prior art keywords: target, option, interaction, image, video
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 唐艾妮
Current Assignee / Original Assignee: Tencent Technology (Shenzhen) Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202011184274.2A
Publication of CN113518264A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/482: End-user interface for program selection

Abstract

The embodiment of the invention discloses an interaction method, an interaction device, a terminal and a storage medium. The method comprises the following steps: displaying, in a terminal, a playing interface of a target video of a first application program; when the playing of the target video meets an interaction condition, displaying on the playing interface at least one interaction option for interacting with a target object in the target video; when a target interaction option is selected from the at least one interaction option, outputting interaction prompt information corresponding to the target interaction option; and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event. By adopting the embodiment of the invention, interaction with the target object in the video is realized while the online video is watched.

Description

Interaction method, device, terminal and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an interaction method, an interaction device, a terminal, and a storage medium.
Background
For families, visiting the zoo is an important way to strengthen the bond between parents and children, and children gain more enjoyment from playing through interaction with the animals. However, under the influence of the COVID-19 epidemic, people have reduced offline gatherings, and zoos have begun to attract parent-child users through online live streaming. The current form of visiting a zoo online, however, lacks interaction between users and animals, and lacks both the fun of an interactive experience and the lasting keepsakes of a visit. Therefore, how to interact during an online visit has become a hot research issue.
Disclosure of Invention
The embodiment of the invention provides an interaction method, an interaction device, a terminal and a storage medium, which are used for realizing interaction with a target object in a video when an online video is watched.
In one aspect, an embodiment of the present invention provides an interaction method, including:
displaying a playing interface of a target video of a first application program in a terminal;
when the playing of the target video meets an interaction condition, displaying at least one interaction option interacting with a target object in the target video on the playing interface;
selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option;
and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event.
In one aspect, an embodiment of the present invention provides an interaction apparatus, including:
the display unit is used for displaying a playing interface of a target video of the first application program in the terminal;
the display unit is further configured to display at least one interaction option interacting with a target object in the target video on the play interface when the play of the target video meets an interaction condition;
the output unit is used for selecting a target interaction option in the at least one interaction option and outputting interaction prompt information corresponding to the target interaction option;
and the output unit is also used for outputting an interactive response related to the target event when the target event matched with the interactive prompt information exists.
In one aspect, an embodiment of the present invention provides a terminal, including:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the steps of:
displaying a playing interface of a target video of a first application program in a terminal;
when the playing of the target video meets an interaction condition, displaying at least one interaction option interacting with a target object in the target video on the playing interface;
selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option;
and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event.
In one aspect, an embodiment of the present invention provides a computer storage medium, where computer program instructions are stored in the computer storage medium, and when executed by a processor, the computer program instructions are configured to perform the following steps:
displaying a playing interface of a target video of a first application program in a terminal;
when the playing of the target video meets an interaction condition, displaying at least one interaction option interacting with a target object in the target video on the playing interface;
selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option;
and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event.
In one aspect, an embodiment of the present invention provides a computer program product or a computer program, where the computer program product or the computer program includes computer instructions stored in a computer-readable storage medium; a processor of the terminal reads the computer instructions from the computer storage medium, and executes the computer instructions to perform:
displaying a playing interface of a target video of a first application program in a terminal;
when the playing of the target video meets an interaction condition, displaying at least one interaction option interacting with a target object in the target video on the playing interface;
selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option;
and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event.
In the embodiment of the invention, a playing interface of a target video of a first application program is displayed in a terminal. As the target video plays, if the playing of the target video meets an interaction condition, at least one interaction option for interacting with a target object in the target video is displayed on the playing interface; when a target interaction option is selected from the at least one interaction option, interaction prompt information corresponding to the target interaction option is output; and when a target event matched with the interaction prompt information exists, an interaction response related to the target event is output. This realizes interaction between the user and the object in the video, provides interactivity, and makes watching the video more interesting.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a video management system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an interaction method according to an embodiment of the present invention;
fig. 3a is a schematic diagram of a playing interface of a target video according to an embodiment of the present invention;
FIG. 3b is a diagram illustrating an interactive option according to an embodiment of the present invention;
FIG. 3c is a diagram illustrating another interactive option display provided by an embodiment of the present invention;
FIG. 4a is a diagram of a display image capture window according to an embodiment of the present invention;
FIG. 4b is a schematic diagram of a prompt animation for triggering feeding interaction according to an embodiment of the present invention;
FIG. 5a is a schematic diagram of a group photo image according to an embodiment of the present invention;
fig. 5b is a schematic diagram of a friend selection window according to an embodiment of the present invention;
fig. 5c is a schematic diagram of a session window provided by the embodiment of the present invention;
FIG. 6a is a diagram illustrating an output interactive response according to an embodiment of the present invention;
FIG. 6b is a diagram illustrating a display hint animation according to an embodiment of the present invention;
FIG. 6c is a diagram illustrating an interactive option display according to an embodiment of the present invention;
FIG. 7 is a flow chart illustrating another interaction method according to an embodiment of the present invention;
FIG. 8a is a schematic view of another image capture window provided by embodiments of the present invention;
FIG. 8b is a diagram of another playback interface provided by an embodiment of the present invention;
FIG. 8c is a diagram of another playback interface provided by an embodiment of the present invention;
FIG. 9a is a diagram of another playback interface provided in an embodiment of the present invention;
FIG. 9b is a diagram of a group photo album window according to an embodiment of the present invention;
fig. 10a is a network topology diagram of another video management system provided by the embodiment of the present invention;
fig. 10b is a schematic three-axis view of a terminal according to an embodiment of the present invention;
FIG. 10c is a block diagram of a shared video management system according to an embodiment of the present invention;
FIG. 10d is a system flow diagram provided by an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an interactive apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The embodiment of the invention provides an interaction scheme, in which a playing interface of a target video of a first application program is displayed in a terminal; if it is detected during playback that the playing of the target video meets an interaction condition, at least one interaction option for interacting with a target object in the target video is displayed on the playing interface; when a target interaction option in the at least one interaction option is selected, interaction prompt information corresponding to the target interaction option is output; and when a target event matched with the interaction prompt information is detected, an interaction response related to the target event is output. By outputting the interaction options during playback of the target video, interaction between a video viewer and a target object in the video is realized, and the interactivity and the fun of viewing the video are improved.
Based on the above interaction scheme, an embodiment of the present invention provides a video management system, please refer to fig. 1, which is a schematic structural diagram of a video management system according to an embodiment of the present invention. The video management system shown in fig. 1 can implement interaction with a target object in a video when the video is viewed, such as group photo, handshake, and the like.
The video management system shown in fig. 1 includes at least one terminal 101 and a server 102. The terminal 101 may include devices such as a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, a smart vehicle, and a smart television; the server 102 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, big data, and artificial intelligence platforms.
In one embodiment, a first application for viewing online video, such as an online zoo application, may be running in the terminal 101. The server 102 provides support for the running of the first application in the terminal 101. In a specific implementation, a user can view any video in the first application through the terminal 101. While a user watches a video through the terminal 101, if the terminal 101 or the server 102 detects that the playing of the target video meets the interaction condition for interacting with a target object in the video, the terminal 101 displays at least one interaction option. When the user selects any one of the interaction options, the terminal 101 notifies the server 102 of the selected interaction option, and the server 102 may instruct the terminal 101 to display corresponding interaction prompt information according to the selected interaction option, so that the user inputs an interaction operation according to the interaction prompt information.
If the terminal 101 detects that the user's interaction operation matches the interaction prompt information, an interaction response is output. Interaction with the target object in the video is thus realized while the video is watched, which improves the interactivity of the first application program and the fun of watching the video, and can therefore increase attention to the first application program.
Based on the video management system, the embodiment of the invention provides an interaction method. Fig. 2 is a schematic flow chart of an interaction method according to an embodiment of the present invention. The interaction method shown in fig. 2 may be executed by a terminal, and specifically may be executed by a processor of the terminal, and the interaction method shown in fig. 2 may include the following steps:
step S201, displaying a playing interface of a target video of the first application program in the terminal.
In one embodiment, the first application may be any application for viewing videos, such as an online zoo application or other short-video applications. The target video may be any video watched through the first application by a target user using the terminal. The target video may include a target object, and the target object may be any one or more of an animal (such as a panda), a plant, a person, and the like.
In one embodiment, when a target user using the terminal is watching a target video in a first application program, the terminal displays a playing interface of the target video. Referring to fig. 3a, a schematic view of a playing interface provided in the embodiment of the present invention, 301 represents a playing interface of a target video.
In one embodiment, the first application program may include a trigger option for watching videos, and the playing interface of the target video is displayed when this trigger option is selected. In other words, selection of the trigger option indicates that the target user wants to watch a video in the first application program, so the terminal displays the playing interface of the target video in the first application program. For example, the play interface 301 shown in fig. 3a includes a trigger option 302 for watching a video, denoted "explore".
Step S202, when the playing of the target video meets the interaction condition, at least one interaction option interacting with the target object in the target video is displayed on the playing interface.
In one embodiment, the interaction condition may include a play progress condition indicating a target play progress, which may be, for example, one third played, four fifths played, or 30 seconds before the end of playback. When detecting whether the playing of the target video meets the play progress condition, the following steps may be executed: comparing the played progress of the target video with the target play progress indicated by the play progress condition; and if the played progress of the target video is the same as the target play progress indicated by the play progress condition, determining that the playing of the target video meets the interaction condition.
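A minimal sketch of the progress check described above is shown below, assuming a hypothetical player that exposes the current playback position and total duration in seconds; the function and parameter names are illustrative, not identifiers from the patent or any specific player API.

    # Minimal sketch of the play-progress condition check (names are assumptions).
    def meets_progress_condition(position_s: float, duration_s: float,
                                 target_fraction: float = None,
                                 seconds_before_end: float = None) -> bool:
        """Return True when playback reaches the target progress,
        e.g. one third played, four fifths played, or 30 seconds before the end."""
        if target_fraction is not None and position_s >= duration_s * target_fraction:
            return True
        if seconds_before_end is not None and duration_s - position_s <= seconds_before_end:
            return True
        return False

    # Example: show interaction options 30 seconds before the end of a 10-minute video.
    if meets_progress_condition(position_s=571.0, duration_s=600.0, seconds_before_end=30.0):
        print("play progress condition met -> display interaction options")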
Based on this, the embodiment of the invention can display the interaction options when the playing of the target video is about to end, so that the target user can interact with the target object in the target video, which improves interactivity; or, during the playing of the target video, interaction options for the user to interact with the target object are displayed, so that the interaction can relieve the fatigue of the target user from watching the video for a long time and add to the fun of watching the target video.
In another embodiment, the interaction condition may further include a play picture condition. Colloquially, the play picture condition refers to a condition that the picture currently played in the target video needs to meet, and the play picture condition is used to indicate a target picture. In this case, the playing of the target video meeting the interaction condition means that the picture played in the target video at the current moment is the target picture indicated by the play picture condition.
The target picture may be specified in advance, and may be any one of the multiple frames of play pictures included in the target video; for example, the target picture is a play picture that includes the target object, or a play picture that includes the target object in a preset posture. The preset posture may be any one or more of a posture suitable for a group photo, a posture suitable for feeding, or a posture suitable for petting. During the playing of the target video, selecting some play pictures suitable for interaction for the target user to interact with can improve the fun of watching the target video.
As an optional implementation, if the target picture is any one of the multiple play pictures included in the target video, the terminal may detect whether the playing of the target video meets the interaction condition as follows: the terminal adds marks to the target pictures in the target video; and if, during the playing of the target video, the mark is detected in the picture played at the current moment, it is determined that the playing of the target video meets the interaction condition.
As another optional implementation, if the target picture is a play picture that includes the target object, the terminal may detect whether the playing of the target video meets the interaction condition as follows: during the playing of the target video, identifying the object included in each frame of the play picture; and if the object included in the picture played at the current moment is identified as the target object, it is determined that the playing of the target video meets the interaction condition.
As another optional implementation, if the target picture includes the target object and the posture of the target object is a preset posture, the terminal may detect whether the playing of the target video meets the interaction condition as follows: during the playing of the target video, identifying each frame of play picture that includes the target object, and identifying the posture of the target object in the play picture; and if the posture of the target object matches the preset posture, it is determined that the playing of the target video meets the interaction condition.
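The three alternatives above can be summarized in one decision routine. The sketch below is illustrative only: recognize_objects and estimate_pose stand in for whatever recognition models the terminal actually uses and are assumptions, not components named by the patent.

    # Illustrative check of the play-picture condition; the recognizers are placeholders.
    from typing import Callable, Optional, Set

    def meets_picture_condition(frame_index: int,
                                marked_frames: Set[int],
                                frame=None,
                                recognize_objects: Optional[Callable] = None,
                                estimate_pose: Optional[Callable] = None,
                                target_object: str = "panda",
                                preset_pose: str = "sitting") -> bool:
        # Alternative 1: the target pictures were marked in advance.
        if frame_index in marked_frames:
            return True
        # Alternative 2: the current picture contains the target object.
        if recognize_objects is not None and frame is not None:
            objects = recognize_objects(frame)            # e.g. {"panda", "tree"}
            if target_object in objects:
                # Alternative 3: additionally require a preset posture.
                if estimate_pose is None:
                    return True
                return estimate_pose(frame, target_object) == preset_pose
        return False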
In yet another embodiment, the interaction condition may further include an audio data condition. It should be understood that the target video may be composed of multiple consecutive frames of image data together with audio data. Colloquially, the audio data condition refers to a condition that the audio data currently played in the target video needs to satisfy. In this case, the playing of the target video meeting the interaction condition may mean that the audio data of the target video played at the current moment matches the target audio data indicated by the audio data condition.
For example, the target audio data indicated by the audio data condition may refer to audio that includes any vocabulary related to a group photo, such as "xxxxx when a guest wants a group photo with Pan Pan"; or, further, "when a visitor approaches xxx, it will take a posture suitable for a group photo". As another example, the target audio data indicated by the audio data condition may also refer to audio that includes any vocabulary related to a handshake, such as "xxx often makes a handshake gesture when it wants to be fed".
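As a hedged sketch of the audio-data condition, the currently played audio could be transcribed (for example from subtitles or speech recognition, which the patent does not specify) and matched against keywords tied to each interaction; the keyword table and transcript source below are assumptions.

    # Sketch: match the transcript of the currently played audio against
    # keywords associated with each interaction (keywords are illustrative).
    AUDIO_KEYWORDS = {
        "group_photo": ["group photo", "take a photo"],
        "feed":        ["feed", "hungry", "handshake"],
    }

    def matched_interactions(current_transcript: str) -> list:
        text = current_transcript.lower()
        return [name for name, words in AUDIO_KEYWORDS.items()
                if any(w in text for w in words)]

    print(matched_interactions("Pan Pan often makes a handshake gesture when it wants to be fed"))
    # -> ['feed']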
After detecting that the playing of the target video meets the interaction condition, the terminal can display at least one interaction option on the playing interface; each interaction option corresponds to one interaction operation, and each interaction option can also be understood as a scene.
In an embodiment, at least one interaction option to be displayed that is related to the target object may be pre-stored in the terminal, and the at least one interaction option in step S202 may include every option among the at least one interaction option to be displayed; that is, when it is detected that the playing of the target video meets the interaction condition, the terminal displays on the playing interface all pre-stored interaction options related to the target object in the target video. The at least one interaction option to be displayed may include any one or more of the following: an image operation option for operating on the image corresponding to the target object, and an object operation option for operating on the target object.
Optionally, the image operation option for operating on the image corresponding to the target object may include a group photo option, and the object operation options for operating on the target object may differ according to the object category to which the target object belongs. For example, if the object category to which the target object belongs is an animal, the object operation options may include a feeding option, a handshake option, a petting option, and the like; if the object category to which the target object belongs is a plant, the object operation options may include a watering option, a branch-and-leaf pruning option, and the like; if the object category to which the target object belongs is a person, the object operation options may include a hug option, a handshake option, and the like.
In another embodiment, the at least one interaction option displayed in step S202 may also refer to the interaction options, among the at least one interaction option to be displayed preset in the terminal, that are related to the interaction condition. In a specific implementation: if the interaction condition is a play progress condition, the at least one interaction option refers to the interaction options, among the at least one interaction option to be displayed, that match the play progress; if the interaction condition is a play picture condition, the at least one interaction option may be the interaction options that match the play picture, for example an image operation option such as a group photo option for operating on the image corresponding to the target object; if the interaction condition is an audio data condition, the at least one interaction option may be the interaction options that match the audio data condition. For example, if the object operation options include a feeding option and the target audio data indicated by the audio data condition includes audio related to feeding, then the interaction option matching the audio data condition among the at least one interaction option to be displayed is the feeding option.
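The selection of options to display can be sketched as a simple lookup plus filter. The option names and category mapping below are illustrative examples drawn from the description above, not a complete or prescribed list.

    # Sketch of choosing which interaction options to display.
    OPTIONS_BY_CATEGORY = {
        "animal": ["group_photo", "feed", "handshake", "pet"],
        "plant":  ["group_photo", "water", "prune"],
        "person": ["group_photo", "hug", "handshake"],
    }

    def options_to_display(object_category: str, condition_options=None) -> list:
        """Return all pre-stored options for the object category, or only the
        options matched to the interaction condition that fired."""
        candidates = OPTIONS_BY_CATEGORY.get(object_category, ["group_photo"])
        if condition_options is None:
            return candidates
        return [o for o in candidates if o in condition_options]

    print(options_to_display("animal"))              # all options for an animal
    print(options_to_display("animal", ["feed"]))    # -> ['feed'] for a feeding audio cue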
In one embodiment, the at least one interaction option may be displayed directly on the playing interface when it is detected that the playing of the target video satisfies the interaction condition. For example, refer to fig. 3b, a schematic diagram of displaying interaction options provided in an embodiment of the present invention. In fig. 3b, it is assumed that the object category to which the target object belongs is an animal and the name of the target object is Pan Pan; 303 represents the playing interface of the target video, and 31 and 32 represent two interaction options: a group photo option, displayed as "take a group photo with Pan Pan", and a feeding option, displayed as "feed Pan Pan".
In other embodiments, the at least one interaction option may be displayed in an interaction window, where the interaction window is displayed when the terminal detects that the playing of the target video meets the interaction condition. For example, based on fig. 3b, another schematic diagram of displaying interaction options is provided in an embodiment of the present invention, as shown in fig. 3c. As in fig. 3b, 303 in fig. 3c represents the playing interface of the target video, and 31 and 32 represent the two interaction options; in contrast to fig. 3b, fig. 3c includes an interaction window 305, and 31 and 32 are displayed in the interaction window.
Step S203, selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option.
In one embodiment, the target interaction option is any one of the at least one interaction option, and the target interaction option may be selected through any one or more of the following trigger modes: a touch mode and a voice mode. The touch mode means that a target interaction option is selected through any contact manner such as clicking, long pressing, or double clicking; the voice mode refers to outputting a voice that includes the target interaction option; colloquially, this voice is input by the target user through the terminal.
Based on this, when the terminal displays the at least one interaction option, it can simultaneously display prompt information for triggering an interaction option by voice, for example, "try opening the microphone and say your selection". If the at least one interaction option is displayed in the manner of fig. 3b, that is, directly on the playing interface, the prompt information for triggering an interaction option by voice may also be displayed directly on the playing interface; if the at least one interaction option is displayed in the manner of fig. 3c, that is, in the interaction window of the playing interface, the prompt information for triggering an interaction option by voice may also be displayed in the interaction window.
In an embodiment, when the terminal displays the at least one interaction option, it may also display a confirm-selection control at the same time, as shown by 306 in fig. 3b; when both a target interaction option among the at least one interaction option and the confirm-selection control are selected, it is determined that the target interaction option is selected. This prevents the target user from accidentally touching an interaction option and having it treated as selected, improves the accuracy of interaction option selection, and reduces unnecessary power consumption of the terminal.
When the terminal determines that a target interaction option among the at least one interaction option is selected, it outputs interaction prompt information corresponding to the target interaction option. The interaction prompt information may be a message, an animation, a window, an interface, or the like. What the interaction prompt information specifically includes is related to the target interaction option, as described in detail below.
In one embodiment, if the target interaction option includes an image operation option for operating on the image corresponding to the target object, where the image operation option includes a group photo option, the interaction prompt information corresponding to the target interaction option may include an image capture window. The image capture window may include an image preview area and an image capture control. If the image capture control has not been triggered, the picture captured at the current moment is displayed in the image preview area; if the image capture control is triggered, a first image is displayed in the image preview area, where the first image is obtained by photographing the picture captured when the image capture control was triggered.
For example, referring to fig. 4a, which is a schematic diagram of displaying an image capture window according to an embodiment of the present invention, when the interaction option 31 "take a group photo with Pan Pan" in fig. 3b is selected, the image capture window corresponding to this interaction option is displayed, as shown by 400 in fig. 4a. In fig. 4a, 401 represents the image preview area in the image capture window, and 402 represents the image capture control. Assuming that, when the image capture control 402 is triggered, the terminal captures the picture shown by 403 in fig. 4a, the terminal photographs 403 to obtain the first image and displays the first image in the image preview area 401.
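A toy model of the image-capture window behaviour described above is sketched below: before the capture control is triggered the preview area shows the live camera frame, and afterwards it shows the frozen first image. The class and method names are assumptions for illustration.

    # Toy model of the image-capture window state (names are illustrative).
    class CaptureWindow:
        def __init__(self):
            self.first_image = None

        def preview(self, live_frame):
            # Show the frozen first image once captured, otherwise the live camera frame.
            return self.first_image if self.first_image is not None else live_frame

        def on_capture_control(self, live_frame):
            # Triggering the capture control freezes the current frame as the first image.
            self.first_image = live_frame
            return self.first_image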
In other embodiments, if the target object is an animal, the target interaction option includes an object operation option for operating on the target object, and the object operation option includes feeding the target object, then the interaction prompt information corresponding to the target interaction option includes a prompt animation for triggering the feeding interaction. The prompt animation includes the operation mode for performing the feeding operation, such as shaking the terminal back and forth or shaking the terminal left and right.
For example, referring to fig. 4b, a schematic diagram of a prompt animation for triggering the feeding interaction according to an embodiment of the present invention: assuming that the interaction option 32 "feed Pan Pan" in fig. 3b is selected, the terminal displays the prompt animation for triggering the feeding interaction corresponding to this interaction option, as shown by 41 in fig. 4b. The prompt animation includes the operation mode for performing the feeding operation and may read "shake the terminal back and forth to feed Pan Pan".
And step S204, outputting an interactive response related to the target event when the target event matched with the interactive prompt information exists.
In one embodiment, the target event matched with the interaction prompt information may refer to an operation input by the target user on the terminal, or may refer to a posture parameter of the terminal itself. Specifically, the target event matched with the interaction prompt information differs depending on the interaction prompt information, as described in detail below.
In one embodiment, if the interaction prompt information includes an image capture window, the target event matched with the interaction prompt information means that the image capture control is triggered. As described above, the image capture control being triggered indicates that the terminal has captured the first image; in this case the interaction response related to the target event includes a group photo image, where the group photo image includes the captured first image and the image of the target object. The image of the target object refers to the play picture of the target video at the moment any interaction option is triggered, or to a preset image that includes the target object. Based on this, outputting the interaction response related to the target event includes: displaying the group photo image.
For example, assuming that the image capture control 402 is triggered in the image capture window shown in fig. 4a, it is determined that a target event matching the interaction prompt information is detected; the terminal then synthesizes a group photo image from the first image captured when the image capture control 402 was triggered and the image of the target object, and displays the group photo image, as shown by 501 in fig. 5a.
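The patent does not prescribe how the group photo is composited; as one hedged illustration, the captured first image of the user could be pasted as an inset onto the play picture (or a preset image) of the target object using Pillow. The file paths, inset size and placement below are assumptions.

    # Hedged sketch of compositing the group photo with Pillow (paths/sizes assumed).
    from PIL import Image

    def compose_group_photo(object_frame_path: str, user_photo_path: str,
                            corner=(20, 20), user_size=(320, 480)) -> Image.Image:
        background = Image.open(object_frame_path).convert("RGBA")   # picture of the target object
        user = Image.open(user_photo_path).convert("RGBA").resize(user_size)
        background.paste(user, corner, user)                         # user image as an inset
        return background

    # compose_group_photo("panda_frame.png", "first_image.png").save("group_photo.png")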
Alternatively, the group photo image may be displayed in an interaction response window, such as the interaction response window shown by 500 in fig. 5a. When the interaction response is displayed in the interaction response window, a group photo image sharing area may also be displayed, as shown by 502 in fig. 5a. The group photo image sharing area may include an application identifier of at least one second application, such as any one or more of the name, icon, and the like of the second application. When any application identifier is selected, a sharing-friend selection window may be displayed; the sharing-friend selection window includes user identifiers of multiple friends, where the friend users are contact users of the target user in the selected application program. Selecting any friend user identifier triggers sharing the group photo image with the friend indicated by that identifier through the selected second application program. The friend indicated by the selected identifier can view the group photo image shared by the target user through the second application program; specifically, the friend can view the group photo image in a session window with the target user in the second application program.
For example, the application identifier of the at least one second application includes a QQ application identifier. When the QQ application identifier is selected, the friend selection window 51 shown in fig. 5b is displayed; suppose the friend selection window includes user identifier 1 of friend 1, user identifier 2 of friend 2, and user identifier 3 of friend 3. When the target user selects user identifier 1, the terminal shares the group photo image with friend user 1 through the QQ application program. Friend user 1 opens a session window in the QQ application for a session with the target user. Referring to fig. 5c, which shows the session window displayed in the terminal of friend user 1 for the session between friend user 1 and the target user, the group photo image shared by the target user with friend user 1 is displayed as a session message in the session window, as shown by 52 in fig. 5c.
Optionally, the group photo image sharing area further includes a save button, as shown by 520 in fig. 5a. When the save button 520 is triggered, the terminal saves the group photo image in local storage.
In another embodiment, if the interaction prompt information includes an image capture window, the image capture window may further include a confirm control once the image capture control has been triggered, and the target event matched with the interaction prompt information may mean that the confirm control is triggered. It should be understood that when the image capture control is triggered, the image preview window displays the first image and the image capture control is hidden; at this moment, if the confirm control displayed in the image capture window is triggered, this indicates that the target user confirms using the first image as the image for the group photo. Optionally, once the image capture control has been triggered, the image capture window may further include a retake control. When the retake control is triggered, this indicates that the target user does not want to use the first image as the image for the group photo and wants to retake it; at this moment, the retake control and the confirm control in the image capture window disappear, and the image preview area and the image capture control are restored to the state before the image capture control was triggered.
For example, referring to the image capture window 400 shown in fig. 4a, assume that when 402 in fig. 4a is triggered, the image preview area 401 displays the first image, and the image capture window includes a confirm control 411 and a retake control 422. If the confirm control 411 is triggered, it is determined that a target event related to the interaction prompt information exists; if the retake control 422 is triggered, the image capture window reverts to the state in which the image capture control has not been triggered, as shown by 400.
As an optional implementation, a friend user of the target user in the first application program, or a friend user of the target user in another application program, may be watching the target video synchronously in a different place from the target user. When the playing of the target video meets the interaction condition, the terminal of the target user displays the at least one interaction option. If the target interaction option selected from the at least one interaction option is the group photo option, prompt information about taking a group photo with other users is output, where the prompt information is used to prompt that other users are watching the target video and to ask whether to invite those users to take a group photo with the target object. If the target user inputs a confirmation operation for the prompt information, the terminal of the target user, while capturing the image of the target user, notifies the terminals of the other users to capture images of the other users, and finally synthesizes the image of the target user, the images of the other users, and the corresponding image of the target object to obtain the group photo image.
In other embodiments, if the interaction prompt information includes a prompt animation for triggering the feeding interaction, the target event matched with the interaction prompt information includes that the posture parameter of the terminal, while the feeding operation is performed in the operation mode indicated by the prompt animation, matches a preset posture parameter. The preset posture parameter may include an angle threshold for the angle between the terminal and the horizontal plane. As described above, the prompt animation includes the operation mode for performing the feeding operation, such as shaking the terminal back and forth or shaking the terminal left and right, so performing the feeding operation in the operation mode indicated by the prompt animation means shaking the terminal in the prompted manner, for example shaking it back and forth. If it is detected that the angle between the terminal and the horizontal plane reaches the angle threshold during the shaking of the terminal, it is determined that a target event matched with the interaction prompt information exists.
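A sketch of this posture-parameter check is shown below, assuming raw accelerometer samples are available from the terminal's sensors; the 60-degree threshold and the sample values are illustrative, not values specified by the patent.

    # Sketch of matching the terminal's attitude against the preset posture parameter:
    # estimate the tilt of the device relative to the horizontal plane from one
    # accelerometer sample and compare it with an angle threshold.
    import math

    def device_tilt_deg(ax: float, ay: float, az: float) -> float:
        """Angle between the device and the horizontal plane, estimated from
        one accelerometer sample in m/s^2 (flat on a table -> about 0 degrees)."""
        g = math.sqrt(ax * ax + ay * ay + az * az) or 1e-9
        return math.degrees(math.acos(max(-1.0, min(1.0, abs(az) / g))))

    def feeding_event_detected(samples, angle_threshold_deg: float = 60.0) -> bool:
        """True if, while the user shakes the terminal, the tilt reaches the threshold."""
        return any(device_tilt_deg(*s) >= angle_threshold_deg for s in samples)

    # Example with fabricated samples: the second sample tilts the device past 60 degrees.
    print(feeding_event_detected([(0.0, 0.0, 9.8), (8.5, 0.0, 4.9)]))   # -> True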
In one embodiment, when the target event refers to the posture parameter of the terminal matching the preset posture parameter while the feeding operation is performed in the operation mode indicated by the prompt animation, the interaction response related to the target event may refer to a feeding video in which the target object is fed from the perspective of the target user. In this case, outputting the interaction response related to the target event includes: playing the feeding video in which the target object is fed from the perspective of the target user.
For example, referring to fig. 6a, a schematic diagram of outputting an interaction response according to an embodiment of the present invention: assuming that the interaction option 32 "feed Pan Pan" is selected in fig. 3b, the prompt animation for triggering the feeding interaction is displayed as shown by 601 in fig. 6a. The target user shakes the terminal back and forth according to the prompt animation; if the posture parameter of the terminal meets the preset posture parameter during the shaking, or the shaking reaches a certain amplitude, the prompt animation disappears, and the terminal is triggered to play a feeding video in which the target user feeds Pan Pan from a first-person perspective, as shown by 602 in fig. 6a.
Optionally, while the target user shakes the terminal, if it is detected that the posture parameter of the terminal does not satisfy the preset posture parameter, or the shaking amplitude of the terminal is not large enough, the terminal may output a new prompt animation to prompt the target user to increase the shaking amplitude, until it is detected that the posture parameter of the terminal satisfies the preset posture parameter. For example, referring to fig. 6b, a schematic diagram of displaying a prompt animation according to an embodiment of the present invention: when the prompt animation 601 for triggering the feeding interaction is displayed and the terminal detects that the current shaking amplitude is insufficient, a new prompt animation 603 is output to prompt the target user to increase the shaking amplitude of the terminal.
Similarly, during the playing of the feeding video, if the terminal detects that the playing of the feeding video meets an interaction condition, the terminal can display interaction options on the playing interface of the feeding video, so that the user can decide which video to watch next or which interaction to perform. For example, during the playing of the feeding video in fig. 6a, if it is detected that the playing of the feeding video satisfies the interaction condition, the interaction option 61 "view other small animals" and the interaction option 62 "look at Pan Pan up close" are displayed on the playing interface of the feeding video, as shown in fig. 6c. The terminal then interacts with the target user on the playing interface of the feeding video through the same steps as steps S202 to S204.
In the embodiment of the invention, a playing interface of a target video of a first application program is displayed in a terminal. As the target video plays, if the playing of the target video meets an interaction condition, at least one interaction option for interacting with a target object in the target video is displayed on the playing interface; when a target interaction option is selected from the at least one interaction option, interaction prompt information corresponding to the target interaction option is output; and when a target event matched with the interaction prompt information exists, an interaction response related to the target event is output. This realizes interaction between the user and the object in the video, provides interactivity, and makes watching the video more interesting.
Based on the above interaction method, an embodiment of the present invention provides another interaction method; referring to fig. 7, which is a schematic flow chart of another interaction method provided by an embodiment of the present invention. The interaction method illustrated in fig. 7 may be executed by the terminal, and specifically by a processor of the terminal. The interaction method shown in fig. 7 may include the following steps:
step S701, displaying a playing interface of the target video of the first application program in the terminal.
Step S702, when the playing of the target video meets the interaction condition, displaying at least one interaction option interacting with the target object in the target video on the playing interface.
In an embodiment, some possible implementations included in steps S701 to S702 may refer to specific descriptions in steps S201 to S202 in fig. 2, and are not described herein again.
And S703, selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option.
As described above, the target interaction option may include a group photo option. When the target interaction option is the group photo option, the output interaction prompt information may include an image capture window, and the image capture window may include an image preview area and an image capture control, as shown in fig. 4a.
In other embodiments, the image capture window may further include at least one image special effect option. If an image special effect option is selected and the image capture control has not been triggered, a special-effect picture is displayed in the image preview area, where the special-effect picture is generated based on the picture captured at the current moment and the selected image special effect option; if an image special effect option is selected and the image capture control is triggered, a second image is displayed in the image preview area, where the second image is generated based on the picture captured when the image capture control was triggered and the selected image special effect.
That is, the target user can choose to take the photo with a certain special effect, so that the terminal synthesizes the image taken with the special effect and the image of the target object to generate the group photo image. Each image special effect option corresponds to one special effect. The special effect may include a filter and beauty processing: the filter decorates the captured image with virtual decorations, and the beauty processing applies certain treatments to the captured face image, such as face slimming and skin whitening.
For example, referring to fig. 8a, a schematic diagram of another image capture window provided in an embodiment of the present invention: 801 represents the image capture control, 802 represents a special effect option for a cartoon panda decoration, and 803 represents the image preview area. Assuming 802 is selected, the image preview area displays the image 804 of the target user with the cartoon panda decoration.
It should be understood that combining the special-effect image of the target user with the image of the target object to obtain the group photo image increases the variety of group photo choices and can further improve the fun of the interaction.
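To illustrate where the special effect fits into the pipeline, the hedged sketch below applies a selected effect to the captured photo before it is composited into the group photo. A real implementation would use proper filter and beauty algorithms; here a simple blur-based smoothing and a decoration overlay stand in for them, and all names are assumptions.

    # Sketch of applying a selected image special effect before composing the group photo.
    from PIL import Image, ImageFilter

    def apply_effect(photo: Image.Image, effect: str,
                     decoration: Image.Image = None) -> Image.Image:
        img = photo.convert("RGBA")
        if effect == "smooth_skin":                     # crude stand-in for "beauty" processing
            img = img.filter(ImageFilter.GaussianBlur(radius=1))
        elif effect == "cartoon_panda" and decoration is not None:
            overlay = decoration.convert("RGBA")
            img.paste(overlay, (0, 0), overlay)         # overlay the cartoon decoration
        return img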
And step S704, when a target event matched with the interactive prompt information exists, outputting an interactive response related to the target event.
In an embodiment, some possible implementations included in step S704 may refer to the descriptions in step S204 in fig. 2, and are not described herein again.
Step S705, if the history group photo option in the playing interface is triggered, displaying a history group photo window.
In one embodiment, the playing interface of the target video may include a history group photo option. When the history group photo option is triggered, a history group photo window is displayed, where the history group photo window includes at least one group photo image, each group photo image includes an image of the target object, and each group photo image corresponds to one operation option and the number of times that operation option has been triggered. If the operation option corresponding to any group photo image is triggered, the number of times that operation option has been triggered is increased by one.
As an optional implementation, the operation option corresponding to each group photo image may be a like option; the more times a like option has been triggered, the more users have liked the corresponding group photo image.
For example, referring to fig. 8b, a schematic diagram of another playing interface provided in an embodiment of the present invention: 81 represents the playing interface of the target video, and 82 represents the history group photo option. When the history group photo option 82 is triggered by the target user, a history group photo window is displayed on the playing interface, as shown by 83 in fig. 8b. The window 83 includes six group photo images, each with a like option below it, for example the like option 805 corresponding to the group photo image 84. Each like option corresponds to a number of times it has been triggered; the count corresponding to the like option 805 is indicated by 86, whose value is 209, meaning that 209 users have triggered that like option. If the target user clicks the like option 805, the corresponding count in the history group photo window changes from 209 to 210, as shown by 87 in fig. 8b.
As another optional implementation, the operation option corresponding to each group photo image may also be a comment option. In this case, if the comment option corresponding to any group photo image is selected, the comment information posted by multiple users on that group photo image and an input area for entering comment information may be displayed; after the target user enters comment content in the input area, the number of times the comment option corresponding to that group photo image has been triggered is increased by one.
For example, referring to fig. 8c, a schematic diagram of another playing interface provided in an embodiment of the present invention: as in fig. 8b, 81 represents the playing interface of the target video, and 82 represents the history group photo option. When the history group photo option 82 is triggered by the target user, a history group photo window is displayed on the playing interface, as shown by 83 in fig. 8b. The window 83 includes a group photo image 88, which corresponds to a comment option 89; the number 83 next to 89 indicates that the group photo image 88 has 83 pieces of comment information. If 89 is triggered, the comment information corresponding to the group photo image 88 and the input area for entering comment information are displayed, and the target user can enter comment content for the group photo image 88.
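The per-image counters behind the like and comment options can be modeled with a small record per group photo image, as in the sketch below; the patent does not specify how these counts are stored, so the data structure and names are illustrative only.

    # Sketch of the per-image operation counters in the history group-photo window.
    from dataclasses import dataclass, field

    @dataclass
    class GroupPhotoEntry:
        image_id: str
        likes: int = 0
        comments: list = field(default_factory=list)

        def on_like(self) -> int:
            self.likes += 1            # e.g. 209 -> 210 in the example of fig. 8b
            return self.likes

        def on_comment(self, user: str, text: str) -> int:
            self.comments.append((user, text))
            return len(self.comments)  # count shown next to the comment option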
Step S706, if the trigger option for viewing the user information in the playing interface is selected, displaying a user information window.
In one embodiment, the playing interface of the target video further includes a trigger option for viewing user information. When this trigger option is selected, a user information window is displayed, where the user information window includes the browsing history of the target user in the first application program and historical group photo images containing the target user, and each historical group photo image corresponds to one sharing control. If the sharing control corresponding to any historical group photo image is triggered, an image sharing window is displayed, where the image sharing window includes multiple sharing objects; selecting any one of the multiple sharing objects triggers sending the selected historical group photo image to that sharing object.
The user information window further comprises one or more of a nickname and an avatar of the target user in the first application program, and the plurality of sharing objects included in the image sharing window can refer to contact users of the target user in the first application program.
For example, referring to fig. 9a, a schematic diagram of another playing interface provided by an embodiment of the present invention: 901 denotes the playing interface, and 902 denotes the trigger option for viewing user information, denoted "my"; when 902 is selected, a user information window 903 is displayed. The user information window 903 displays the nickname and avatar of the target user. It also includes the browsing history of the target user in the first application program, as shown by 130 in 903; it can be seen from 130 that the target user has watched a video named "panda", watched "lovely koala", and so on.
At least one historical group photo image 904 may also be displayed in the user information window 903, and each historical group photo image 904 corresponds to a sharing option 905. When the sharing option 905 corresponding to any historical group photo image is triggered, the terminal displays the image sharing window, so that the target user can select any sharing object in the image sharing window and trigger the terminal to share the selected historical image with that sharing object.
In one embodiment, the at least one historical group photo image may be all historical group photo images that contain the target user. In other embodiments, because the screen size of the terminal is limited, the number of the historical group photo images including the target user is large, and the terminal may not display all the historical group photo images in the user information window, the at least one historical group photo image may refer to any number of the historical group photo images including the target user, or a number of the historical group photo images closest to the current time.
If the target user wants to view all the historical group photo images, this can be achieved by triggering a group photo album option included in the user information window, represented by "my memories" in fig. 9a. When "my memories" is triggered, a group photo album window as shown in fig. 9b is displayed, and the group photo album window includes all the historical group photo images containing the target user.
In the embodiment of the invention, a playing interface of a target video of a first application program is displayed in a terminal, and along with the playing of the target video, if the playing of the target video meets an interaction condition, at least one interaction option for interacting with a target object in the target video is displayed on the playing interface; a target interaction option in the at least one interaction option is selected, and interaction prompt information corresponding to the target interaction option is output; and when a target event matched with the interaction prompt information exists, an interaction response related to the target event is output. In this way, interaction between the user and the object in the video is realized, interactivity is enhanced, and the interest of watching the video is increased.
In addition, the playing interface can also comprise a history group photo option and a trigger option for viewing user information. The target user can view the historical group photo images containing the target object through the history group photo option, and can perform a praise operation on any historical group photo image, thereby realizing interaction between users. By clicking the trigger option for viewing user information, the target user can also view his or her browsing history in the first application program and the historical group photo images related to the target user, and can share any historical group photo image with other users through the social platform, which improves the social attributes of the solution.
Based on the above embodiments of the interaction method, an embodiment of the present invention further provides another video management system. Referring to fig. 10a, which is a network topology diagram of another video management system provided in an embodiment of the present invention, the video management system shown in fig. 10a can implement the interaction methods shown in fig. 2 and fig. 7. It is applied to an online zoo application program to support the user, on the basis of playing the target video, in selecting different interaction options to choose the next scene by himself/herself, and, in combination with a gravity sensing technology, in performing an "online feeding" operation, thereby improving the diversity and interactivity of the online visit.
The video management system shown in fig. 10a uses a common terminal/server architecture; that is, the video management system shown in fig. 10a includes a terminal 1001 and a server 1002. In order to realize the function of watching animals online from multiple angles, the terminal is required to be capable of playing panoramic videos, sensing the relevant displacement of the user, and adjusting the video display angle. Therefore, the terminal 1001 should include the following sensors: a gyroscope, GPS, a camera, a microphone, a vibration sensor, a light sensor, a distance sensor, and the like. In addition, the terminal 1001 needs to be able to recognize its own movement. The feeding experience mainly relies on the gyroscope (which senses and maintains orientation based on the principle of conservation of angular momentum), the gravity sensor (which senses the horizontal direction based on the piezoelectric effect), and the acceleration sensor (a micro-accelerometer based on capacitive sensing) to sense the interactive operations of the user.
In the processing environment of this application, the terminal 1001 pulls video source data from the server 1002, receives and displays the related content, and senses the relevant operations of the user through built-in sensors (a gyroscope, an acceleration sensor, a gravity sensor, and the like), so as to provide the user with interaction and service experience.
In one embodiment, the attitude of the terminal 1001 can be determined by the values of the terminal on the horizontal axis X, the vertical axis Y, and the axis Z perpendicular to the screen, as shown in fig. 10b, which is a schematic three-axis diagram of a terminal according to an embodiment of the present invention. Specifically, when the value of the X axis approaches the gravitational acceleration, it indicates that the left side of the terminal is facing downward; when the value of the X axis approaches the negative gravitational acceleration, the right side of the terminal is facing downward; when the value of the Y axis approaches the gravitational acceleration, the lower edge of the terminal is facing downward and the terminal is in a normal portrait placement; when the value of the Y axis approaches the negative gravitational acceleration, the terminal is in an inverted portrait placement and the upper edge of the terminal faces downward; when the value of the Z axis approaches the gravitational acceleration, the screen of the terminal is facing upward; and when the value of the Z axis approaches the negative gravitational acceleration, the screen of the terminal is facing downward.
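As a minimal illustrative sketch (not part of the disclosed solution), the attitude mapping described above could be derived from the accelerometer readings on an Android terminal roughly as follows; the class name and the "near gravity" threshold are assumptions made for the example only:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Illustrative helper (hypothetical class name) that maps raw accelerometer
// values on the X, Y and Z axes to the terminal attitudes described above.
public class AttitudeClassifier implements SensorEventListener {

    // A value is treated as "close to" gravity when it exceeds this fraction of g (assumed threshold).
    private static final float NEAR_GRAVITY = 0.8f * SensorManager.GRAVITY_EARTH;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        }
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];

        if (x > NEAR_GRAVITY) {
            // Left side of the terminal is facing downward.
        } else if (x < -NEAR_GRAVITY) {
            // Right side of the terminal is facing downward.
        } else if (y > NEAR_GRAVITY) {
            // Lower edge downward: normal portrait placement.
        } else if (y < -NEAR_GRAVITY) {
            // Upper edge downward: inverted portrait placement.
        } else if (z > NEAR_GRAVITY) {
            // Screen of the terminal is facing upward.
        } else if (z < -NEAR_GRAVITY) {
            // Screen of the terminal is facing downward.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}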
In one embodiment, in the video management system shown in fig. 10a, the terminal 1001 is mainly configured to complete the relevant interactions with the user, such as displaying interaction options to the user, to receive a series of user operations, such as selecting an interaction option or triggering a feeding interaction operation, and to provide the corresponding customized content display, such as playing a feeding video or displaying a group photo image.
In one embodiment, the server 1002 is mainly used for processing requests from the terminal 1001, which cover video stream processing, image processing, and service data processing. Video stream processing refers to the relevant encoding, decoding, storage, and distribution work for the various customized videos; image processing mainly focuses on synthesizing the group photo images and completing their storage; and service data processing refers to processing the relevant information of users who use the application to watch and share videos, and providing the user with a personal-center portal view.
In an embodiment, the video management system shown in fig. 10a may further include a third-party cloud service 1003, where the third-party cloud service 1003 mainly provides a powerful natural language recognition service and an audio/video streaming call service for the server 1002.
Based on the above description, a module architecture diagram corresponding to the video management system in the embodiment of the present invention can be obtained. Referring to fig. 10c, which is a module architecture diagram of the video management system according to an embodiment of the present invention, the video management system shown in fig. 10c may include a presentation layer 111, a logic layer 112, and a service layer 113.
In one embodiment, the presentation layer 111 is handled by the terminal 1001, which provides the user with a choice by opening two portals, "explore" and "my". If the user selects the "explore" option, the videos watched through this option belong to interactive videos, such as the feeding video; the built-in sensors are mainly used to complete the user's immersive interactive experience, and different video viewing angles are given to the user by tracking the changes of the user's relevant movement track. When the "my" option is selected, a related user information viewing window can be displayed for the user according to the generated group photo images, the ability to share the group photo images is provided, and the user is supported in carrying out secondary propagation.
In one embodiment, the logic layer 112 is responsible for certain business-logic and non-business-logic processing, including receiving user terminal information, switching interactive scenes, controlling video playback, locally compositing user images, managing online users, and so on. When the user chooses to watch a video in the first application program, the terminal initiates a request to pull the content information related to the first application program and triggers the logic processing of video loading and playing. When the user shakes the terminal, the parameters of the built-in sensors change and the processing logic for switching interactive scenes is triggered; for example, if the mobile phone is shaken, the corresponding feeding video is selected for playing. When the user chooses to take a group photo with the current animal, the related picture-processing logic is triggered: the terminal needs to acquire the video image data watched by the user at the specific moment, complete the synthesis of the group photo image and its upload to the server 1002 for storage, and also store the group photo image locally for the user.
In one embodiment, the service layer 113 primarily provides audio/video call capability, image recognition and computing capability, and user information storage capability. The interaction with the user is handled by providing corresponding interfaces to the terminal 1001. For example, the image processing interface is responsible for processing the user's requests related to the user's photo (for example, a cartoonized image) and the animal group photo, and the audio/video processing interface is responsible for providing the user with the ability to watch videos.
Based on the network topology and the module architecture diagram of the above system, the embodiment of the present invention provides a system flow chart, as shown in fig. 10d. After the user starts the first application program, the system is entered and various data are initialized. If the user selects the "explore" option, the terminal 1001 sends a request to the server 1002 to prepare for loading the relevant video resource data, and the server 1002 returns the related video resource link to the terminal 1001 for pulling according to the request sent by the terminal 1001. When watching a video in the first application program, if the user selects an interaction option such as the feeding option, the user can move the terminal 1001 so that the built-in sensors sense the change, thereby triggering the related low-level interface and entering the logic-level processing; the server 1002 then continues to issue different video resources according to the different interaction mode requests. If the group photo option is selected, the picture-shooting interface is called to shoot the user's image, the picture-synthesis interface is then called to synthesize it with the animal image, and the result is uploaded and stored. The user can also enter the user information window through the personal center option in the playing interface of the terminal 1001, browse the related images shot before, and trigger a sharing operation for any image, whereupon the related interface is called to share the image to each social platform. A minimal sketch of the request for a video resource link is given below.
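The sketch below shows one hypothetical form such a request could take; the endpoint path, the "option" query parameter, and the plain-text response format are assumptions made for illustration and are not specified by this disclosure:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of a request the terminal 1001 might send to the server 1002
// to obtain the video resource link for a selected interaction option.
public class VideoResourceClient {

    public static String fetchResourceLink(String baseUrl, String interactionOption) throws Exception {
        // Hypothetical endpoint; the real interface is not defined in this disclosure.
        URL url = new URL(baseUrl + "/video/resource?option=" + interactionOption);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            // The server is assumed to answer with the resource link in the response body.
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }
}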
In summary, the core of the above system lies in sensing the user's behavior and operations with the sensors built into the terminal, so that different experience videos are issued and the user obtains an immersive interactive experience. After the user enters the "explore" option to watch a video, the user can select an interaction option when the video playing is nearly finished, so as to determine the scene displayed by the terminal in the next step, and the user is prompted to perform an operation on the terminal according to the selected interaction option. For example, after the sensor senses the feeding operation, it triggers the related interface callback, the logic layer processes the corresponding requests, and the server returns the corresponding interactive response. On an Android device, the sensor-related interfaces are provided under the android.hardware package, which includes the Sensor, SensorEvent, and SensorManager classes and the SensorEventListener interface. All available sensors in the terminal are acquired, the information transmitted by the sensors is decoded, and the related event callbacks are triggered to complete the user's feeding operation.
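A minimal sketch of this callback flow is shown below; the shake threshold and the listener interface name are assumptions made for illustration and are not taken from this disclosure:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// A SensorEventListener is registered with the SensorManager, and a simple
// acceleration threshold (illustrative value) treats a shake of the terminal
// as the feeding operation described above.
public class FeedingGestureDetector implements SensorEventListener {

    private static final float SHAKE_THRESHOLD = 15.0f; // m/s^2, assumed value

    public interface OnFeedingListener {
        void onFeedingTriggered();
    }

    private final OnFeedingListener listener;

    public FeedingGestureDetector(OnFeedingListener listener) {
        this.listener = listener;
    }

    public void register(SensorManager sensorManager) {
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];
        // Acceleration magnitude beyond gravity is treated as a shake.
        double magnitude = Math.sqrt(x * x + y * y + z * z) - SensorManager.GRAVITY_EARTH;
        if (magnitude > SHAKE_THRESHOLD) {
            listener.onFeedingTriggered();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // No-op for this sketch.
    }
}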
If the user selects the group photo option, the playing picture of the video at the current time point is captured as the animal image, the shooting capability of the system is called to shoot the user's image, and operations such as beautifying and filtering are completed by means of a plug-in; then the image processing interface provided by the server side is called to upload the two images and generate a group photo image.
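The following is an illustrative sketch of the compositing step only; the placement and scaling of the user image are assumptions made for the example, and, as described above, the actual synthesis may instead be performed by the server-side image processing interface:

import android.graphics.Bitmap;
import android.graphics.Canvas;

// Illustrative local group-photo synthesis: the user image is drawn on top of
// the captured animal frame.
public final class GroupPhotoCompositor {

    private GroupPhotoCompositor() {
    }

    public static Bitmap compose(Bitmap animalFrame, Bitmap userImage) {
        Bitmap result = animalFrame.copy(Bitmap.Config.ARGB_8888, true);
        Canvas canvas = new Canvas(result);

        // Scale the user image to roughly one third of the frame width (assumed ratio).
        int targetWidth = result.getWidth() / 3;
        int targetHeight = userImage.getHeight() * targetWidth / userImage.getWidth();
        Bitmap scaledUser = Bitmap.createScaledBitmap(userImage, targetWidth, targetHeight, true);

        // Draw the user image in the bottom-right corner of the animal frame (assumed placement).
        float left = result.getWidth() - scaledUser.getWidth();
        float top = result.getHeight() - scaledUser.getHeight();
        canvas.drawBitmap(scaledUser, left, top, null);
        return result;
    }
}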
When the system is used, interaction exceptions such as a network exception, a terminal exception, or an interface exception may occur for various reasons, and exception prompt information may then be output to the user. In a specific implementation, when a network exception occurs, the exception prompt information can be used to prompt the user that network loading is abnormal and the normal software functions cannot be used; when a terminal exception occurs, the exception prompt information is used to prompt the user that a sensor of the terminal has a problem, so that shaking the mobile phone cannot trigger the video interaction normally; and when an interface request exception occurs, the exception prompt information may present an error message to the user according to the related error wording.
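Purely as an illustration of this mapping from exception type to prompt text (the enum and the message wording are examples, not the actual prompts of the system):

// Simple sketch mapping the exception types mentioned above to prompt messages.
public final class ExceptionPrompter {

    public enum InteractionException {
        NETWORK, TERMINAL_SENSOR, INTERFACE_REQUEST
    }

    private ExceptionPrompter() {
    }

    public static String promptFor(InteractionException type, String errorDetail) {
        switch (type) {
            case NETWORK:
                return "Network loading is abnormal; normal software functions cannot be used.";
            case TERMINAL_SENSOR:
                return "A sensor of the terminal has a problem; shaking the phone will not trigger video interaction.";
            case INTERFACE_REQUEST:
                return "Interface request failed: " + errorDetail;
            default:
                return "Unknown interaction exception.";
        }
    }
}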
Compared with the live-broadcast mode currently adopted for moving zoos online, with the video management system shown in fig. 10a to 10d the user is no longer only a spectator watching a zoo video: the user can autonomously choose which scene to browse and can interact with the animals in the video, which improves the interactivity of online zoo visiting. Secondly, compared with the interactive form of interactive videos in the prior art, in the video management system provided by the invention the user can not only trigger the selection of scenes by clicking, but also trigger interactive forms such as online feeding by shaking the terminal back and forth, and can shoot a personal image and synthesize it with the animal image to obtain a group photo image, which the user can then share with other users through a social platform. Users can perform praise interaction, comment interaction, and the like on each other's group photo images, which enhances the interest and sociability of the user experience and the immersion and sense of participation in video watching.
Based on the above embodiment of the interaction method, the embodiment of the invention also provides an interaction device. Fig. 11 is a schematic structural diagram of an interaction device according to an embodiment of the present invention. The interaction device shown in fig. 11 may operate as follows:
a display unit 1101 for displaying a play interface of a target video of a first application in the terminal;
the display unit 1101 is further configured to display at least one interaction option interacting with a target object in the target video on the play interface when the play of the target video meets an interaction condition;
the output unit 1102 is configured to select a target interaction option in the at least one interaction option, and output an interaction prompt message corresponding to the target interaction option;
the output unit 1102 is further configured to output an interactive response related to the target event when the target event matching the interactive prompt information exists.
In one embodiment, at least one preset interaction option to be displayed related to the target object is stored in the terminal; the at least one interactive option comprises each interactive option in the at least one interactive option to be displayed; or, the at least one interactive option comprises an interactive option related to the interactive condition, which is screened from the at least one interactive option to be displayed; the at least one interactive option to be displayed comprises any one or more of: an image operation option for operating the image corresponding to the target object, and an object operation option for operating the target object.
In one embodiment, the interaction condition includes any one of: a playing time length condition, a playing picture condition and an audio data condition;
if the interaction condition comprises a playing time length condition, the playing of the target video meeting the interaction condition means that the played time length of the target video is equal to the target time length indicated by the playing time length condition;
if the interaction condition comprises a playing picture condition, the playing of the target video meeting the interaction condition means that the playing picture of the target video at the current moment is the target picture indicated by the playing picture condition;
if the interaction condition comprises an audio data condition, the playing of the target video meeting the interaction condition means that the audio data corresponding to the target video played at the current moment matches the target audio data indicated by the audio data condition.
In one embodiment, if the target interaction option includes an image operation option for operating an image corresponding to the target object, where the image operation option includes a group photo option, the interaction prompt information corresponding to the target interaction option includes an image acquisition window, and the image acquisition window includes an image preview area and an image acquisition control;
if the image acquisition control is not triggered, displaying a picture acquired at the current moment in the image preview area; and if the image acquisition control is triggered, displaying a first image in the image preview area, wherein the first image is obtained by shooting a picture acquired by triggering the image acquisition control.
In one embodiment, the image capture window further comprises at least one image special effects option;
if the at least one image special effect option is selected and the image acquisition control is not triggered, displaying a special effect picture in the image preview area, wherein the special effect picture is generated based on a picture acquired at the current moment and the selected image special effect option;
and if the at least one image special effect option is selected and the image acquisition control is triggered, displaying a second image in the image preview area, wherein the second image is generated based on the acquired picture when the image acquisition control is triggered and the selected image special effect.
In one embodiment, the target event matched with the interactive prompt message means that the image acquisition control is triggered.
In one embodiment, when the image capture control is triggered, the image capture window further comprises a determination control, and the target event matching with the interactive prompt means that the determination control is touched.
In one embodiment, the interactive response related to the target event includes a group photo image, the group photo image includes an image of a target object and the second image, the image of the target object refers to a playing picture of the target video when any one of the interactive options is triggered, or the image of the target object refers to a preset image including the target object.
In one embodiment, if the target object is an animal, the target interaction option includes an object operation option for operating the target object, and the object operation option includes feeding the target object, the interaction prompt information corresponding to the target interaction option includes a prompt animation for triggering the feeding interaction; the target event matched with the interaction prompt information includes: the attitude parameters of the terminal matching preset attitude parameters when the feeding operation is executed according to the operation mode indicated by the prompt animation, wherein the operation mode indicated by the prompt animation includes shaking the terminal in a target direction.
In one embodiment, the interactive response related to the target event includes a feeding video of feeding the target object from the target user's perspective, the target user being the user who uses the terminal, and the output unit 1102 performs the following operation when outputting the interactive response related to the target event: playing the feeding video on the terminal.
In one embodiment, the target interaction option is selected by any one or more of the following: the method comprises a touch control mode and a voice mode, wherein the voice mode refers to inputting voice comprising the target interaction option.
In one embodiment, the playback interface includes a history group photo option, and the interaction apparatus further includes a processing unit 1103. The display unit 1101 is further configured to display a history group photo window on the playing interface when the history group photo option is triggered, where the history group photo window includes at least one group photo image, each group photo image includes an image of the target object, and each group photo image corresponds to one operation option and the number of times that operation option has been triggered. The processing unit 1103 is configured to, if the operation option corresponding to any group photo image is triggered, increase by one the number of times that the operation option corresponding to that group photo image has been triggered.
In an embodiment, the playing interface includes a trigger option for viewing user information. The display unit 1101 is further configured to display a user information window when the trigger option is selected, where the user information window includes the browsing history of the target user in the first application program and historical group photo images containing the target user, and each historical group photo image corresponds to one sharing control; the display unit 1101 is further configured to display an image sharing window if the sharing control corresponding to any historical group photo image is triggered, where the image sharing window includes a plurality of sharing objects; and the processing unit 1103 is further configured to select any one of the plurality of sharing objects, so as to trigger sending of the selected historical group photo image to that sharing object.
According to an embodiment of the present invention, the steps involved in the interaction methods shown in fig. 2 and fig. 7 may be performed by the units in the interaction apparatus shown in fig. 11. For example, steps S201 to S202 shown in fig. 2 can be performed by the display unit 1101 of the interactive apparatus shown in fig. 11, and steps S203 to S204 can be performed by the output unit 1102 of the interactive apparatus shown in fig. 11; as another example, steps S701 to S702 and steps S705 to S706 in the interaction method shown in fig. 7 may be performed by the display unit 1101 in the interaction apparatus shown in fig. 11, and steps S703 to S704 may be performed by the output unit 1102 in the interaction apparatus shown in fig. 11.
According to another embodiment of the present invention, the units in the interaction apparatus shown in fig. 11 may be respectively or entirely combined into one or several other units to form the interaction apparatus, or one or more of the units may be further split into multiple functionally smaller units to form the interaction apparatus, which can achieve the same operation without affecting the technical effect of the embodiment of the present invention. The above units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the interaction apparatus may also include other units, and in practical applications these functions may also be implemented with the assistance of other units and through the cooperation of multiple units.
According to another embodiment of the present invention, the interaction apparatus shown in fig. 11 may be constructed by running a computer program (including program code) capable of executing the steps involved in the methods shown in fig. 2 and fig. 7 on a general-purpose computing device, such as a computer, that includes processing and storage elements such as a central processing unit (CPU), a random access memory (RAM), and a read-only memory (ROM), thereby implementing the interaction method of the embodiment of the present invention. The computer program may be recorded on, for example, a computer-readable storage medium, and loaded into and executed by the above-described computing device via the computer-readable storage medium.
In the embodiment of the invention, a playing interface of a target video of a first application program is displayed in a terminal, and along with the playing of the target video, if the playing of the target video meets an interaction condition, at least one interaction option for interacting with a target object in the target video is displayed on the playing interface; a target interaction option in the at least one interaction option is selected, and interaction prompt information corresponding to the target interaction option is output; and when a target event matched with the interaction prompt information exists, an interaction response related to the target event is output. In this way, interaction between the user and the object in the video is realized, interactivity is enhanced, and the interest of watching the video is increased.
Based on the method and the device embodiment, the embodiment of the invention provides a terminal. Referring to fig. 12, a schematic structural diagram of a terminal provided in this embodiment is shown. The terminal shown in fig. 12 comprises at least a processor 1201, an input interface 1202, an output interface 1203 and a computer storage medium 1204. The processor 1201, the input interface 1202, the output interface 1203, and the computer storage medium 1204 may be connected by a bus or other means.
A computer storage medium 1204 may be stored in the memory of the terminal, the computer storage medium 1204 being for storing a computer program comprising program instructions, the processor 1201 being for executing the program instructions stored by the computer storage medium 1204. The processor 1201 (or CPU) is a computing core and a control core of the terminal, and is adapted to implement one or more instructions, and specifically adapted to load and execute:
displaying a playing interface of a target video of a first application program in a terminal; when the playing of the target video meets an interaction condition, displaying at least one interaction option interacting with a target object in the target video on the playing interface; selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option; and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event.
The embodiment of the invention also provides a computer storage medium (Memory), which is a Memory device in the terminal and is used for storing programs and data. It is understood that the computer storage medium herein may include a built-in storage medium in the terminal, and may also include an extended storage medium supported by the terminal. The computer storage medium provides a storage space that stores an operating system of the terminal. Also stored in the memory space are one or more instructions, which may be one or more computer programs (including program code), suitable for loading and execution by the processor 1201. The computer storage medium may be a high-speed RAM memory, or may be a non-volatile memory (non-volatile memory), such as at least one disk memory; and optionally at least one computer storage medium located remotely from the processor.
In one embodiment, the computer storage medium may be loaded and executed by processor 1201 with one or more instructions stored in the computer storage medium to perform the corresponding steps described above with respect to the interaction methods shown in fig. 2 and 7. In particular implementations, one or more instructions in the computer storage medium are loaded by the processor 1201 and perform the steps of:
displaying a playing interface of a target video of a first application program in a terminal; when the playing of the target video meets an interaction condition, displaying at least one interaction option interacting with a target object in the target video on the playing interface; selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option; and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event.
In one embodiment, at least one preset interaction option to be displayed related to the target object is stored in the terminal; the at least one interactive option comprises each interactive option in the at least one interactive option to be displayed; or, the at least one interactive option comprises an interactive option related to the interactive condition, which is screened from the at least one interactive option to be displayed;
the at least one interactive option to be displayed comprises any one or more of: an image operation option for operating the image corresponding to the target object, and an object operation option for operating the target object.
In one embodiment, the interaction condition includes any one of: a playing time length condition, a playing picture condition and an audio data condition;
if the interaction condition comprises a playing time length condition, the playing of the target video meeting the interaction condition means that the played time length of the target video is equal to the target time length indicated by the playing time length condition;
if the interaction condition comprises a playing picture condition, the playing of the target video meeting the interaction condition means that the playing picture of the target video at the current moment is the target picture indicated by the playing picture condition;
if the interaction condition comprises an audio data condition, the playing of the target video meeting the interaction condition means that the audio data corresponding to the target video played at the current moment matches the target audio data indicated by the audio data condition.
In one embodiment, if the target interaction option includes an image operation option for operating an image corresponding to the target object, where the image operation option includes a group photo option, the interaction prompt information corresponding to the target interaction option includes an image acquisition window, and the image acquisition window includes an image preview area and an image acquisition control;
if the image acquisition control is not triggered, displaying a picture acquired at the current moment in the image preview area; and if the image acquisition control is triggered, displaying a first image in the image preview area, wherein the first image is obtained by shooting a picture acquired by triggering the image acquisition control.
In one embodiment, the image capture window further comprises at least one image special effects option;
if the at least one image special effect option is selected and the image acquisition control is not triggered, displaying a special effect picture in the image preview area, wherein the special effect picture is generated based on a picture acquired at the current moment and the selected image special effect option;
and if the at least one image special effect option is selected and the image acquisition control is triggered, displaying a second image in the image preview area, wherein the second image is generated based on the acquired picture when the image acquisition control is triggered and the selected image special effect.
In one embodiment, the target event matched with the interactive prompt message means that the image acquisition control is triggered.
In one embodiment, when the image capture control is triggered, the image capture window further comprises a determination control, and the target event matching with the interactive prompt means that the determination control is touched.
In one embodiment, the interactive response related to the target event includes a group photo image, the group photo image includes an image of a target object and the second image, the image of the target object refers to a playing picture of the target video when any one of the interactive options is triggered, or the image of the target object refers to a preset image including the target object.
In one embodiment, if the target object is an animal, the target interaction option includes an object operation option for operating the target object, and the object operation option includes feeding the target object, the interaction prompt information corresponding to the target interaction option includes a prompt animation for triggering the feeding interaction; the target event matched with the interaction prompt information includes: the attitude parameters of the terminal matching preset attitude parameters when the feeding operation is executed according to the operation mode indicated by the prompt animation, wherein the operation mode indicated by the prompt animation includes shaking the terminal in a target direction.
In one embodiment, the interactive response related to the target event includes a feeding video of feeding the target object from the target user's perspective, the target user being the user who uses the terminal, and the processor 1201 performs the following steps when outputting the interactive response related to the target event: playing the feeding video on the terminal.
In one embodiment, the target interaction option is selected by any one or more of the following: the method comprises a touch control mode and a voice mode, wherein the voice mode refers to inputting voice comprising the target interaction option.
In one embodiment, the playback interface includes a history group photo option, the method further comprising:
when the historical group photo option is triggered, displaying a historical group photo window on the playing interface, wherein the historical group photo window comprises at least one group photo image, each group photo image comprises an image of the target object, and each group photo image corresponds to one operation option and the number of times that the operation option is triggered;
and if the operation option corresponding to any group photo image is triggered, increasing by one the number of times that the operation option corresponding to that group photo image has been triggered.
In one embodiment, the playback interface includes a trigger option to view user information, and the processor 1201 is further configured to: when the trigger option is selected, display a user information window, wherein the user information window comprises the browsing history of the target user in the first application program and historical group photo images containing the target user, and each historical group photo image corresponds to one sharing control; if the sharing control corresponding to any historical group photo image is triggered, display an image sharing window, wherein the image sharing window comprises a plurality of sharing objects; and select any one of the plurality of sharing objects to trigger sending the selected historical group photo image to that sharing object.
In the embodiment of the invention, a playing interface of a target video of a first application program is displayed in a terminal, and along with the playing of the target video, if the playing of the target video meets an interaction condition, at least one interaction option for interacting with a target object in the target video is displayed on the playing interface; a target interaction option in the at least one interaction option is selected, and interaction prompt information corresponding to the target interaction option is output; and when a target event matched with the interaction prompt information exists, an interaction response related to the target event is output. In this way, interaction between the user and the object in the video is realized, interactivity is enhanced, and the interest of watching the video is increased.
According to an aspect of the present application, an embodiment of the present invention also provides a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor 1201 reads the computer instructions from the computer-readable storage medium and executes them to cause the terminal to perform the interaction methods shown in fig. 2 and fig. 7, specifically: displaying a playing interface of a target video of a first application program in the terminal; when the playing of the target video meets the interaction condition, displaying at least one interaction option for interacting with a target object in the target video on the playing interface; selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option; and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event.
In the embodiment of the invention, a playing interface of a target video of a first application program is displayed in a terminal, and along with the playing of the target video, if the playing of the target video meets an interaction condition, at least one interaction option for interacting with a target object in the target video is displayed on the playing interface; a target interaction option in the at least one interaction option is selected, and interaction prompt information corresponding to the target interaction option is output; and when a target event matched with the interaction prompt information exists, an interaction response related to the target event is output. In this way, interaction between the user and the object in the video is realized, interactivity is enhanced, and the interest of watching the video is increased.

Claims (15)

1. An interactive method, comprising:
displaying a playing interface of a target video of a first application program in a terminal;
when the playing of the target video meets an interaction condition, displaying at least one interaction option interacting with a target object in the target video on the playing interface;
selecting a target interaction option in the at least one interaction option, and outputting interaction prompt information corresponding to the target interaction option;
and when a target event matched with the interaction prompt information exists, outputting an interaction response related to the target event.
2. The method of claim 1, wherein at least one preset interaction option to be displayed in relation to the target object is stored in the terminal; the at least one interactive option comprises each interactive option in the at least one interactive option to be displayed; or, the at least one interactive option comprises an interactive option related to the interactive condition, which is screened from the at least one interactive option to be displayed;
the at least one interactive option to be displayed comprises any one or more of: an image operation option for operating the image corresponding to the target object, and an object operation option for operating the target object.
3. The method of claim 1, wherein the interaction condition comprises any one of: a playing time length condition, a playing picture condition and an audio data condition;
if the interaction condition comprises a playing time length condition, the playing of the target video meeting the interaction condition means that the played time length of the target video is equal to the target time length indicated by the playing time length condition;
if the interaction condition comprises a playing picture condition, the playing of the target video meeting the interaction condition means that the playing picture of the target video at the current moment is the target picture indicated by the playing picture condition;
if the interaction condition comprises an audio data condition, the playing of the target video meeting the interaction condition means that the audio data corresponding to the target video played at the current moment matches the target audio data indicated by the audio data condition.
4. The method of claim 2, wherein if the target interaction option includes an image manipulation option for manipulating an image corresponding to the target object, and the image manipulation option includes a group photo option, the interaction prompt information corresponding to the target interaction option includes an image capture window, and the image capture window includes an image preview area and an image capture control;
if the image acquisition control is not triggered, displaying a picture acquired at the current moment in the image preview area; and if the image acquisition control is triggered, displaying a first image in the image preview area, wherein the first image is obtained by shooting a picture acquired by triggering the image acquisition control.
5. The method of claim 4, wherein the image acquisition window further comprises at least one image special effects option;
if the at least one image special effect option is selected and the image acquisition control is not triggered, displaying a special effect picture in the image preview area, wherein the special effect picture is generated based on a picture acquired at the current moment and the selected image special effect option;
and if the at least one image special effect option is selected and the image acquisition control is triggered, displaying a second image in the image preview area, wherein the second image is generated based on the acquired picture when the image acquisition control is triggered and the selected image special effect.
6. The method of claim 5, wherein the target event matching the interactive prompt message is that the image capture control is triggered.
7. The method of claim 5, wherein when the image capture control is triggered, the image capture window further comprises a determination control, and the target event matching an interactive prompt is that the determination control is touched.
8. The method of claim 6 or 7, wherein the interactive response related to the target event comprises a group photo image, the group photo image comprises an image of a target object and the second image, the image of the target object refers to a playing picture of the target video when any interactive option is triggered, or the image of the target object refers to a preset image comprising the target object.
9. The method of claim 2, wherein if the target object is an animal, the target interaction option comprises an object operation option for operating the target object, the object operation option comprises feeding the target object, and the interaction prompt information corresponding to the target interaction option comprises a prompt animation for triggering feeding interaction; the target event matched with the interaction prompt information comprises: the attitude parameters of the terminal matching preset attitude parameters when the feeding operation is executed according to the operation mode indicated by the prompt animation, wherein the operation mode indicated by the prompt animation comprises shaking the terminal in a target direction.
10. The method of claim 9, wherein the interactive response related to the target event comprises a feeding video of feeding the target object from the perspective of a target user, the target user being a user using the terminal, and wherein outputting the interactive response related to the target event comprises: playing the feeding video on the terminal.
11. The method of claim 1, wherein the target interaction option is selected by any one or more of: the method comprises a touch control mode and a voice mode, wherein the voice mode refers to inputting voice comprising the target interaction option.
12. The method of claim 1, wherein the playback interface includes a historical group photo option, the method further comprising:
when the historical group photo option is triggered, displaying a historical group photo window on the playing interface, wherein the historical group photo window comprises at least one group photo image, each group photo image comprises an image of the target object, and each group photo image corresponds to one operation option and the number of times that the operation option is triggered;
and if the operation option corresponding to any group photo image is triggered, increasing by one the number of times that the operation option corresponding to that group photo image has been triggered.
13. The method of claim 1, wherein the playback interface includes a trigger option to view user information, the method further comprising:
when the trigger option is selected, displaying a user information window, wherein the user information window comprises browsing history of the target user in the first application program and history group photo images containing the target user, and each history group photo image corresponds to one sharing control;
if the sharing control corresponding to any historical group photo image is triggered, displaying an image sharing window, wherein the image sharing window comprises a plurality of sharing objects;
and selecting any one of the plurality of sharing objects to trigger sending any selected historical group photo image to any one sharing object.
14. An interactive device, comprising:
the display unit is used for displaying a playing interface of a target video of the first application program in the terminal;
the display unit is further configured to display at least one interaction option interacting with a target object in the target video on the play interface when the play of the target video meets an interaction condition;
the output unit is used for selecting a target interaction option in the at least one interaction option and outputting interaction prompt information corresponding to the target interaction option;
and the output unit is also used for outputting an interactive response related to the target event when the target event matched with the interactive prompt information exists.
15. A terminal, comprising:
a processor adapted to implement one or more instructions; and
A computer storage medium having stored thereon one or more instructions adapted to be loaded by the processor and to perform the interactive method of any of claims 1-13.
CN202011184274.2A 2020-10-29 2020-10-29 Interaction method, device, terminal and storage medium Pending CN113518264A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011184274.2A CN113518264A (en) 2020-10-29 2020-10-29 Interaction method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011184274.2A CN113518264A (en) 2020-10-29 2020-10-29 Interaction method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN113518264A true CN113518264A (en) 2021-10-19

Family

ID=78060904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011184274.2A Pending CN113518264A (en) 2020-10-29 2020-10-29 Interaction method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113518264A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500438A (en) * 2022-01-11 2022-05-13 北京达佳互联信息技术有限公司 File sharing method and device, electronic equipment and storage medium
CN114398135A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Interaction method, interaction device, electronic device, storage medium, and program product
CN114489404A (en) * 2022-01-27 2022-05-13 北京字跳网络技术有限公司 Page interaction method, device, equipment and storage medium
CN114422843A (en) * 2022-03-10 2022-04-29 北京达佳互联信息技术有限公司 Video color egg playing method and device, electronic equipment and medium
CN114422843B (en) * 2022-03-10 2024-03-26 北京达佳互联信息技术有限公司 video color egg playing method and device, electronic equipment and medium
CN115022701A (en) * 2022-05-30 2022-09-06 北京达佳互联信息技术有限公司 Video playing method, terminal, device, electronic equipment, medium and program product
CN115022701B (en) * 2022-05-30 2023-09-26 北京达佳互联信息技术有限公司 Video playing method, terminal, device, electronic equipment, medium and program product
CN115097984A (en) * 2022-06-22 2022-09-23 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN115097984B (en) * 2022-06-22 2024-05-17 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10039988B2 (en) Persistent customized social media environment
CN113518264A (en) Interaction method, device, terminal and storage medium
CN111178191B (en) Information playing method and device, computer readable storage medium and electronic equipment
RU2527199C2 (en) Avatar integrated shared media selection
CN109920065B (en) Information display method, device, equipment and storage medium
CN109729372B (en) Live broadcast room switching method, device, terminal, server and storage medium
CN112261481B (en) Interactive video creating method, device and equipment and readable storage medium
CN112905074B (en) Interactive interface display method, interactive interface generation method and device and electronic equipment
CN103812761A (en) Apparatus and method for providing social network service using augmented reality
CN113230655B (en) Virtual object control method, device, equipment, system and readable storage medium
CN114302214B (en) Virtual reality equipment and anti-jitter screen recording method
CN112181573A (en) Media resource display method, device, terminal, server and storage medium
CN111327916B (en) Live broadcast management method, device and equipment based on geographic object and storage medium
CN106604147A (en) Video processing method and apparatus
KR20230062857A (en) augmented reality messenger system
CN113490010B (en) Interaction method, device and equipment based on live video and storage medium
KR20210102698A (en) Method, system, and computer program for providing communication using video call bot
CN114302160B (en) Information display method, device, computer equipment and medium
CN109819341B (en) Video playing method and device, computing equipment and storage medium
CN113886611A (en) Resource display method and device, computer equipment and medium
CN111382355A (en) Live broadcast management method, device and equipment based on geographic object and storage medium
CN114415907B (en) Media resource display method, device, equipment and storage medium
JP5519751B2 (en) Image viewing system, image viewing method, image viewing server, and terminal device
CN110008357A (en) Event recording method, device, electronic equipment and storage medium
CN114430494B (en) Interface display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053583

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination