CN116962338A - Method and device for interaction between objects, electronic equipment and storage medium
- Publication number: CN116962338A
- Application number: CN202310926943.6A
- Authority: CN (China)
- Prior art keywords: interaction, interaction mode, terminal, area, media resource
- Legal status: Pending (the status is an assumption and is not a legal conclusion)
Classifications
- H04L51/10 — User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents: multimedia information
- H04L51/046 — Real-time or near real-time messaging, e.g. instant messaging [IM]: interoperability with other network applications or services
- H04L51/06 — Message adaptation to terminal or network requirements
Abstract
The disclosure provides a method, an apparatus, an electronic device, and a storage medium for interaction between objects, and belongs to the technical field of multimedia. The method includes: receiving, through a target application, a first media resource sent by a second terminal, where the target application is any application in the first terminal, the second terminal is a terminal logged in by a second object, and the second object is an object having an association relationship with the first object; displaying a preview component of the first media resource on a desktop of the first terminal, where the preview component is used to display preview information of the first media resource; and in response to a trigger operation on the preview component, displaying the first media resource in the target application. By displaying on the desktop a preview component that presents preview information of a media resource, the method increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
Description
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to a method and apparatus for interaction between objects, an electronic device, and a storage medium.
Background
With the development of multimedia technology, more and more short-video applications are emerging. People can publish works such as short videos and picture sets in a short-video application and, at the same time, watch works published by others. While watching a work published by someone else, a viewer is likely to want to interact with the object that published it. How to interact with the publishing object of a work is therefore a technical problem to be solved.
In the related art, a user can enter the homepage of the publishing object by triggering an entry control in the playing interface of a work, and then interact with the publishing object in a conversation interface displayed after triggering a conversation control on that homepage. Alternatively, the user can interact with the publishing object in the comment interface of the work by sending content such as pictures and text.
However, these interaction modes are limited in variety and offer poor interactivity, so human-computer interaction efficiency is low and user experience is poor.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for interaction between objects that increase the diversity of interaction modes and improve the interaction efficiency between objects by displaying a preview component of a media resource on the desktop, thereby improving human-computer interaction efficiency and user experience. The technical scheme of the present disclosure is as follows.
According to an aspect of an embodiment of the present disclosure, there is provided a method for interaction between objects, including:
receiving, through a target application, a first media resource sent by a second terminal, wherein the target application is any application in the first terminal, the second terminal is a terminal logged in by a second object, and the second object is an object having an association relationship with the first object;
displaying a preview component of the first media resource on a desktop of the first terminal, wherein the preview component is used for displaying preview information of the first media resource;
and responding to the triggering operation of the preview component, and displaying the first media resource in the target application.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for interaction between objects, including:
the receiving unit is configured to receive, through a target application, a first media resource sent by a second terminal, wherein the target application is any application in the first terminal, the second terminal is a terminal logged in by a second object, and the second object is an object having an association relationship with the first object;
a first display unit configured to display a preview component of the first media resource on a desktop of the first terminal, where the preview component is used to display preview information of the first media resource;
and a second display unit configured to display the first media resource in the target application in response to a trigger operation on the preview component.
In some embodiments, the apparatus further comprises:
the first determining unit is configured to determine a second media resource corresponding to the first media resource and a target interaction mode based on the target application, wherein the target interaction mode is an object interaction mode corresponding to the first media resource;
and the first sending unit is configured to send the second media resource to the second terminal, and the second terminal displays a preview component of the second media resource on a desktop of the second terminal, wherein the preview component of the second media resource is used for displaying preview information of the second media resource.
In some embodiments, the target interaction mode is a first interaction mode, and the first interaction mode is an object interaction mode for interaction based on media resources photographed in real time;
the first determining unit is configured to respond to the target interaction mode as the first interaction mode, display an interaction mode interface of the target application, wherein the interaction mode interface displays a first area, and the first area is used for displaying a picture acquired by the first terminal in real time; and responding to shooting operation, and determining the media resources acquired in the first area as the second media resources based on the target application, wherein the second media resources are videos or pictures.
In some embodiments, the target interaction mode is a second interaction mode, the second interaction mode is an object interaction mode for interaction based on media resources shot in a first same-frame mode, and the first same-frame mode is a same-frame mode for displaying a plurality of mutually independent media resources in an interaction mode interface of the target application;
the first determining unit is configured to respond to the target interaction mode being the second interaction mode, display an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a first area and a second area, the first area is used for displaying the first media resource, and the second area is used for displaying a picture acquired by the first terminal in real time; and responding to shooting operation, and based on the target application, splicing the first media resource and the media resource acquired in the second area to obtain the second media resource, wherein the second media resource is a video or a picture.
In some embodiments, the target interaction mode is a third interaction mode, the third interaction mode is an object interaction mode for interaction based on media resources shot by a second same-frame mode, and the second same-frame mode is a same-frame mode for displaying a plurality of media resources overlapping each other in an interaction mode interface of the target application;
the first determining unit is configured to respond to the target interaction mode being the third interaction mode, display an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a first area, all areas of the first area are used for displaying the first media resource, and partial areas of the first area are used for displaying a picture acquired by the first terminal in real time; and responding to shooting operation, and determining the second media resource based on the first media resource and the media resource acquired in a partial area of the first area, wherein the second media resource is a video or a picture.
In some embodiments, the target interaction mode is a fourth interaction mode, and the fourth interaction mode is an object interaction mode for interaction based on media resources shot by any special effect template;
the first determining unit is configured to respond to the target interaction mode being the fourth interaction mode, display an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a first area and a third area, the first area is used for displaying a picture acquired by the first terminal in real time, and the third area is used for displaying a plurality of special effect templates; in response to a selection operation of a target special effect template, adding a special effect corresponding to the target special effect template in a picture displayed in the first area, wherein the target special effect template is any special effect template in the plurality of special effect templates; and responding to shooting operation, and determining the media resources acquired in the first area as the second media resources based on the target application, wherein the second media resources are videos or pictures with special effects corresponding to the target special effect template.
In some embodiments, the target interaction mode is a fifth interaction mode, and the fifth interaction mode is an object interaction mode for interaction based on manually input media resources;
the first determining unit is configured to respond to the target interaction mode being the fifth interaction mode, display an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a fourth area, the fourth area is used for displaying prompt information, and the prompt information is used for prompting the first object to input media resources in the fourth area; and in response to the input operation to the fourth area, determining the second media resource based on the first media resource and the media resource input in the fourth area, wherein the second media resource is text.
In some embodiments, the target interaction mode is a sixth interaction mode, and the sixth interaction mode is an object interaction mode for interaction based on automatically input media resources;
the first determining unit is configured to respond to the target interaction mode being the sixth interaction mode, display an interaction mode interface of the target application, wherein a fifth area is displayed on the interaction mode interface, and the fifth area is used for displaying the first media resource, and the first media resource comprises a test question and a plurality of test options; and in response to a selection operation of the target test option, determining the second media resource based on the first media resource and the target test option, wherein the target test option is any one of the plurality of test options, and the second media resource is text.
In some embodiments, the apparatus further comprises:
the third display unit is configured to respond to object interaction operation in the target application, display an interaction mode interface of the target application, wherein the interaction mode interface displays a plurality of object interaction modes, the object interaction operation is used for indicating the first object to interact with at least one third object, and the third object is an object with an association relation with the first object;
a second determining unit configured to determine, based on the target application, a third media resource corresponding to a seventh interaction mode in response to a selection operation of the seventh interaction mode, the seventh interaction mode being any one of the plurality of object interaction modes;
the second sending unit is configured to send the third media resource to at least one third terminal, the third terminal is a terminal logged in by the third object, the third terminal is used for displaying a preview component of the third media resource on a desktop of the third terminal, and the preview component is used for displaying preview information of the third media resource.
In some embodiments, the target interaction mode is a same-frame interaction mode, and the same-frame interaction mode is an object interaction mode for interaction based on media resources shot in any one of the same-frame modes;
the second determining unit includes:
the display subunit is configured to respond to the selection operation of the same-frame interaction mode, and display a first area and a second area in the interaction mode interface, wherein the first area is used for displaying pictures acquired by the first terminal in real time, and the second area is used for displaying various same-frame modes;
the first determining subunit is configured to respond to a selection operation of a target same-frame mode, determine a first display area and a second display area in the first area based on the target same-frame mode, wherein the first display area is used for displaying the third media resource, the second display area is used for displaying a picture acquired by the third terminal in real time, and the target same-frame mode is any one of the multiple same-frame modes;
and a second determining subunit configured to determine, based on the target application, the media resource acquired in the first display area as the third media resource, which is a video or a picture, in response to a shooting operation.
In some embodiments, the target same-frame mode is a first same-frame mode, and the first same-frame mode is a same-frame mode that displays a plurality of mutually independent media resources in the first area;
the first determining subunit is configured to respond to the selection operation of the first same-frame mode, divide the first area into a first sub-area and a second sub-area which are mutually independent based on the first same-frame mode, wherein the first sub-area is used for displaying a picture acquired by the first terminal in real time, and the second sub-area is used for displaying a blank picture; determine the first sub-area as the first display area; and determine the second sub-area as the second display area.
In some embodiments, the target same-frame mode is a second same-frame mode, and the second same-frame mode is a same-frame mode that displays a plurality of mutually overlapping media resources in the first area;
the first determining subunit is configured to respond to the selection operation of the second same-frame mode, and determine, based on the second same-frame mode, the whole first area as the first display area and a partial area of the first area as the second display area, wherein the media resource displayed in the second display area is displayed above the media resource displayed in the first display area.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the method of interaction between objects described above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, cause the electronic device to perform the above method of interaction between objects.
According to another aspect of the embodiments of the present disclosure, there is provided a computer program product, including a computer program/instructions that, when executed by a processor, implement the above method of interaction between objects.
The embodiments of the present disclosure provide a method for interaction between objects. The first terminal receives, through a target application, a first media resource sent by the terminal logged in by a second object that has an association relationship with the first object, and can then display a preview component of the first media resource on its desktop. The first object logged in on the first terminal can preview the first media resource through the preview information displayed by the preview component, and by triggering the preview component can jump into the target application to view the first media resource, thereby realizing interaction between the objects. Compared with the related art, in which interaction between objects takes place only inside the target application, displaying a preview component of a media resource on the desktop increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram illustrating an environment in which a method of interaction between objects is implemented, according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of inter-object interaction according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating another method of inter-object interaction according to an example embodiment;
FIG. 4 is a schematic diagram of a preview component of a first media resource, according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating an interactive mode interface corresponding to a first interactive mode according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating an interactive mode interface corresponding to a second interactive mode according to an exemplary embodiment;
FIG. 7 is a schematic diagram of an interactive mode interface corresponding to a third interactive mode, according to an example embodiment;
FIG. 8 is a schematic diagram illustrating an interactive mode interface corresponding to a fifth interactive mode according to an exemplary embodiment;
FIG. 9 is a schematic diagram of an interactive mode interface corresponding to a sixth interactive mode, according to an example embodiment;
FIG. 10 is a schematic diagram of a preview component of a second media resource, according to an exemplary embodiment;
FIG. 11 is a schematic diagram of a video presentation interface and associated object interface, according to an example embodiment;
FIG. 12 is a schematic diagram illustrating an interactive mode interface corresponding to a frame interactive mode, according to an example embodiment;
FIG. 13 is a schematic diagram of an interactive mode interface corresponding to a special effects interactive mode, according to an exemplary embodiment;
FIG. 14 is a diagram illustrating an interactive mode interface corresponding to a message interactive mode, according to an example embodiment;
FIG. 15 is a schematic diagram illustrating an interactive mode interface corresponding to a test interactive mode, according to an example embodiment;
FIG. 16 is a block diagram illustrating an apparatus for interaction between objects, according to an example embodiment;
FIG. 17 is a block diagram illustrating another device for interaction between objects, according to an example embodiment;
FIG. 18 is a block diagram of a terminal according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present disclosure are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, the first media asset, the second media asset, and the preview component referred to in this disclosure are all acquired with sufficient authorization.
FIG. 1 is a schematic diagram illustrating an environment in which a method of interaction between objects is implemented, according to an exemplary embodiment. Referring to fig. 1, the implementation environment specifically includes: a first terminal 101, a second terminal 102, and a server 103.
The first terminal 101 and the second terminal 102 may each be at least one of a smartphone, a smart watch, a desktop computer, a laptop computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, and an MP4 (Moving Picture Experts Group Audio Layer IV) player. An application for acquiring media resources is installed and runs on both the first terminal 101 and the second terminal 102, and a user can log in to the application through the first terminal 101 or the second terminal 102 to obtain the services it provides. The application is associated with the server 103, which provides its background services. The first terminal 101 and the second terminal 102 may be connected to the server 103 through a wireless or wired network.
The first terminal 101 and the second terminal 102 may each refer broadly to one of a plurality of terminals; this embodiment is illustrated with only the first terminal 101 and the second terminal 102. Those skilled in the art will recognize that the number of terminals may be greater or smaller. For example, there may be only a few terminals, or tens, hundreds, or more; the number and device types of the terminals are not limited in the embodiments of the present disclosure.
The server 103 may be at least one of a single server, a plurality of servers, a cloud computing platform, and a virtualization center. Alternatively, the number of servers may be greater or smaller, which is not limited by the embodiments of the present disclosure. Of course, the server 103 may also include other functional servers to provide more comprehensive and diverse services.
FIG. 2 is a flowchart illustrating a method of interaction between objects according to an exemplary embodiment. As shown in fig. 2, the method is performed by a first terminal and includes the following steps.
In step S201, the first terminal receives, through a target application, a first media resource sent by the second terminal, where the target application is any application in the first terminal, the second terminal is a terminal logged in by a second object, and the second object is an object having an association relationship with the first object.
In the embodiment of the present disclosure, the first terminal is the terminal on which the first object is logged in, and the target application is any application in the first terminal. The target application may be a media application, a social application, a shopping application, or the like; the type of the target application is not limited by the embodiments of the present disclosure. The first object is an account logged in to the target application, through which various media resources such as videos, pictures, and text can be published or browsed in the target application. The second terminal is the terminal logged in by the second object, and the second object is an account logged in to the target application installed in the second terminal. The second object is an object having an association relationship with the first object. The association relationship may be a relationship in which the first object and the second object follow each other, or an intimate relationship between the first object and the second object. The intimate relationship includes relationships set in the target application, such as friend, family, lover, and classmate relationships. Since the second terminal can send the first media resource to the first terminal through the target application, the first terminal can receive, through the target application, the first media resource sent by the second terminal. The first media resource may be a media resource generated in the target application, or a media resource that the target application obtains from the media resources stored locally on the second terminal; it may be a video, a picture, text, or the like. The source and resource type of the first media resource are not limited by the embodiments of the present disclosure.
In step S202, the first terminal displays a preview component of the first media resource on a desktop of the first terminal, where the preview component is used to display preview information of the first media resource.
In the embodiment of the present disclosure, based on the received first media resource, the first terminal can display a preview component of the first media resource on the desktop. The preview component is used to display preview information of the first media resource. If the first media resource is a picture, the preview information may be a thumbnail of the picture. If the first media resource is a video, the preview information may be a thumbnail of the first video frame of the video, or a thumbnail animation formed from several consecutive video frames of the video. If the first media resource is text, the preview information may be an abbreviated display of the text.
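To make this concrete, the following is a minimal Kotlin sketch of such a preview component as an Android home-screen widget, using `AppWidgetProvider` and `RemoteViews`. The patent does not prescribe a platform; the `MediaResource` model, the `PreviewStore` cache, the `widget_preview` layout, and the view IDs are all hypothetical names invented for illustration.

```kotlin
import android.appwidget.AppWidgetManager
import android.appwidget.AppWidgetProvider
import android.content.Context
import android.graphics.Bitmap
import android.widget.RemoteViews

// Assumed data model mirroring the resource types named in the text.
sealed interface MediaResource {
    data class Picture(val thumbnail: Bitmap) : MediaResource
    data class Video(val firstFrameThumb: Bitmap) : MediaResource
    data class Text(val body: String) : MediaResource
}

// Hypothetical local cache holding the most recently received resource.
object PreviewStore {
    var latest: MediaResource? = null
}

class MediaPreviewWidget : AppWidgetProvider() {
    override fun onUpdate(
        context: Context,
        appWidgetManager: AppWidgetManager,
        appWidgetIds: IntArray
    ) {
        val resource = PreviewStore.latest ?: return
        for (id in appWidgetIds) {
            // R.layout.widget_preview is an assumed layout containing an
            // ImageView (preview_image) and a TextView (preview_text).
            val views = RemoteViews(context.packageName, R.layout.widget_preview)
            when (resource) {
                is MediaResource.Picture ->  // thumbnail of the picture
                    views.setImageViewBitmap(R.id.preview_image, resource.thumbnail)
                is MediaResource.Video ->    // thumbnail of the first video frame
                    views.setImageViewBitmap(R.id.preview_image, resource.firstFrameThumb)
                is MediaResource.Text ->     // abbreviated display of the text
                    views.setTextViewText(R.id.preview_text, resource.body.take(40))
            }
            appWidgetManager.updateAppWidget(id, views)
        }
    }
}
```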
In step S203, in response to the triggering operation of the preview component, the first terminal displays the first media resource in the target application.
In the embodiment of the present disclosure, if the target application is already running, triggering the preview component of the first media resource on the desktop causes the first terminal to jump directly into the target application and display the first media resource. If the target application is not running, triggering the preview component causes the first terminal to start the target application first and then display the first media resource in the target application.
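Under the same Android assumption, the trigger operation can be realized by attaching a tap intent to the widget that deep-links into the target application; the `myapp://media/` URI scheme and the `rootViewId` parameter are invented for illustration. `FLAG_ACTIVITY_NEW_TASK` is what lets the system start the application first when it is not yet running.

```kotlin
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.widget.RemoteViews

// Tapping the preview opens the target application (starting it if needed)
// and routes to the screen that displays the first media resource.
fun bindOpenAction(
    context: Context,
    views: RemoteViews,
    rootViewId: Int,     // ID of the widget's root view in the assumed layout
    resourceId: String
) {
    val intent = Intent(Intent.ACTION_VIEW, Uri.parse("myapp://media/$resourceId"))
        .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    val pending = PendingIntent.getActivity(
        context, 0, intent,
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )
    views.setOnClickPendingIntent(rootViewId, pending)
}
```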
The embodiments of the present disclosure provide a method for interaction between objects. The first terminal receives, through a target application, a first media resource sent by the terminal logged in by a second object that has an association relationship with the first object, and can then display a preview component of the first media resource on its desktop. The first object logged in on the first terminal can preview the first media resource through the preview information displayed by the preview component, and by triggering the preview component can jump into the target application to view the first media resource, thereby realizing interaction between the objects. Compared with the related art, in which interaction between objects takes place only inside the target application, displaying a preview component of a media resource on the desktop increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
In some embodiments, the method further comprises:
determining a second media resource corresponding to the first media resource and a target interaction mode based on the target application, wherein the target interaction mode is an object interaction mode corresponding to the first media resource;
and sending the second media resource to the second terminal through the target application, where the second terminal displays a preview component of the second media resource on a desktop of the second terminal, and the preview component of the second media resource is used to display preview information of the second media resource.
In the embodiment of the present disclosure, the first terminal can generate the second media resource based on the first media resource and the object interaction mode corresponding to the first media resource, and return the second media resource to the second terminal, which displays a preview component of the second media resource on its desktop. This interaction mode can improve the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
In some embodiments, the target interaction mode is a first interaction mode, and the first interaction mode is an object interaction mode for interaction based on media resources photographed in real time;
determining, based on the target application, a second media asset corresponding to the first media asset and the target interaction mode, including:
responding to the target interaction mode as a first interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a first area which is used for displaying a picture acquired by a first terminal in real time;
and responding to the shooting operation, and determining the media resources acquired in the first area as second media resources based on the target application, wherein the second media resources are videos or pictures.
In the embodiment of the present disclosure, a first area for displaying the picture acquired by the first terminal in real time is displayed on the interaction mode interface, so that the first terminal can display the picture shot by the camera in the first area and determine the media resource acquired in the first area as the second media resource.
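As an illustration of the first area, here is a sketch that streams the live camera feed into a view, assuming the Android CameraX library (`androidx.camera`); the patent itself does not name a camera API, and the choice of the front camera is arbitrary.

```kotlin
import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

// Display the picture acquired by the terminal in real time inside the
// "first area", modeled here as a CameraX PreviewView.
fun showLiveFrameInFirstArea(
    context: Context,
    owner: LifecycleOwner,
    firstArea: PreviewView
) {
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val provider = providerFuture.get()
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(firstArea.surfaceProvider)
        }
        provider.unbindAll()  // release any previous use case before rebinding
        provider.bindToLifecycle(owner, CameraSelector.DEFAULT_FRONT_CAMERA, preview)
    }, ContextCompat.getMainExecutor(context))
}
```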
In some embodiments, the target interaction mode is a second interaction mode, the second interaction mode is an object interaction mode for interaction based on media resources shot in a first same-frame mode, and the first same-frame mode is a same-frame mode for displaying a plurality of mutually independent media resources in an interaction mode interface of the target application;
determining, based on the target application, a second media asset corresponding to the first media asset and the target interaction mode, including:
responding to the target interaction mode as a second interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a first area and a second area, the first area is used for displaying a first media resource, and the second area is used for displaying a picture acquired by the first terminal in real time;
and responding to shooting operation, and based on the target application, splicing the first media resource and the media resource acquired in the second area to obtain a second media resource, wherein the second media resource is a video or a picture.
In the embodiment of the present disclosure, by splicing the first media resource with the media resource acquired in the second area, the first terminal obtains a second media resource that simultaneously displays the media resources shot by the first terminal and by the second terminal. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
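A minimal sketch of the splicing step for still pictures, assuming Android's `Bitmap`/`Canvas` API: the received resource and the locally captured frame are drawn into independent, side-by-side regions. Video splicing would apply the same layout per frame.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Compose the received media resource and the locally captured frame into
// one image with two mutually independent regions (first same-frame mode).
fun spliceSideBySide(received: Bitmap, captured: Bitmap): Bitmap {
    val height = maxOf(received.height, captured.height)
    val result = Bitmap.createBitmap(
        received.width + captured.width, height, Bitmap.Config.ARGB_8888
    )
    val canvas = Canvas(result)
    canvas.drawBitmap(received, 0f, 0f, null)                        // left region
    canvas.drawBitmap(captured, received.width.toFloat(), 0f, null)  // right region
    return result
}
```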
In some embodiments, the target interaction mode is a third interaction mode, the third interaction mode is an object interaction mode for interaction based on media resources shot by a second same-frame mode, and the second same-frame mode is a same-frame mode for displaying a plurality of media resources overlapped with each other in an interaction mode interface of the target application;
determining, based on the target application, a second media asset corresponding to the first media asset and the target interaction mode, including:
responding to the target interaction mode as a third interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a first area, all areas of the first area are used for displaying first media resources, and partial areas of the first area are used for displaying pictures acquired by the first terminal in real time;
and responding to shooting operation, and determining a second media resource based on the first media resource and the media resource acquired in a partial area of the first area, wherein the second media resource is a video or a picture.
In the embodiment of the present disclosure, by superimposing the media resource shot by the first terminal on top of the first media resource displayed in the first area, the first terminal obtains a second media resource that simultaneously displays the media resources shot by the first terminal and by the second terminal. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
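The overlapping variant can be sketched the same way: the received resource fills the whole area and the captured frame is drawn above part of it. The top-right placement below is an arbitrary choice, not something the text prescribes.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect

// Overlay the locally captured frame on top of the received media resource
// (second same-frame mode): the first media resource fills the whole first
// area, and the live frame covers only a partial area of it.
fun overlayPictureInPicture(received: Bitmap, captured: Bitmap): Bitmap {
    val result = received.copy(Bitmap.Config.ARGB_8888, true)
    val canvas = Canvas(result)
    val inset = Rect(result.width / 2, 0, result.width, result.height / 2)
    canvas.drawBitmap(captured, null, inset, null)  // scale into the partial area
    return result
}
```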
In some embodiments, the target interaction mode is a fourth interaction mode, and the fourth interaction mode is an object interaction mode for interaction based on media resources shot by any special effect template;
determining, based on the target application, a second media asset corresponding to the first media asset and the target interaction mode, including:
responding to the target interaction mode as a fourth interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a first area and a third area, the first area is used for displaying pictures acquired by the first terminal in real time, and the third area is used for displaying a plurality of special effect templates;
in response to a selection operation of a target special effect template, adding a special effect corresponding to the target special effect template in a picture displayed in a first area, wherein the target special effect template is any special effect template in a plurality of special effect templates;
and responding to shooting operation, determining the media resources acquired in the first area as second media resources based on the target application, wherein the second media resources are videos or pictures with special effects corresponding to the target special effect template.
In the embodiment of the present disclosure, by providing a plurality of special effect templates, the first terminal can add the special effects corresponding to different special effect templates to the media resource it shoots and displays in the first area. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
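As a stand-in for applying a selected special effect template, the sketch below applies a simple saturation filter to a captured frame using Android's `ColorMatrix`; real templates (stickers, beautification, animated effects) would be far richer, so this only illustrates the select-then-apply flow.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.ColorMatrix
import android.graphics.ColorMatrixColorFilter
import android.graphics.Paint

// Apply a hypothetical "effect template" (here just a saturation change)
// to the frame displayed in the first area.
fun applyEffectTemplate(frame: Bitmap, saturation: Float): Bitmap {
    val out = Bitmap.createBitmap(frame.width, frame.height, Bitmap.Config.ARGB_8888)
    val paint = Paint().apply {
        colorFilter = ColorMatrixColorFilter(
            ColorMatrix().apply { setSaturation(saturation) }
        )
    }
    Canvas(out).drawBitmap(frame, 0f, 0f, paint)
    return out
}
```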
In some embodiments, the target interaction mode is a fifth interaction mode, and the fifth interaction mode is an object interaction mode for interaction based on manually input media resources;
determining, based on the target application, a second media asset corresponding to the first media asset and the target interaction mode, including:
responding to the target interaction mode as a fifth interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a fourth area, the fourth area is used for displaying prompt information, and the prompt information is used for prompting the first object to input media resources in the fourth area;
in response to an input operation to the fourth region, a second media asset is determined based on the first media asset and the media asset input in the fourth region, the second media asset being text.
In the embodiment of the present disclosure, by displaying the fourth area for inputting a media resource, the first object can answer the second object's question in text form, or input blessings, greetings, and the like for the second object. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
In some embodiments, the target interaction mode is a sixth interaction mode, where the sixth interaction mode is an object interaction mode that performs interaction based on automatically input media resources;
determining, based on the target application, a second media asset corresponding to the first media asset and the target interaction mode, including:
responding to the target interaction mode as a sixth interaction mode, displaying an interaction mode interface of the target application, wherein a fifth area is displayed on the interaction mode interface, and the fifth area is used for displaying a first media resource, and the first media resource comprises a test question and a plurality of test options;
in response to a selection operation of the target test option, a second media resource is determined based on the first media resource and the target test option, the target test option being any one of the plurality of test options, the second media resource being text.
In the embodiment of the present disclosure, by displaying test questions in the fifth area, the second object can learn how well the first object knows the second object based on how the first object answers the test questions. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
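A small sketch of the sixth interaction mode's data flow: the first media resource carries a question and options, and selecting a target test option automatically produces the textual second media resource. The `Quiz` model and the reply format are invented for illustration.

```kotlin
// Assumed model of the test-question resource (sixth interaction mode).
data class Quiz(val question: String, val options: List<String>)

// Selecting a target test option automatically yields the text that is
// returned to the second terminal as the second media resource.
fun buildQuizReply(quiz: Quiz, selectedIndex: Int): String {
    require(selectedIndex in quiz.options.indices) { "option out of range" }
    return "${quiz.question} -> ${quiz.options[selectedIndex]}"
}

// e.g. buildQuizReply(Quiz("What is my favorite fruit?", listOf("banana", "apple")), 0)
// returns "What is my favorite fruit? -> banana"
```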
In some embodiments, the method further comprises:
responding to object interaction operation in the target application, displaying an interaction mode interface of the target application, wherein the interaction mode interface is displayed with a plurality of object interaction modes, and the object interaction operation is used for indicating the interaction between a first object and at least one third object, and the third object is an object with an association relation with the first object;
responding to the selection operation of the seventh interaction mode, and determining a third media resource corresponding to the seventh interaction mode based on the target application, wherein the seventh interaction mode is any one of a plurality of object interaction modes;
and sending the third media resource to at least one third terminal, wherein the third terminal is a terminal for logging in a third object, the third terminal is used for displaying a preview component of the third media resource on a desktop of the third terminal, and the preview component is used for displaying preview information of the third media resource.
The embodiments of the present disclosure provide a method for interaction between objects. Through an object interaction operation in the target application, a terminal can send a media resource acquired in any object interaction mode to the terminal logged in by each of at least one object it wants to interact with, so that the receiving terminal can display a preview component of the media resource on its desktop. Displaying preview components of media resources on the desktop increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
In some embodiments, the target interaction mode is a same-frame interaction mode, and the same-frame interaction mode is an object interaction mode for interaction based on media resources shot in any one of the same-frame modes;
in response to the selection operation of the seventh interaction mode, determining, based on the target application, a third media resource corresponding to the seventh interaction mode, including:
responding to the selection operation of the same-frame interaction mode, displaying a first area and a second area in an interaction mode interface, wherein the first area is used for displaying pictures acquired by a first terminal in real time, and the second area is used for displaying various same-frame modes;
responding to the selection operation of the target same-frame mode, and determining a first display area and a second display area in the first area based on the target same-frame mode, wherein the first display area is used for displaying the third media resource, the second display area is used for displaying a picture acquired by the third terminal in real time, and the target same-frame mode is any one of the multiple same-frame modes;
and responding to the shooting operation, and determining the media resources acquired in the first display area as third media resources based on the target application, wherein the third media resources are videos or pictures.
In the embodiment of the present disclosure, by providing multiple same-frame modes, the first terminal can display the media resources shot by the first terminal and by the third terminal in the first area in different same-frame layouts. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
In some embodiments, the target same-frame mode is a first same-frame mode, and the first same-frame mode is a same-frame mode that displays a plurality of mutually independent media resources in the first area;
in response to a selection operation of the target same-frame mode, determining a first display area and a second display area in the first area based on the target same-frame mode includes:
responding to the selection operation of the first same-frame mode, dividing the first area into a first sub-area and a second sub-area which are mutually independent based on the first same-frame mode, wherein the first sub-area is used for displaying a picture acquired by the first terminal in real time, and the second sub-area is used for displaying a blank picture;
determining the first sub-area as the first display area;
and determining the second sub-area as the second display area.
In the embodiment of the present disclosure, the first terminal determines one sub-area as the display area for the media resource shot by the first terminal and the other sub-area as the display area for the media resource shot by the third terminal, thereby realizing same-frame display of different media resources. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
In some embodiments, the target same-frame mode is a second same-frame mode, and the second same-frame mode is a same-frame mode that displays a plurality of mutually overlapping media resources in the first area;
in response to a selection operation of the target same-frame mode, determining a first display area and a second display area in the first area based on the target same-frame mode includes:
in response to the selection operation of the second same-frame mode, determining, based on the second same-frame mode, the whole first area as the first display area and a partial area of the first area as the second display area, wherein the media resource displayed in the second display area is displayed above the media resource displayed in the first display area.
In the embodiment of the present disclosure, the first terminal can realize same-frame display of different media resources by displaying the media resource shot by the third terminal above the media resource shot by the first terminal. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
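The two same-frame layouts above reduce to simple region geometry inside the first area. The following sketch, assuming Android's `Rect` class, chooses the split line and the overlay placement arbitrarily; the patent does not fix these proportions.

```kotlin
import android.graphics.Rect

// First same-frame mode: two mutually independent sub-areas side by side.
fun regionsForFirstSameFrameMode(area: Rect): Pair<Rect, Rect> {
    val mid = area.centerX()
    val first = Rect(area.left, area.top, mid, area.bottom)    // live local frame
    val second = Rect(mid, area.top, area.right, area.bottom)  // blank until filled
    return first to second
}

// Second same-frame mode: the whole area plus an overlaid partial area.
fun regionsForSecondSameFrameMode(area: Rect): Pair<Rect, Rect> {
    val first = Rect(area)  // first display area covers the whole first area
    val second = Rect(area.centerX(), area.top, area.right, area.centerY())
    return first to second
}
```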
The foregoing fig. 2 shows only the basic flow of the disclosure; the scheme provided in the disclosure is further described below based on a specific implementation. FIG. 3 is a flowchart illustrating a method of interaction between objects according to an exemplary embodiment. Referring to fig. 3, the method is performed by the first terminal and includes the following steps.
In step S301, the first terminal receives, through a target application, a first media resource sent by the second terminal, where the target application is any application in the first terminal.
In the embodiment of the present disclosure, the first terminal is the terminal on which the first object is logged in, and the target application is any application in the first terminal. The target application may be a media application, a social application, a shopping application, or the like; the type of the target application is not limited by the embodiments of the present disclosure. The first object is an account logged in to the target application, through which various media resources such as videos, pictures, and text can be published or browsed in the target application. The second terminal is the terminal logged in by the second object, and the second object is an account logged in to the target application installed in the second terminal. The second object is an object having an association relationship with the first object. The association relationship may be a relationship in which the first object and the second object follow each other, or an intimate relationship between the first object and the second object. The intimate relationship includes relationships set in the target application, such as friend, family, lover, and classmate relationships. Since the second terminal can send the first media resource to the first terminal through the target application, the first terminal can receive, through the target application, the first media resource sent by the second terminal. The first media resource may be a media resource generated in the target application, or a media resource that the target application obtains from the media resources stored locally on the second terminal; it may be a video, a picture, text, or the like. The source and resource type of the first media resource are not limited by the embodiments of the present disclosure.
In step S302, the first terminal displays a preview component of the first media resource on a desktop of the first terminal, where the preview component is used to display preview information of the first media resource.
In the embodiment of the present disclosure, based on the received first media resource, the first terminal can display a preview component of the first media resource on the desktop. The preview component is used to display preview information of the first media resource. If the first media resource is a picture, the preview information may be a thumbnail of the picture. If the first media resource is a video, the preview information may be a thumbnail of the first video frame of the video, or a thumbnail animation formed from several consecutive video frames of the video. If the first media resource is text, the preview information may be an abbreviated display of the text.
For example, FIG. 4 is a schematic diagram illustrating a preview component of a first media resource according to an exemplary embodiment. As shown in fig. 4, in the case that the object interaction mode corresponding to the first media resource is the shooting interaction mode, the same-frame interaction mode using the second same-frame mode, or the special effect interaction mode, the first terminal displays a preview component 401 on the desktop, and the preview component 401 displays preview information of the first media resource. In the case that the target interaction mode is the same-frame interaction mode using the first same-frame mode, the first terminal displays a preview component 402 or a preview component 403 on the desktop; the preview component 402 and the preview component 403 display preview information of the first media resource together with prompt information, such as "you are in the picture", for prompting the first object to take a picture. In the case that the target interaction mode is the message interaction mode, the first terminal displays a preview component 404 on the desktop, and the preview component 404 displays preview information of the first media resource and the nickname "XX" of the first object. In the case that the target interaction mode is the test interaction mode, the first terminal displays a preview component 405 on the desktop, and the preview component 405 displays preview information of the first media resource and the nickname "XXX" of the second object. The shooting interaction mode is an object interaction mode for interaction based on media resources shot in real time. The same-frame interaction mode is an object interaction mode for interaction based on media resources shot in any one of the same-frame modes; the first same-frame mode is a same-frame mode that displays a plurality of mutually independent media resources in a given area, and the second same-frame mode is a same-frame mode that displays a plurality of mutually overlapping media resources in a given area. The special effect interaction mode is an object interaction mode for interaction based on media resources shot with any special effect template. The message interaction mode is an object interaction mode for interaction based on manually input media resources. The test interaction mode is an object interaction mode for interaction based on automatically input media resources.
In step S303, in response to the triggering operation of the preview component, the first terminal displays the first media asset in the target application.
In the embodiment of the disclosure, if the target application is already running, triggering the preview component of the first media resource on the desktop causes the first terminal to jump directly into the target application and display the first media resource. If the target application is not running, triggering the preview component causes the first terminal to start the target application first, and then display the first media resource in the target application.
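The jump-or-start logic above can be sketched as follows; AppLauncher and its methods are stand-ins for platform calls that the disclosure does not specify.

```kotlin
// Minimal sketch of the preview-component tap handling described above.
// isAppRunning / startApp / openResource are illustrative stand-ins,
// not real platform APIs.
interface AppLauncher {
    fun isAppRunning(packageName: String): Boolean
    fun startApp(packageName: String)
    fun openResource(packageName: String, resourceId: String)
}

fun onPreviewComponentTapped(launcher: AppLauncher, pkg: String, resourceId: String) {
    if (!launcher.isAppRunning(pkg)) {
        launcher.startApp(pkg)             // start the target application first
    }
    launcher.openResource(pkg, resourceId) // then jump to the first media resource
}
```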
In step S304, the first terminal determines, based on the target application, a second media resource corresponding to the first media resource and the target interaction mode, where the target interaction mode is an object interaction mode corresponding to the first media resource.
In the embodiment of the disclosure, after receiving the first media resource, the first terminal may display an interaction mode interface corresponding to the target interaction mode in the target application, and determine, in the interaction mode interface, a second media resource corresponding to the first media resource and the target interaction mode. Wherein the second media asset is of the same asset type as the first media asset.
It should be noted that the interaction modes in the following scheme correspond to those in the above scheme as follows: the first interaction mode is the shooting interaction mode; the second interaction mode is the same-frame interaction mode in which the target same-frame mode is the first same-frame mode; the third interaction mode is the same-frame interaction mode in which the target same-frame mode is the second same-frame mode; the fourth interaction mode is the special effect interaction mode; the fifth interaction mode is the message interaction mode; and the sixth interaction mode is the test interaction mode.
In some embodiments, the target interaction mode is a first interaction mode, and the first terminal determines, based on the target application, the second media resource corresponding to the first media resource and the first interaction mode. Specifically, in response to the target interaction mode being the first interaction mode, the first terminal displays an interaction mode interface of the target application, the interaction mode interface displaying a first area used for displaying the picture acquired by the first terminal in real time; in response to a shooting operation, the first terminal determines, based on the target application, the media resource acquired in the first area as the second media resource, the second media resource being a video or a picture. The first interaction mode is an object interaction mode in which interaction is carried out based on media resources shot in real time; therefore, when the object interaction mode corresponding to the first media resource is the first interaction mode, the first terminal displays the interaction mode interface corresponding to the first interaction mode in the target application. The picture acquired by the first terminal in real time may be acquired through a front camera of the first terminal, through a rear camera of the first terminal, or through an acquisition device outside the first terminal. In response to the shooting operation, the first terminal displays the picture shot by the camera in the first area, and the second media resource is the media resource acquired by the first terminal in the first area.
For example, fig. 5 is a schematic diagram illustrating an interaction mode interface corresponding to the first interaction mode according to an exemplary embodiment. As shown in fig. 5, the interaction mode interface 501 displays a first area 502 and a shooting control 503. The first area 502 is used for displaying the picture acquired by the first terminal in real time. By triggering the shooting control 503, the first terminal displays the media resource it is currently shooting in the first area 502. When the user presses the shooting control 503 for a long time, the media resource is a video; when the user taps the shooting control 503, the media resource is a picture.
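A minimal sketch of the long-press/tap semantics of the shooting control, assuming a simple frame-based capture model; Gesture, CapturedMedia, and onShootingControl are illustrative names.

```kotlin
// Sketch of the Fig. 5 capture-control semantics: a long press records
// a video, a short tap takes a picture.
enum class Gesture { TAP, LONG_PRESS }

sealed class CapturedMedia {
    data class Picture(val frame: ByteArray) : CapturedMedia()
    data class Video(val frames: List<ByteArray>) : CapturedMedia()
}

fun onShootingControl(gesture: Gesture, liveFrames: () -> List<ByteArray>): CapturedMedia =
    when (gesture) {
        Gesture.TAP -> CapturedMedia.Picture(liveFrames().first())
        Gesture.LONG_PRESS -> CapturedMedia.Video(liveFrames())
    }
```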
In some embodiments, the target interaction mode is a second interaction mode, and the first terminal determines, based on the target application, the second media resource corresponding to the first media resource and the second interaction mode. Specifically, in response to the target interaction mode being the second interaction mode, the first terminal displays an interaction mode interface of the target application, the interaction mode interface displaying a first area used for displaying the first media resource and a second area used for displaying the picture acquired by the first terminal in real time. In response to a shooting operation, the first terminal splices, based on the target application, the first media resource with the media resource acquired in the second area to obtain the second media resource, the second media resource being a video or a picture. The second interaction mode is an object interaction mode in which interaction is carried out based on media resources shot in the first same-frame mode, the first same-frame mode being a same-frame mode in which a plurality of mutually independent media resources are displayed in the interaction mode interface of the target application. When the object interaction mode corresponding to the first media resource is the second interaction mode, the display area of the first media resource and the display area of the media resource shot by the first terminal are therefore independent of each other: the interaction mode interface displays a first area and a second area that are independent of each other. By splicing the first media resource with the media resource acquired in the second area, the first terminal obtains a second media resource that simultaneously displays the media resource shot by the first terminal and the media resource shot by the second terminal, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
For example, fig. 6 is a schematic diagram illustrating an interaction mode interface corresponding to the second interaction mode according to an exemplary embodiment. As shown in fig. 6, the interaction mode interface 601 displays a first area 602, a second area 603, and a shooting control 604. The first area 602 displays the first media resource, and the second area 603 displays the picture shot by the first terminal in real time. By triggering the shooting control 604, the first terminal displays the media resource currently shot by its camera in the second area 603, and then splices the first media resource in the first area 602 with the media resource acquired in the second area 603 to obtain the second media resource.
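Assuming for simplicity that both resources are pictures, the splicing step might look like the following JVM Kotlin sketch using java.awt.image.BufferedImage; a real implementation would also handle video and scaling.

```kotlin
import java.awt.image.BufferedImage

// Illustrative splice for the second interaction mode: the received
// resource and the newly captured picture are placed side by side in
// one output image.
fun spliceSideBySide(first: BufferedImage, captured: BufferedImage): BufferedImage {
    val out = BufferedImage(
        first.width + captured.width,
        maxOf(first.height, captured.height),
        BufferedImage.TYPE_INT_ARGB
    )
    val g = out.createGraphics()
    g.drawImage(first, 0, 0, null)              // first area: received resource
    g.drawImage(captured, first.width, 0, null) // second area: captured resource
    g.dispose()
    return out
}
```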
In some embodiments, the target interaction mode is a third interaction mode, and the first terminal determines, based on the target application, the second media resource corresponding to the first media resource and the third interaction mode. Specifically, in response to the target interaction mode being the third interaction mode, the first terminal displays an interaction mode interface of the target application, the interaction mode interface displaying a first area whose entire area is used for displaying the first media resource and whose partial area is used for displaying the picture acquired by the first terminal in real time. In response to a shooting operation, the first terminal determines the second media resource based on the first media resource and the media resource acquired in the partial area of the first area, the second media resource being a video or a picture. The third interaction mode is an object interaction mode in which interaction is carried out based on media resources shot in the second same-frame mode, the second same-frame mode being a same-frame mode in which a plurality of mutually overlapping media resources are displayed in the interaction mode interface of the target application. When the object interaction mode corresponding to the first media resource is the third interaction mode, the display area of the first media resource and the display area of the media resource shot by the first terminal overlap each other: the first media resource is displayed in the entire first area, and the media resource shot by the first terminal is displayed in a partial area of the first area, that is, above the first media resource. By superimposing the media resource shot by the first terminal on the first media resource displayed in the first area, the first terminal obtains a second media resource that simultaneously displays the media resource shot by the first terminal and the media resource shot by the second terminal, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
For example, fig. 7 is a schematic diagram illustrating an interaction mode interface corresponding to the third interaction mode according to an exemplary embodiment. As shown in fig. 7, the interaction mode interface 701 displays a first area 702 and a shooting control 703. The entire first area 702 displays the first media resource, and a partial area of the first area 702 displays the media resource shot by the first terminal. By triggering the shooting control 703, the first terminal performs matting (cutout) processing on the media resource it is currently shooting and displays the processed media resource above the first media resource. The user can drag the processed media resource to change its display area.
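Under the same picture-only assumption as the previous sketch, the overlay of the third interaction mode can be sketched as follows; the drag position (x, y) is supplied by the user interaction described above.

```kotlin
import java.awt.image.BufferedImage

// Illustrative overlay for the third interaction mode: the cutout of the
// newly captured subject is drawn above the first media resource at a
// position the user can drag to.
fun overlay(base: BufferedImage, cutout: BufferedImage, x: Int, y: Int): BufferedImage {
    val out = BufferedImage(base.width, base.height, BufferedImage.TYPE_INT_ARGB)
    val g = out.createGraphics()
    g.drawImage(base, 0, 0, null)   // entire first area: the first media resource
    g.drawImage(cutout, x, y, null) // partial area: the matted capture, alpha-blended
    g.dispose()
    return out
}
```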
In some embodiments, the target interaction mode is a fourth interaction mode, and the first terminal determines, based on the target application, the second media resource corresponding to the first media resource and the fourth interaction mode. Specifically, in response to the target interaction mode being the fourth interaction mode, the first terminal displays an interaction mode interface of the target application, the interaction mode interface displaying a first area used for displaying the picture acquired by the first terminal in real time and a third area used for displaying a plurality of special effect templates. In response to a selection operation of a target special effect template, the first terminal adds the special effect corresponding to the target special effect template to the picture displayed in the first area, the target special effect template being any one of the plurality of special effect templates. In response to a shooting operation, the first terminal determines, based on the target application, the media resource acquired in the first area as the second media resource, the second media resource being a video or a picture carrying the special effect corresponding to the target special effect template. The fourth interaction mode is an object interaction mode in which interaction is carried out based on media resources shot with any special effect template. By selecting a target special effect template from the plurality of special effect templates in the third area, the first terminal adds, based on the target special effect template, the corresponding special effect to the real-time picture displayed in the first area. Providing a plurality of special effect templates allows the first terminal to add the special effects of different templates to the media resources it shoots, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
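As an illustration of what a special effect template might do, the sketch below treats each template as a per-pixel transform applied to the live frame before capture; the concrete effects (grayscale, invert) are placeholders, since the disclosure does not specify the templates.

```kotlin
import java.awt.image.BufferedImage

// Hypothetical effect-template dispatch. GRAYSCALE and INVERT are
// placeholder effects, not templates from the disclosure.
enum class EffectTemplate { NONE, GRAYSCALE, INVERT }

fun applyEffect(frame: BufferedImage, template: EffectTemplate): BufferedImage {
    val out = BufferedImage(frame.width, frame.height, BufferedImage.TYPE_INT_ARGB)
    for (y in 0 until frame.height) {
        for (x in 0 until frame.width) {
            val argb = frame.getRGB(x, y)
            val a = argb ushr 24 and 0xFF
            val r = argb ushr 16 and 0xFF
            val g = argb ushr 8 and 0xFF
            val b = argb and 0xFF
            val (nr, ng, nb) = when (template) {
                EffectTemplate.NONE -> Triple(r, g, b)
                EffectTemplate.GRAYSCALE -> ((r + g + b) / 3).let { Triple(it, it, it) }
                EffectTemplate.INVERT -> Triple(255 - r, 255 - g, 255 - b)
            }
            out.setRGB(x, y, (a shl 24) or (nr shl 16) or (ng shl 8) or nb)
        }
    }
    return out
}
```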
In some embodiments, the target interaction mode is a fifth interaction mode, and the first terminal determines, based on the target application, the second media resource corresponding to the first media resource and the fifth interaction mode. Specifically, in response to the target interaction mode being the fifth interaction mode, the first terminal displays an interaction mode interface of the target application, the interaction mode interface displaying a fourth area used for displaying prompt information, the prompt information prompting the first object to input a media resource in the fourth area. In response to an input operation in the fourth area, the first terminal determines the second media resource based on the first media resource and the media resource input in the fourth area, the second media resource being a text. The fifth interaction mode is an object interaction mode in which interaction is carried out based on manually input media resources. Based on the input operation of the first object in the fourth area, the first terminal determines the media resource displayed in the fourth area as the second media resource. By displaying a fourth area for inputting media resources, the first object can answer the question of the second object in text form in the fourth area, or input a blessing, a greeting, or the like for the second object, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
For example, fig. 8 is a schematic diagram illustrating an interaction mode interface corresponding to the fifth interaction mode according to an exemplary embodiment. As shown in fig. 8, the interaction mode interface 801 displays a fourth area 802, and the fourth area 802 displays the first media resource, the question "Would you accept a long-distance relationship?". The user can enter an answer to the question in the fourth area 802 and then send the answer to the second terminal by triggering the answer submission control "Submit answer".
In some embodiments, the target interaction mode is a sixth interaction mode, and the first terminal determines, based on the target application, the second media resource corresponding to the first media resource and the sixth interaction mode. Specifically, in response to the target interaction mode being the sixth interaction mode, the first terminal displays an interaction mode interface of the target application, the interaction mode interface displaying a fifth area used for displaying the first media resource, where the first media resource includes a test question and a plurality of test options. In response to a selection operation of a target test option, the first terminal determines the second media resource based on the first media resource and the target test option, the target test option being any one of the plurality of test options, and the second media resource being a text. The sixth interaction mode is an object interaction mode in which interaction is carried out based on automatically input media resources. The first object selects, from the plurality of test options, the option it considers correct for the test question; after the selection is completed, the first terminal can mark the correct option among the plurality of test options in the fifth area, and determines the second media resource based on the first media resource and the selection of the first object. By displaying the test question in the fifth area, the second object can learn, from the first object's answers, how well the first object knows the second object, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
For example, fig. 9 is a schematic diagram illustrating an interaction mode interface corresponding to the sixth interaction mode according to an exemplary embodiment. As shown in fig. 9, the interaction mode interface 901 displays a fifth area 902. The fifth area 902 displays the first media resource, which includes the test question "What kind of fruit does XXX prefer?" and the test options "apple" and "banana". Based on its knowledge of the second object, the first object can select the option it considers correct, for example "banana". By triggering the answer submission control "Submit answer", the first object sends its answer to the second terminal. After the first object completes the selection, the first terminal marks the correct option among the test options in the fifth area 902.
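A minimal model of the sixth-mode data flow, under the assumption that the second media resource is a short text summarizing the selection; TestQuestion and buildSecondMediaResource are illustrative names.

```kotlin
// Sketch: the first media resource carries a test question plus options;
// the first object's selection is turned into the text second media resource.
data class TestQuestion(val question: String, val options: List<String>, val correctIndex: Int)

fun buildSecondMediaResource(q: TestQuestion, selectedIndex: Int): String {
    val verdict = if (selectedIndex == q.correctIndex) "correct" else "wrong"
    return "Q: ${q.question} | answered: ${q.options[selectedIndex]} ($verdict)"
}

fun main() {
    val q = TestQuestion("What kind of fruit does XXX prefer?", listOf("apple", "banana"), correctIndex = 1)
    println(buildSecondMediaResource(q, selectedIndex = 1)) // answered: banana (correct)
}
```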
In step S305, the first terminal sends the second media resource to the second terminal, and the second terminal displays a preview component of the second media resource on a desktop of the second terminal, where the preview component of the second media resource is used to display preview information of the second media resource.
In the embodiment of the disclosure, the first terminal can send the second media resource corresponding to the first media resource and the target interaction mode to the second terminal; the second terminal receives the second media resource through the target application and then displays a preview component of the second media resource on its desktop. The preview component is used to display preview information of the second media resource. If the second media resource is a picture, the preview information may be a thumbnail of the picture. If the second media resource is a video, the preview information may be a thumbnail of the first video frame of the video, or an animated thumbnail formed from several consecutive video frames of the video. If the second media resource is a text, the preview information may be a scaled-down rendering of the text.
For example, FIG. 10 is a schematic diagram illustrating a preview component of a second media resource according to an exemplary embodiment. As shown in fig. 10, when the target interaction mode is the first interaction mode or the fourth interaction mode, the second terminal displays a preview component 1001 on the desktop. When the target interaction mode is the second interaction mode, the second terminal displays a preview component 1003 or a preview component 1004 on the desktop. When the target interaction mode is the third interaction mode, the second terminal displays a preview component 1002 on the desktop. When the target interaction mode is the fifth interaction mode, the second terminal displays a preview component 1005 on the desktop; the preview component 1005 displays preview information of the second media resource and the prompt "XX answered your question". When the target interaction mode is the sixth interaction mode, the second terminal displays a preview component 1006 on the desktop; the preview component 1006 displays preview information of the second media resource and the prompt "XXX made a selection".
In some embodiments, the first terminal may be capable of actively initiating an interaction to send media resources in the target application to the third terminal. For a specific process, see the following steps (1) - (3).
(1) In response to an object interaction operation in the target application, the first terminal displays an interaction mode interface of the target application, the interaction mode interface displays a plurality of object interaction modes, and the object interaction operation is used for indicating the interaction between the first object and at least one third object, wherein the third object is an object with an association relation with the first object.
In the embodiment of the disclosure, in response to an object interaction operation in the target application, the first terminal can display an interaction mode interface in the target application. The interaction mode interface displays a plurality of object interaction modes, and the first object can interact with at least one third object based on any of them. The third object is an object having an association relationship with the first object. The association relationship may be a mutual-follow relationship between the third object and the first object, or an intimate relationship between them; an intimate relationship is a relationship set in the target application, such as a friend, family, lover, or classmate relationship.
In some embodiments, the video presentation interface of the first object displays an association relationship entry, and in response to a trigger operation on the entry, the first terminal can display, in an associated object interface, at least one third object having an association relationship with the first object. The video presentation interface displays the association relationship entry, video entries of a plurality of videos published by the first object, and detailed information of the first object. The detailed information includes the number of followers of the first object, the number of videos published by the first object, the number of likes on the videos published by the first object, and the like.
For example, FIG. 11 is a schematic diagram of a video presentation interface and an associated object interface according to an exemplary embodiment. As shown in fig. 11, the video presentation interface 1101 displays a plurality of association relationship entries 1102, including "sister", "brother", and "classmate". The video presentation interface also displays a relationship creation control 1103, which is used to create an association relationship between the first object and another object. Each relationship entry represents one relationship; by triggering a relationship entry, the user can display the associated object interface 1104. The associated object interface 1104 displays a plurality of third objects having a sister relationship with the first object, and also displays the relationship creation control 1103, through which the user can create a sister relationship between the first object and another object. The associated object interface 1104 further displays an interaction control (labeled "do me" in the figure) and a preview entry 1105 for the intimacy circle. By triggering the interaction control, the user can display the interaction mode interface of the target application; by triggering the preview entry 1105, the user can display the intimacy circle, which displays the media resources published by third objects having an association relationship with the first object.
(2) In response to a selection operation of a seventh interaction mode, the first terminal determines, based on the target application, a third media resource corresponding to the seventh interaction mode, where the seventh interaction mode is any one of the plurality of object interaction modes.
In an embodiment of the present disclosure, the interaction mode interface includes a plurality of object interaction modes, and the seventh interaction mode is any one of the plurality of object interaction modes. In response to the selection operation of the seventh interaction mode, the first terminal can determine a third media resource corresponding to the seventh interaction mode based on the target application. The third media resource may be a media resource generated in the target application, or may be a media resource acquired by the target application from a media resource locally stored in the first terminal. The third media asset may be video, picture, text, etc., and the embodiments of the present disclosure do not limit the source and type of the third media asset.
In some embodiments, the seventh interaction mode is the shooting interaction mode, and the first terminal determines, based on the target application, the third media resource corresponding to the shooting interaction mode. Specifically, in response to a selection operation of the shooting interaction mode, the first terminal displays, in the interaction mode interface, a first area used for displaying the picture acquired by the first terminal in real time; in response to a shooting operation, the first terminal determines, based on the target application, the media resource acquired in the first area as the third media resource, the third media resource being a video or a picture. The shooting interaction mode is an object interaction mode in which interaction is carried out based on media resources shot in real time. The picture acquired by the first terminal in real time may be acquired through a front camera of the first terminal, through a rear camera of the first terminal, or through an acquisition device outside the first terminal. In response to the shooting operation, the first terminal displays the picture shot by the camera in the first area, and the third media resource is the media resource acquired by the first terminal in the first area.
In some embodiments, the seventh interaction mode is the same-frame interaction mode, and the first terminal determines, based on the target application, the third media resource corresponding to the same-frame interaction mode. Specifically, in response to a selection operation of the same-frame interaction mode, the first terminal displays, in the interaction mode interface, a first area used for displaying the picture acquired by the first terminal in real time and a second area used for displaying a plurality of same-frame modes. In response to a selection operation of a target same-frame mode, the first terminal determines, based on the target same-frame mode, a first display area and a second display area in the first area, where the first display area is used for displaying the third media resource, the second display area is used for displaying the picture acquired by the third terminal in real time, and the target same-frame mode is any one of the plurality of same-frame modes. In response to a shooting operation, the first terminal determines, based on the target application, the media resource acquired in the first display area as the third media resource, the third media resource being a video or a picture. The same-frame interaction mode is an object interaction mode in which interaction is carried out based on media resources shot in any one of the same-frame modes. By providing a plurality of same-frame modes, the first terminal can display the media resources shot by the first terminal and the third terminal in the first area in different layouts, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
In some embodiments, the target same-frame mode is either a first same-frame mode or a second same-frame mode. Accordingly, the first terminal determines the first display area and the second display area as described in case one and case two below.
Case one: in response to a selection operation of the first same-frame mode, the first terminal divides the first area, based on the first same-frame mode, into a first sub-area and a second sub-area that are independent of each other, where the first sub-area is used for displaying the picture acquired by the first terminal in real time and the second sub-area is used for displaying a blank picture; the first terminal determines the first sub-area as the first display area and the second sub-area as the second display area. The first same-frame mode is a same-frame mode in which a plurality of mutually independent media resources are displayed in the first area. When the target same-frame mode is the first same-frame mode, the display area of the media resource shot by the third terminal and the display area of the media resource shot by the first terminal are independent of each other in the first area. Therefore, the first terminal divides the first area based on the first same-frame mode into a first sub-area and a second sub-area that are independent and do not overlap. While the first terminal is shooting, the first sub-area displays the picture acquired by the first terminal in real time, and the second sub-area displays a blank picture. The first terminal determines one sub-area as the display area of the media resource shot by the first terminal and the other sub-area as the display area of the media resource shot by the third terminal, thereby realizing same-frame display of different media resources. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
Case two: in response to a selection operation of the second same-frame mode, the first terminal determines, based on the second same-frame mode, the entire first area as the first display area and a partial area of the first area as the second display area, and displays the media resource in the second display area above the media resource in the first display area. The second same-frame mode is a same-frame mode in which a plurality of mutually overlapping media resources are displayed in the first area. When the target same-frame mode is the second same-frame mode, the display area of the media resource shot by the third terminal and the display area of the media resource shot by the first terminal overlap each other in the first area: the media resource shot by the first terminal is displayed in the entire first area, and the media resource shot later by the third terminal is displayed in a partial area of the first area. That is, in the second same-frame mode, the first terminal realizes same-frame display of different media resources by displaying the media resource shot by the third terminal above the media resource shot by the first terminal. This increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
For example, fig. 12 is a schematic diagram illustrating an interaction mode interface corresponding to the same-frame interaction mode according to an exemplary embodiment. As shown in fig. 12, the interaction mode interface 1201 displays a first area 1202 and a second area 1203. A plurality of same-frame modes are displayed in the second area 1203, including a first same-frame mode 1204 and a second same-frame mode 1205. When the user selects the first same-frame mode 1204, the first terminal divides the first area 1202 into a first sub-area 1206 and a second sub-area 1207. The first sub-area 1206 displays the picture acquired by the first terminal in real time, and the second sub-area 1207 displays a blank picture. The second sub-area 1207 also displays prompt information, such as "this half is reserved for your friend", which prompts the first object that the second sub-area 1207 is reserved for the third object. When the user selects the second same-frame mode 1205, the first terminal displays the media resource shot by the first terminal, that is, the third media resource, in the entire first area 1202; after the first terminal sends the third media resource to the third terminal, the third terminal can display its own shot media resource above the third media resource. The interaction mode interface 1201 also displays a shooting control 1208; by triggering the shooting control 1208, the user can display the media resource currently shot by the camera of the first terminal in the first area 1202. When the user presses the shooting control 1208 for a long time, the media resource is a video; when the user taps the shooting control 1208, the media resource is a picture.
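The two layouts of Fig. 12 can be sketched as a region computation; the half-and-half split and the bottom-right overlay rectangle are assumptions, since the disclosure leaves the exact geometry open.

```kotlin
// Illustrative layout computation for the two same-frame modes:
// the first mode splits the first area into two independent halves,
// the second mode keeps the full area and reserves an overlay rectangle.
data class Region(val x: Int, val y: Int, val width: Int, val height: Int)
enum class CoFrameMode { FIRST, SECOND }

fun coFrameRegions(area: Region, mode: CoFrameMode): Pair<Region, Region> = when (mode) {
    // Case one: first sub-area (left half) for the first terminal's capture,
    // second sub-area (right half) left blank for the third object.
    CoFrameMode.FIRST -> {
        val half = area.width / 2
        Region(area.x, area.y, half, area.height) to
            Region(area.x + half, area.y, area.width - half, area.height)
    }
    // Case two: whole area for the first capture, plus a smaller rectangle
    // (here: bottom-right quarter, an assumption) overlaid on top of it.
    CoFrameMode.SECOND ->
        area to Region(
            area.x + area.width / 2, area.y + area.height / 2,
            area.width / 2, area.height / 2
        )
}
```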
In some embodiments, the seventh interaction mode is the special effect interaction mode, and the first terminal determines, based on the target application, the third media resource corresponding to the special effect interaction mode. Specifically, in response to a selection operation of the special effect interaction mode, the first terminal displays, in the interaction mode interface, a first area used for displaying the picture acquired by the first terminal in real time and a third area used for displaying a plurality of special effect templates. In response to a selection operation of a target special effect template, the first terminal adds the special effect corresponding to the target special effect template to the picture displayed in the first area, the target special effect template being any one of the plurality of special effect templates. In response to a shooting operation, the first terminal determines, based on the target application, the media resource acquired in the first area as the third media resource, the third media resource being a video or a picture carrying the special effect corresponding to the target special effect template. The special effect interaction mode is an object interaction mode in which interaction is carried out based on media resources shot with any special effect template. Providing a plurality of special effect templates allows the first terminal to add the special effects of different templates to the media resources it shoots, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
For example, fig. 13 is a schematic diagram illustrating an interaction mode interface corresponding to the special effect interaction mode according to an exemplary embodiment. As shown in fig. 13, the interaction mode interface 1301 displays a first area 1302 and a third area 1303. A plurality of special effect templates are displayed in the third area 1303. When the user selects a target special effect template from the plurality of special effect templates, the first terminal adds the corresponding special effect to the picture displayed in the first area 1302. The interaction mode interface 1301 also displays a shooting control 1304; by triggering the shooting control 1304, the user can display a media resource carrying the special effect of the target special effect template in the first area 1302. When the user presses the shooting control 1304 for a long time, the media resource is a video with the special effect; when the user taps the shooting control 1304, the media resource is a picture with the special effect.
In some embodiments, the seventh interaction mode is the message interaction mode, and the first terminal determines, based on the target application, the third media resource corresponding to the message interaction mode. Specifically, in response to a selection operation of the message interaction mode, the first terminal displays, in the interaction mode interface, a fourth area used for displaying prompt information, the prompt information prompting the first object to input a media resource in the fourth area. In response to an input operation in the fourth area, the first terminal determines, based on the target application, the media resource input in the fourth area as the third media resource, the third media resource being a text. The message interaction mode is an object interaction mode in which interaction is carried out based on manually input media resources. Based on the input operation of the first object in the fourth area, the first terminal determines the media resource displayed in the fourth area as the third media resource. By displaying a fourth area for inputting media resources, the first object can input a question for the third object in text form in the fourth area, or input a blessing, a greeting, or the like for the third object, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
For example, fig. 14 is a schematic diagram illustrating an interaction mode interface corresponding to the message interaction mode according to an exemplary embodiment. As shown in fig. 14, the interaction mode interface 1401 displays a fourth area 1402, a send control 1403, a first background control 1404, and a second background control 1405. The fourth area 1402 displays the prompt "Tap here and write your mood/question". The user can input text in the fourth area 1402 by triggering it; after the input is completed, the first terminal determines the text displayed in the fourth area 1402 as the third media resource, and the user can send the third media resource to the third terminal by triggering the send control 1403. By triggering the first background control 1404, the first terminal can change the background image to a user-defined background image; by triggering the second background control 1405, the first terminal can change it to a background image provided by the target application.
In some embodiments, the seventh interaction mode is the test interaction mode, and the first terminal determines, based on the target application, the third media resource corresponding to the test interaction mode. Specifically, in response to a selection operation of the test interaction mode, the first terminal displays, in the interaction mode interface, a fifth area used for displaying a first test question, where the first test question is any test question in a question bank associated with the target application and includes a test question and a plurality of test options. In response to a selection operation of a target test option, the first terminal determines the target test option as the correct option of the first test question, and determines the other test options among the plurality of test options as the wrong options of the first test question. In response to a confirmation operation of the first test question, the first terminal determines the first test question as the third media resource. The test interaction mode is an object interaction mode in which interaction is carried out based on automatically input media resources. The fifth area displays a test question randomly selected by the first terminal from the question bank associated with the target application, that is, the first test question. The first object selects the correct option from the plurality of test options for the test question, and the first terminal automatically classifies the remaining test options as wrong options; after the selection is completed, the first terminal determines the test question in the fifth area as the third media resource. By randomly displaying test questions in the fifth area, the first object can learn, from the third object's answers to the test questions, how well the third object knows the first object, which increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
For example, fig. 15 is a schematic diagram illustrating an interaction mode interface corresponding to the test interaction mode according to an exemplary embodiment. As shown in fig. 15, the interaction mode interface 1501 displays a fifth area 1502, a send control 1503, a first background control 1504, and a second background control 1505. A first test question is displayed in the fifth area 1502: the test question is "What kind of fruit do you prefer?", and the test options are "apple" and "banana". The user selects the preferred fruit from the test options based on the user's own preferences, for example "banana"; in this case, the test option "banana" is the correct option of the first test question and the test option "apple" is the wrong option. The user can send the first test question to the third terminal by triggering the send control 1503. By triggering the first background control 1504, the first terminal can change the background image to a user-defined background image; by triggering the second background control 1505, the first terminal can change it to a background image provided by the target application. The interaction mode interface 1501 also displays a question switching control "change question"; by triggering it, the user can switch the first test question currently displayed in the fifth area 1502 to any other test question in the question bank.
In some embodiments, the fifth area can display a plurality of test questions in sequence. After the first terminal sends the plurality of test questions to the third terminal, the third object can select, for each test question, the option it considers correct. The first terminal can then determine a degree of matching between the first object and the third object based on the degree of difference between the third object's selections and the correct options set by the first object for the test questions.
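Assuming a simple linear scoring rule, which the disclosure does not specify, the matching degree might be computed as the fraction of matching selections:

```kotlin
// Sketch of the match-degree idea described above: the fraction of test
// questions on which the third object's selection equals the correct
// option set by the first object.
fun matchDegree(correctOptions: List<Int>, selections: List<Int>): Double {
    require(correctOptions.size == selections.size) { "one selection per question" }
    if (correctOptions.isEmpty()) return 0.0
    val matches = correctOptions.zip(selections).count { (c, s) -> c == s }
    return matches.toDouble() / correctOptions.size
}
```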
(3) The first terminal sends the third media resource to at least one third terminal, and the third terminal displays a preview component of the third media resource on the desktop of the third terminal, where the preview component is used to display preview information of the third media resource.
In the embodiment of the present disclosure, the third terminal is the terminal on which a third object is logged in, and the third object is an account logged in to the target application installed in the third terminal. Since the third object is an object having an association relationship with the first object, the first terminal can send the third media resource to the terminal on which at least one third object is logged in, that is, the third terminal, and the third terminal receives the third media resource through the target application. The third terminal may be the second terminal, or another terminal other than the first terminal, which is not limited in the embodiments of the present disclosure.
The embodiment of the disclosure provides a method for interaction between objects. By receiving, through the target application, the first media resource sent by the terminal on which a second object having an association relationship with the first object is logged in, the first terminal can display a preview component of the first media resource on its desktop. The first object logged in on the first terminal can preview the first media resource through the preview information displayed by the preview component, and by triggering the preview component, the user can jump into the target application to view the first media resource, thereby realizing interaction between objects. Compared with the related art, in which interaction between objects is realized only inside the target application, displaying a preview component of media resources on the desktop increases the diversity of interaction modes and improves the interaction efficiency between objects, thereby improving human-computer interaction efficiency and user experience.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
FIG. 16 is a block diagram illustrating an apparatus for interaction between objects, according to an example embodiment. Referring to fig. 16, the apparatus includes: a receiving unit 1601, a first display unit 1602, and a second display unit 1603.
A receiving unit 1601 configured to receive, by using a target application, a first media resource sent by a second terminal, where the target application is any application in the first terminal, the second terminal is a terminal on which a second object is registered, and the second object is an object having an association relationship with the first object;
a first display unit 1602 configured to display a preview component of the first media resource on a desktop of the first terminal, the preview component being for displaying preview information of the first media resource;
the second display unit 1603 is configured to display the first media resource in the target application in response to a trigger operation to the preview component.
In some embodiments, FIG. 17 is a block diagram illustrating another device for interaction between objects, according to an example embodiment. Referring to fig. 17, the apparatus further comprises:
the first determining unit 1604 is configured to determine, based on the target application, a second media resource corresponding to the first media resource and a target interaction mode, where the target interaction mode is an object interaction mode corresponding to the first media resource;
the first sending unit 1605 is configured to send the second media resource to the second terminal, and the second terminal displays a preview component of the second media resource on a desktop of the second terminal, where the preview component of the second media resource is used to display preview information of the second media resource.
In some embodiments, the target interaction mode is a first interaction mode, and the first interaction mode is an object interaction mode for interaction based on media resources photographed in real time;
the first determining unit 1604 is configured to display an interaction mode interface of the target application in response to the target interaction mode being a first interaction mode, wherein the interaction mode interface displays a first area, and the first area is used for displaying a picture acquired by the first terminal in real time; and responding to the shooting operation, and determining the media resources acquired in the first area as second media resources based on the target application, wherein the second media resources are videos or pictures.
In some embodiments, the target interaction mode is a second interaction mode, the second interaction mode is an object interaction mode for interaction based on media resources shot in a first same-frame mode, and the first same-frame mode is a same-frame mode for displaying a plurality of mutually independent media resources in an interaction mode interface of the target application;
the first determining unit 1604 is configured to display an interaction mode interface of the target application in response to the target interaction mode being a second interaction mode, wherein the interaction mode interface displays a first area and a second area, the first area is used for displaying a first media resource, and the second area is used for displaying a picture acquired by the first terminal in real time; and responding to shooting operation, and based on the target application, splicing the first media resource and the media resource acquired in the second area to obtain a second media resource, wherein the second media resource is a video or a picture.
In some embodiments, the target interaction mode is a third interaction mode, the third interaction mode is an object interaction mode for interaction based on media resources shot by a second same-frame mode, and the second same-frame mode is a same-frame mode for displaying a plurality of media resources overlapped with each other in an interaction mode interface of the target application;
the first determining unit 1604 is configured to display an interaction mode interface of the target application in response to the target interaction mode being a third interaction mode, wherein the interaction mode interface is displayed with a first area, all areas of the first area are used for displaying first media resources, and part areas of the first area are used for displaying pictures acquired by the first terminal in real time; and responding to shooting operation, and determining a second media resource based on the first media resource and the media resource acquired in a partial area of the first area, wherein the second media resource is a video or a picture.
In some embodiments, the target interaction mode is a fourth interaction mode, and the fourth interaction mode is an object interaction mode for interaction based on media resources shot by any special effect template;
the first determining unit 1604 is configured to display an interaction mode interface of the target application in response to the target interaction mode being a fourth interaction mode, wherein the interaction mode interface is displayed with a first area and a third area, the first area is used for displaying a picture acquired by the first terminal in real time, and the third area is used for displaying a plurality of special effect templates; in response to a selection operation of a target special effect template, adding a special effect corresponding to the target special effect template in a picture displayed in a first area, wherein the target special effect template is any special effect template in a plurality of special effect templates; and responding to shooting operation, determining the media resources acquired in the first area as second media resources based on the target application, wherein the second media resources are videos or pictures with special effects corresponding to the target special effect template.
In some embodiments, the target interaction mode is a fifth interaction mode, and the fifth interaction mode is an object interaction mode for interaction based on manually input media resources;
the first determining unit 1604 is configured to display an interaction mode interface of the target application in response to the target interaction mode being a fifth interaction mode, where the interaction mode interface displays a fourth area, and the fourth area is used for displaying prompt information, and the prompt information is used for prompting the first object to input media resources in the fourth area; in response to an input operation to the fourth region, a second media asset is determined based on the first media asset and the media asset input in the fourth region, the second media asset being text.
In some embodiments, the target interaction mode is a sixth interaction mode, where the sixth interaction mode is an object interaction mode that performs interaction based on automatically input media resources;
the first determining unit 1604 is configured to display an interaction mode interface of the target application in response to the target interaction mode being a sixth interaction mode, where the interaction mode interface displays a fifth area, and the fifth area is used for displaying a first media resource, and the first media resource includes a test question and a plurality of test options; in response to a selection operation of the target test option, a second media resource is determined based on the first media resource and the target test option, the target test option being any one of the plurality of test options, the second media resource being text.
In some embodiments, with continued reference to fig. 17, the apparatus further comprises:
a third display unit 1606 configured to display an interaction mode interface of the target application in response to an object interaction operation in the target application, the interaction mode interface displaying a plurality of object interaction modes, the object interaction operation being used to instruct the first object to interact with at least one third object, the third object being an object having an association relationship with the first object;
a second determining unit 1607 configured to, in response to a selection operation of a seventh interaction mode, determine, based on the target application, a third media resource corresponding to the seventh interaction mode, the seventh interaction mode being any one of the plurality of object interaction modes;
and a second sending unit 1608 configured to send the third media resource to at least one third terminal, the third terminal being a terminal logged in by a third object and being used for displaying a preview component of the third media resource on a desktop of the third terminal, the preview component being used for displaying preview information of the third media resource.
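On an Android-style terminal, one plausible realization of such a desktop preview component is a home-screen app widget that shows the preview information and deep-links into the target application when triggered. The sketch below is an assumption-laden illustration, not the disclosed implementation: the layout and view identifiers (R.layout.widget_preview, R.id.preview_text, R.id.widget_root) and the targetapp:// URI scheme are hypothetical, and nothing in the disclosure fixes the platform:

    import android.app.PendingIntent
    import android.appwidget.AppWidgetManager
    import android.appwidget.AppWidgetProvider
    import android.content.Context
    import android.content.Intent
    import android.net.Uri
    import android.widget.RemoteViews

    class MediaPreviewWidget : AppWidgetProvider() {
        override fun onUpdate(
            context: Context,
            manager: AppWidgetManager,
            appWidgetIds: IntArray
        ) {
            for (id in appWidgetIds) {
                val views = RemoteViews(context.packageName, R.layout.widget_preview)
                // Preview information of the received media resource (hypothetical text).
                views.setTextViewText(R.id.preview_text, "New media from a linked object")
                // A trigger operation on the component jumps into the target application.
                val open = Intent(Intent.ACTION_VIEW, Uri.parse("targetapp://media/latest"))
                val pending = PendingIntent.getActivity(
                    context, 0, open, PendingIntent.FLAG_IMMUTABLE
                )
                views.setOnClickPendingIntent(R.id.widget_root, pending)
                manager.updateAppWidget(id, views)
            }
        }
    }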
In some embodiments, the target interaction mode is a same-frame interaction mode, and the same-frame interaction mode is an object interaction mode for interaction based on media resources shot in any one of a plurality of same-frame manners;
With continued reference to fig. 17, the second determining unit 1607 includes:
a display subunit 1701 configured to, in response to the selection operation of the same-frame interaction mode, display a first area and a second area in the interaction mode interface, wherein the first area is used for displaying a picture acquired by the first terminal in real time, and the second area is used for displaying the plurality of same-frame manners;
a first determining subunit 1702 configured to, in response to a selection operation of a target same-frame manner, determine a first display area and a second display area in the first area based on the target same-frame manner, wherein the first display area is used for displaying the third media resource, the second display area is used for displaying a picture acquired by the third terminal in real time, and the target same-frame manner is any one of the plurality of same-frame manners;
and a second determining subunit 1703 configured to determine, in response to the shooting operation, the media resource acquired in the first display area as the third media resource based on the target application, the third media resource being a video or a picture.
In some embodiments, the target same-frame manner is a first same-frame manner, the first same-frame manner being a same-frame manner that displays a plurality of mutually independent media resources in the first area;
the first determining subunit 1702 is configured to, in response to the selection operation of the first same-frame manner, divide the first area, based on the first same-frame manner, into a first sub-area and a second sub-area that are independent of each other, the first sub-area being used for displaying a picture acquired by the first terminal in real time and the second sub-area being used for displaying a blank picture; determine the first sub-area as the first display area; and determine the second sub-area as the second display area.
In some embodiments, the target same-frame manner is a second same-frame manner, the second same-frame manner being a same-frame manner that displays a plurality of mutually overlapping media resources in the first area;
the first determining subunit 1702 is configured to, in response to the selection operation of the second same-frame manner, determine, based on the second same-frame manner, the whole of the first area as the first display area and a partial area of the first area as the second display area, wherein the media resource displayed in the second display area is displayed above the media resource displayed in the first display area.
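The geometry of the two same-frame manners reduces to computing the two display areas inside the first area. Below is a self-contained Kotlin sketch under stated assumptions (the Region type, the even split, and the quarter-size overlay are illustrative choices, not taken from the disclosure):

    // Hypothetical axis-aligned region inside the first area.
    data class Region(val x: Int, val y: Int, val width: Int, val height: Int)

    // First same-frame manner: two mutually independent, side-by-side sub-areas.
    fun splitIndependent(area: Region): Pair<Region, Region> {
        val half = area.width / 2
        val first = Region(area.x, area.y, half, area.height)
        val second = Region(area.x + half, area.y, area.width - half, area.height)
        return first to second
    }

    // Second same-frame manner: the whole first area is the first display area,
    // and a smaller region drawn above it is the second display area.
    fun splitOverlapping(area: Region): Pair<Region, Region> {
        val overlay = Region(area.x, area.y, area.width / 4, area.height / 4)
        return area to overlay
    }

For instance, splitIndependent(Region(0, 0, 1080, 1920)) yields two 540-pixel-wide sub-areas displayed side by side, matching the first same-frame manner described above.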
The embodiment of the disclosure provides a device for interaction between objects. Through a target application, the first terminal receives a first media resource sent by the terminal logged in by a second object that has an association relationship with the first object, and displays a preview component of the first media resource on the desktop of the first terminal. The first object logged in at the first terminal can preview the first media resource through the preview information displayed by the preview component, and, by triggering the preview component, can jump to the target application to view the first media resource, thereby realizing interaction between objects. Compared with the related art, in which interaction between objects takes place only inside the target application, displaying a preview component for media resources on the desktop increases the diversity of interaction modes, improves the efficiency of interaction between objects, improves human-machine interaction efficiency, and improves user experience.
It should be noted that the division into the above functional units is merely illustrative of how the device for interaction between objects provided in the above embodiment runs an application. In practical applications, the above functions may be allocated to different functional units as needed; that is, the internal structure of the electronic device may be divided into different functional units to complete all or part of the functions described above. In addition, the device for interaction between objects provided in the above embodiment belongs to the same concept as the method embodiment for interaction between objects; its detailed implementation process is shown in the method embodiment and is not repeated here.
The specific manner in which the various modules perform operations in the apparatus of the above embodiment has been described in detail in the embodiments of the method and will not be described in detail here.
When the electronic device is provided as a terminal, fig. 18 is a block diagram of a terminal 1800 according to an exemplary embodiment of the present disclosure. The terminal 1800 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1800 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, or the like.
In general, the terminal 1800 includes: a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1801 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit), while the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1801 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1801 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1802 may include one or more computer-readable storage media, which may be non-transitory. The memory 1802 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1802 is used to store at least one piece of program code, which is executed by the processor 1801 to implement the method for interaction between objects provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 1800 may also optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 1803 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, a display screen 1805, a camera assembly 1806, an audio circuit 1807, a positioning assembly 1808, and a power supply 1809.
The peripheral interface 1803 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1801 and the memory 1802. In some embodiments, the processor 1801, the memory 1802, and the peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1801, the memory 1802, and the peripheral interface 1803 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1804 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1804 communicates with a communication network and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1804 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1804 may also include NFC (Near Field Communication) related circuitry, which is not limited by this disclosure.
The display screen 1805 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1805 is a touch display screen, it also has the ability to collect touch signals at or above its surface; the touch signal may be input to the processor 1801 as a control signal for processing. In this case, the display screen 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1805, provided on the front panel of the terminal 1800; in other embodiments, there may be at least two display screens 1805, disposed on different surfaces of the terminal 1800 or in a folded design; in still other embodiments, the display screen 1805 may be a flexible display screen disposed on a curved surface or a folded surface of the terminal 1800. The display screen 1805 may even be arranged in a non-rectangular, irregular pattern, that is, an irregularly shaped screen. The display screen 1805 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1806 is used to capture pictures or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement a background blurring function through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting through fusion of the main camera and the wide-angle camera, or other fused shooting functions. In some embodiments, the camera assembly 1806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1807 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input the electrical signals to the processor 1801 for processing, or to the radio frequency circuit 1804 for voice communication. For stereo acquisition or noise reduction purposes, there may be multiple microphones, disposed at different locations of the terminal 1800; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1801 or the radio frequency circuit 1804 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1807 may also include a headphone jack.
The power supply 1809 is used to power the various components in the terminal 1800. The power supply 1809 may be an alternating-current supply, a direct-current supply, a disposable battery, or a rechargeable battery. When the power supply 1809 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
Those skilled in the art will appreciate that the structure shown in fig. 18 is not limiting and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as the memory 1802 including instructions executable by the processor 1801 of the terminal 1800 to perform the above-described method. Optionally, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program/instructions which, when executed by a processor, implement the above method for interaction between objects.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (15)
1. A method for interaction between objects, characterized in that the method is applied to a first terminal, the first terminal being a terminal logged in by a first object, and the method comprises:
receiving a first media resource sent by a second terminal through a target application, wherein the target application is any application in the first terminal, the second terminal is a terminal logged in by a second object, and the second object is an object with an association relation with the first object;
displaying a preview component of the first media resource on a desktop of the first terminal, wherein the preview component is used for displaying preview information of the first media resource;
and, in response to a trigger operation on the preview component, displaying the first media resource in the target application.
2. The method of interaction between objects of claim 1, further comprising:
determining a second media resource corresponding to the first media resource and a target interaction mode based on the target application, wherein the target interaction mode is an object interaction mode corresponding to the first media resource;
and sending the second media resource to the second terminal, the second terminal displaying a preview component of the second media resource on a desktop of the second terminal, wherein the preview component of the second media resource is used for displaying preview information of the second media resource.
3. The method of interaction between objects according to claim 2, wherein the target interaction mode is a first interaction mode, and the first interaction mode is an object interaction mode for interaction based on media resources photographed in real time;
the determining, based on the target application, a second media resource corresponding to the first media resource and a target interaction mode includes:
in response to the target interaction mode being the first interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface displays a first area, and the first area is used for displaying a picture acquired by the first terminal in real time;
and in response to a shooting operation, determining the media resource acquired in the first area as the second media resource based on the target application, wherein the second media resource is a video or a picture.
4. The method of interaction between objects according to claim 2, wherein the target interaction mode is a second interaction mode, the second interaction mode is an object interaction mode for interaction based on media resources shot in a first same-frame mode, and the first same-frame mode is a same-frame mode for displaying a plurality of mutually independent media resources in an interaction mode interface of the target application;
the determining, based on the target application, a second media resource corresponding to the first media resource and a target interaction mode includes:
in response to the target interaction mode being the second interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface displays a first area and a second area, the first area is used for displaying the first media resource, and the second area is used for displaying a picture acquired by the first terminal in real time;
and in response to a shooting operation, splicing, based on the target application, the first media resource and the media resource acquired in the second area to obtain the second media resource, wherein the second media resource is a video or a picture.
5. The method of interaction between objects according to claim 2, wherein the target interaction mode is a third interaction mode, the third interaction mode is an object interaction mode for interaction based on media resources shot by a second same-frame mode, and the second same-frame mode is a same-frame mode for displaying a plurality of media resources overlapping each other in an interaction mode interface of the target application;
the determining, based on the target application, a second media resource corresponding to the first media resource and a target interaction mode includes:
in response to the target interaction mode being the third interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface displays a first area, the whole of the first area is used for displaying the first media resource, and a partial area of the first area is used for displaying a picture acquired by the first terminal in real time;
and in response to a shooting operation, determining the second media resource based on the first media resource and the media resource acquired in the partial area of the first area, wherein the second media resource is a video or a picture.
6. The method of interaction between objects according to claim 2, wherein the target interaction mode is a fourth interaction mode, and the fourth interaction mode is an object interaction mode for interaction based on media resources shot with any special effect template;
the determining, based on the target application, a second media resource corresponding to the first media resource and a target interaction mode includes:
in response to the target interaction mode being the fourth interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface displays a first area and a third area, the first area is used for displaying a picture acquired by the first terminal in real time, and the third area is used for displaying a plurality of special effect templates;
In response to a selection operation of a target special effect template, adding a special effect corresponding to the target special effect template in a picture displayed in the first area, wherein the target special effect template is any special effect template in the plurality of special effect templates;
and in response to a shooting operation, determining the media resource acquired in the first area as the second media resource based on the target application, wherein the second media resource is a video or a picture with the special effect corresponding to the target special effect template.
7. The method of interaction between objects according to claim 2, wherein the target interaction mode is a fifth interaction mode, the fifth interaction mode being an object interaction mode for interaction based on manually inputted media resources;
the determining, based on the target application, a second media resource corresponding to the first media resource and a target interaction mode includes:
in response to the target interaction mode being the fifth interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface displays a fourth area, the fourth area is used for displaying prompt information, and the prompt information is used for prompting the first object to input a media resource in the fourth area;
and in response to an input operation in the fourth area, determining the second media resource based on the first media resource and the media resource input in the fourth area, wherein the second media resource is text.
8. The method of interaction between objects according to claim 2, wherein the target interaction mode is a sixth interaction mode, and the sixth interaction mode is an object interaction mode for interaction based on automatically input media resources;
the determining, based on the target application, a second media resource corresponding to the first media resource and a target interaction mode includes:
in response to the target interaction mode being the sixth interaction mode, displaying an interaction mode interface of the target application, wherein the interaction mode interface displays a fifth area, the fifth area is used for displaying the first media resource, and the first media resource includes a test question and a plurality of test options;
in response to a selection operation of a target test option, determining the second media resource based on the first media resource and the target test option, wherein the target test option is any one of the plurality of test options, and the second media resource is text.
9. The method of interaction between objects of claim 1, further comprising:
in response to an object interaction operation in the target application, displaying an interaction mode interface of the target application, wherein the interaction mode interface displays a plurality of object interaction modes, the object interaction operation is used for instructing the first object to interact with at least one third object, and the third object is an object having an association relationship with the first object;
in response to a selection operation of a seventh interaction mode, determining, based on the target application, a third media resource corresponding to the seventh interaction mode, wherein the seventh interaction mode is any one of the plurality of object interaction modes;
and sending the third media resource to at least one third terminal, wherein the third terminal is a terminal for logging in the third object, the third terminal is used for displaying a preview component of the third media resource on a desktop of the third terminal, and the preview component is used for displaying preview information of the third media resource.
10. The method of interaction between objects according to claim 9, wherein the seventh interaction mode is a same-frame interaction mode, and the same-frame interaction mode is an object interaction mode for interaction based on media resources shot in any one of a plurality of same-frame manners;
the determining, in response to the selection operation of the seventh interaction mode and based on the target application, a third media resource corresponding to the seventh interaction mode includes:
in response to the selection operation of the same-frame interaction mode, displaying a first area and a second area in the interaction mode interface, wherein the first area is used for displaying a picture acquired by the first terminal in real time, and the second area is used for displaying the plurality of same-frame manners;
in response to a selection operation of a target same-frame manner, determining a first display area and a second display area in the first area based on the target same-frame manner, wherein the first display area is used for displaying the third media resource, the second display area is used for displaying a picture acquired by the third terminal in real time, and the target same-frame manner is any one of the plurality of same-frame manners;
and in response to a shooting operation, determining the media resource acquired in the first display area as the third media resource based on the target application, wherein the third media resource is a video or a picture.
11. The method of interaction between objects according to claim 10, wherein the target same-frame manner is a first same-frame manner, the first same-frame manner being a same-frame manner that displays a plurality of mutually independent media resources in the first area;
the determining, in response to a selection operation of a target same-frame manner, a first display area and a second display area in the first area based on the target same-frame manner includes:
in response to the selection operation of the first same-frame manner, dividing the first area, based on the first same-frame manner, into a first sub-area and a second sub-area that are independent of each other, wherein the first sub-area is used for displaying a picture acquired by the first terminal in real time, and the second sub-area is used for displaying a blank picture;
determining the first sub-area as the first display area;
and determining the second sub-area as the second display area.
12. The method of interaction between objects according to claim 10, wherein the target same-frame manner is a second same-frame manner, the second same-frame manner being a same-frame manner that displays a plurality of mutually overlapping media resources in the first area;
the determining, in response to a selection operation of a target same-frame manner, a first display area and a second display area in the first area based on the target same-frame manner includes:
in response to the selection operation of the second same-frame manner, determining, based on the second same-frame manner, the whole of the first area as the first display area and a partial area of the first area as the second display area, wherein the media resource displayed in the second display area is displayed above the media resource displayed in the first display area.
13. An apparatus for interaction between objects, the apparatus comprising:
a receiving unit configured to receive a first media resource sent by a second terminal through a target application, wherein the target application is any application in a first terminal, the second terminal is a terminal logged in by a second object, and the second object is an object having an association relationship with the first object;
a first display unit configured to display a preview component of the first media resource on a desktop of the first terminal, where the preview component is used to display preview information of the first media resource;
and a second display unit configured to display the first media resource in the target application in response to a trigger operation on the preview component.
14. An electronic device, the electronic device comprising:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the method for interaction between objects according to any one of claims 1 to 12.
15. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of interaction between objects according to any of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310926943.6A CN116962338A (en) | 2023-07-26 | 2023-07-26 | Method and device for interaction between objects, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116962338A true CN116962338A (en) | 2023-10-27 |
Family
ID=88456219
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310926943.6A Pending CN116962338A (en) | 2023-07-26 | 2023-07-26 | Method and device for interaction between objects, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116962338A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||