CN114928761B - Video sharing method and device and electronic equipment - Google Patents


Info

Publication number
CN114928761B
CN114928761B (application CN202210496636.4A)
Authority
CN
China
Prior art keywords
contact
target
input
video
identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210496636.4A
Other languages
Chinese (zh)
Other versions
CN114928761A (en)
Inventor
刘明 (Liu Ming)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210496636.4A priority Critical patent/CN114928761B/en
Publication of CN114928761A publication Critical patent/CN114928761A/en
Application granted granted Critical
Publication of CN114928761B publication Critical patent/CN114928761B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a video sharing method and device and an electronic device, belonging to the technical field of image processing. The method comprises the following steps: while a playing interface of a target video is displayed, receiving a user's first input on a first object in a target picture, wherein the target picture is any video picture of the target video; in response to the first input, associating the first object with a first contact; and sending a first video clip to the first contact, wherein the first video clip is generated based on target image content in at least two target video frames, the target video frames are video frames of the target video that include the first object, and the target image content includes the image content corresponding to the first object.

Description

Video sharing method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a video sharing method, a video sharing device and electronic equipment.
Background
Sharing information through electronic devices has gradually become a trend. Users can share moments from their lives with users of other electronic devices in the form of pictures, text, and videos, and can quickly share favorite videos with friends through their electronic devices.
At present, when a user shares a video through an electronic device, the user has to send a video file containing the complete video content to the other party. The shared video file therefore has a large data volume and consumes considerable data traffic.
Disclosure of Invention
The embodiments of the present application aim to provide a video sharing method, a video sharing device, and an electronic device, which can solve the problem in the related art that shared video files have a large data volume.
In a first aspect, an embodiment of the present application provides a video sharing method, where the method includes:
receiving a first input of a first object in a target picture by a user under the condition of displaying a playing interface of the target video, wherein the target picture is any video picture of the target video;
in response to the first input, associating the first object with a first contact;
and sending a first video clip to the first contact, wherein the first video clip is a video clip generated based on target image content in at least two frames of target video frames, the target video frames are video frames comprising the first object in the target video, and the target image content comprises image content corresponding to the first object.
In a second aspect, an embodiment of the present application provides a video sharing device, including:
the first receiving module is used for receiving a first input of a first object in a target picture by a user under the condition of displaying a playing interface of the target video, wherein the target picture is any video picture of the target video;
a first response module for associating the first object with a first contact in response to the first input;
the sending module is used for sending a first video clip to the first contact person, wherein the first video clip is a video clip generated based on target image content in at least two frames of target video frames, the target video frames are video frames comprising the first object in the target video, and the target image content comprises image content corresponding to the first object.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In this embodiment of the present application, while a playing interface of a target video is displayed, a user's first input on a first object in a target picture is received. Because the target picture is any video picture of the target video, the user can perform the first input on any first object of interest at any time while watching the target video. Then the first object is associated with a first contact, the first contact being the party the video is shared with, and finally a first video clip is sent to the first contact to complete the sharing. Because the first video clip is generated based on target image content in at least two target video frames, where the target video frames are the video frames of the target video that include the first object and the target image content includes the image content corresponding to the first object, the complete target video does not need to be sent to the first contact; instead, the video content the user is interested in is sent to the first contact in a targeted manner, which reduces the data volume of the shared video file and saves data traffic for both sharing parties.
Drawings
Fig. 1 is a flowchart of steps of a video sharing method provided in an embodiment of the present application;
FIG. 2 is an interface schematic of a first control in an embodiment of the present application;
FIG. 3 is an interface schematic diagram of a first contact identifier in an embodiment of the present application;
FIG. 4 is one of the interface schematics of the input of a first contact identification in an embodiment of the present application;
FIG. 5 is a second exemplary interface diagram for entering a first contact identifier in an embodiment of the present application;
FIG. 6 is a third exemplary interface diagram for entering a first contact identification in an embodiment of the present application;
FIG. 7 is a schematic view of an interface of a first window in an embodiment of the present application;
fig. 8 is a block diagram of a video sharing device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 10 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. In addition, the objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more objects. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The video sharing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenarios thereof with reference to the accompanying drawings.
As shown in fig. 1, the video sharing method provided in the embodiment of the present application includes:
step 101: and under the condition of displaying a playing interface of the target video, receiving a first input of a first object in a target picture by a user.
In this step, the target video is a video file in any video format. Displaying a playing interface of the target video includes: playing the target video, or pausing the target video. The target picture is any video picture of the target video, that is, the video picture of the target video that is currently displayed. The first object is any object in the target picture; its type is not limited here and may be, for example, a person, an animal, or a building.
The first input may be: a click input by the user on the target area, a voice command input by the user, or a specific gesture input by the user. The specific form may be determined according to actual use requirements, and is not limited in the embodiments of the present application.
The specific gesture in the embodiment of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application may be single click input, double click input, or any number of click inputs, and may also be long press input or short press input.
It can be understood that the first object is an object of interest to the user, the user selects the object of interest in the target screen to perform the first input, and the selected object is the first object.
Step 102: in response to the first input, a first object is associated with the first contact.
In this step, the first contact is any contact stored in the electronic device, for example, the first contact may be any friend in the social application, but is not limited thereto. Associating the first object with the first contact may be understood as establishing or recording an association relationship between the first object and the first contact.
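The association described in step 102 can be sketched as a simple mapping from on-screen objects to contacts. This is a minimal illustrative sketch only; the class and method names are hypothetical and stand in for whatever record-keeping the device actually uses:

```python
class ShareAssociations:
    """Hypothetical store for object-to-contact associations (step 102)."""

    def __init__(self):
        # object label -> set of contact ids associated with that object
        self._map = {}

    def associate(self, object_id, contact_id):
        # establish/record the association between an object and a contact
        self._map.setdefault(object_id, set()).add(contact_id)

    def disassociate(self, object_id, contact_id):
        # cancel a previously recorded association, if present
        self._map.get(object_id, set()).discard(contact_id)

    def contacts_for(self, object_id):
        # contacts that will receive this object's video clip
        return set(self._map.get(object_id, ()))
```

The same store also supports the later "cancel association" behavior (second and sixth inputs) via `disassociate`.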
Step 103: the first video clip is sent to the first contact.
In this step, the first video clip is a video clip generated based on target image content in at least two target video frames, where the target video frames are the video frames of the target video that include the first object, and the target image content includes the image content corresponding to the first object. Here, the first video clip can also be understood as the video content of the first object in the target video. It may include only the first object, but is not limited thereto. For example, if the first object is person A in the target picture, the first video clip may contain only person A, or it may contain all image content except another person B.
In this embodiment of the present application, while a playing interface of a target video is displayed, a user's first input on a first object in a target picture is received. Because the target picture is any video picture of the target video, the user can perform the first input on any first object of interest at any time while watching the target video. Then the first object is associated with a first contact, the first contact being the party the video is shared with, and finally a first video clip is sent to the first contact to complete the sharing. Because the first video clip is generated based on target image content in at least two target video frames, where the target video frames are the video frames of the target video that include the first object and the target image content includes the image content corresponding to the first object, the complete target video does not need to be sent to the first contact; instead, the video content the user is interested in is sent to the first contact in a targeted manner, which reduces the data volume of the shared video file and saves data traffic for both sharing parties.
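The frame-selection part of step 103 can be sketched as follows. This is an assumption-laden illustration, not the patent's implementation: `detect_objects` is a hypothetical stand-in for whatever object detector the device uses, and the sketch only shows the filtering logic (at least two target frames, each containing the first object) without any re-encoding:

```python
def build_clip(frames, target_object, detect_objects):
    """Collect the frames from which a clip for target_object could be built.

    frames: list of opaque frame objects.
    detect_objects: hypothetical callback, frame -> set of object labels.
    Returns the target frames, or None if fewer than two exist.
    """
    target_frames = [f for f in frames if target_object in detect_objects(f)]
    # the method requires at least two target video frames
    if len(target_frames) < 2:
        return None
    # a real implementation would crop/re-encode these frames into a clip
    return target_frames
```

A usage example with toy frames represented as dicts: frames labelled with the objects a detector would report.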
Optionally, the first input includes a first sub-input, a second sub-input, a third sub-input, and a fourth sub-input, and receiving the first input of the first object in the target screen by the user in step 101 may include:
receiving a first sub-input of a first object in a target picture by a user;
wherein the first sub-input includes, but is not limited to, click, slide, long press, etc.
Step 102 above: in response to the first input, associating the first object with the first contact, including:
the first control is displayed in response to the first sub-input.
In this step, the first control may be displayed at any position on the screen of the electronic device and in any form. For example, the first control may be displayed in a display area near the first object to avoid occluding the first object. As shown in fig. 2, when a user performs a first sub-input on the first object 21, a first control 22 is displayed in a display area near the first object 21.
And displaying at least one application program identifier under the condition that the second sub-input of the user to the first control is received, wherein each application program identifier corresponds to one application program.
In this step, the second sub-input includes, but is not limited to, a click, a slide, a long press, etc. The applications corresponding to the at least one application identifier are applications installed in the electronic device, preferably social applications. It will be appreciated that because people's preferences and habits differ, the applications chosen by different people may or may not be the same; for the user of an electronic device, the applications used by contacts such as friends and relatives may not be exactly the same. For example, with applications A and B installed in the electronic device, some of the user's friends and relatives may use only application A, and the user communicates with them through application A; some may use only application B, and the user communicates with them through application B; and some use both, so the user can communicate with them through either application. Preferably, the at least one application identifier covers all applications, or all social applications, installed in the electronic device, so as to give the user the widest possible choice.
And displaying at least one contact identifier corresponding to a target application when a third sub-input by the user on a target application identifier among the at least one application identifier is received, wherein the target application is the application corresponding to the target application identifier.
In this step, the third sub-input includes, but is not limited to, a click, a slide, a long press, etc. When at least one application identifier is displayed, the user may select any application identifier as the target application identifier according to the situation of the party being shared with. For example, suppose the sharee is the user's brother, and the at least one application identifier includes a first identifier corresponding to application A and a second identifier corresponding to application B. If the user's brother is a contact only in application A, the user performs the third sub-input on the first identifier, which is then the target application identifier, and at least one contact identifier corresponding to application A, that is, the contact identifier of at least one contact in application A, is displayed.
It is to be appreciated that the contact identification can be, but is not limited to, a nickname, a real name, an avatar, etc. of the contact. Here, each contact has a contact identifier by which a different contact can be distinguished. Here, the buddy list of the target application may be directly displayed, but is not limited thereto.
And when a fourth sub-input by the user on a first contact identifier among the at least one contact identifier is received, associating the first object with the first contact, wherein the first contact is the contact corresponding to the first contact identifier.
In this step, at least one contact identifier displayed is provided for the user to select, i.e. select the shared party. Here, the first contact identifier may be any one of at least one contact identifier, i.e. a contact identifier of the shared party selected by the user. The fourth sub-input includes, but is not limited to, a click, a slide, a long press, etc.
In the embodiment of the application, the first control serves as the entry for selecting the party to share with: after the user triggers the first control, at least one application identifier is displayed for a first selection, and after the user selects an application, the contacts of that application are provided for a second selection. This two-step selection gives the user a larger range of choices and thus satisfies the user's sharing needs to the greatest extent.
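The two-step selection above (application first, then a contact within it) can be sketched as a minimal flow. The data layout and callback names are illustrative assumptions, not the patent's implementation:

```python
def pick_contact(apps, choose_app, choose_contact):
    """Two-stage sharee selection sketch.

    apps: dict mapping application name -> list of contact ids in that app.
    choose_app / choose_contact: hypothetical UI callbacks that return
    the user's selection from the displayed identifiers.
    """
    # first selection: the user picks a target application identifier
    app = choose_app(sorted(apps))
    # second selection: the user picks a contact identifier within that app
    contact = choose_contact(apps[app])
    return app, contact
```

For example, callbacks that always pick the first displayed identifier would select the first app and its first contact.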
Optionally, after associating the first object with the first contact, the method further comprises:
And displaying a first contact identifier corresponding to the first contact in a first display area, wherein the first display area comprises a display area corresponding to a first object in the target picture.
It should be noted that the first contact identifier may occlude the first object; as shown in fig. 3, the first contact identifier 31 occludes the first object 32. Alternatively, the first contact identifier may float over the first object with a certain transparency.
In the embodiment of the application, the contact person identifier of the shared party is displayed in the first display area corresponding to the first object, so that the association relationship between the first object and the shared party can be intuitively displayed, and the user can conveniently view the association relationship.
Optionally, after the first display area displays the first contact identifier corresponding to the first contact, the method further includes:
receiving a second input of the user to the first contact identification;
in response to the second input, the display of the first contact identification is canceled and the first object is disassociated with the first contact.
It should be noted that the second input may be a click, a slide, a long press, or the like input, but is not limited thereto. As shown in fig. 4, the second input of the first contact identification 41 may be dragging the first contact identification 41 from the location displayed by the first object 42 to outside the first display area.
It will be appreciated that after disassociating the first object from the first contact, the first video clip associated with the first object will not be sent to the first contact. At this time, the user may reselect the first object and make a first input to the first object. I.e. steps 101 to 103 can be re-performed.
In the embodiment of the application, the removal function is provided through the first contact identification, so that the first contact identification can be removed, and the association between the first object and the first contact can be canceled, thereby facilitating the user to reselect the first object and the first contact associated with the first object.
Optionally, after the first display area displays the first contact identifier corresponding to the first contact, the method further includes:
receiving a third input of a user to the first contact identification;
and responding to the third input, canceling the display of the first contact person identifier in the first display area, and displaying the first contact person identifier in the second display area, wherein the second display area comprises a display area corresponding to the second object in the target picture.
It should be noted that the third input may be, but is not limited to, a click, a slide, or a long press. The second object is any object in the target picture other than the first object. As shown in fig. 5, the third input on the first contact identification 51 may be dragging the first contact identification 51 from the position of the first object 52 to the position of the second object 53. After selecting a first object and its associated first contact, the user may change his or her mind and wish to share a video clip of a different object with the first contact instead. In that case, the user only needs to perform the third input on the first contact identifier. For any object, the video clip of that object is generated in the same way as the first video clip of the first object, which is not repeated here.
It should be noted that when the first display area no longer displays the first contact identifier, the first object is disassociated from the first contact; and when the second display area displays the first contact identifier, the second object is associated with the first contact. Because the first object is disassociated from the first contact and the object now associated with the first contact is the second object, the first video clip associated with the first object will no longer be sent to the first contact; instead, a second video clip associated with the second object will be sent. The second video clip is similar to the first video clip, the difference being that the target image content of the first video clip is the image content corresponding to the first object, whereas the target image content of the second video clip is the image content corresponding to the second object.
In the embodiment of the application, the association relationship between the contact and the object in the video can be updated by inputting the first contact identifier.
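The third-input behavior above (dragging the contact identifier from the first object to a second object) amounts to moving the contact between two association sets. A minimal sketch, with a plain dict standing in for the device's association store (names are illustrative):

```python
def reassign(associations, contact_id, old_object, new_object):
    """Move a contact's association from one on-screen object to another.

    associations: dict mapping object label -> set of contact ids.
    """
    # the old object is disassociated from the contact...
    associations.get(old_object, set()).discard(contact_id)
    # ...and the object the identifier was dropped on gains the association
    associations.setdefault(new_object, set()).add(contact_id)
```

After this, only the new object's clip would be sent to the contact, mirroring the behavior the paragraph above describes.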
Optionally, after the first display area displays the first contact identifier corresponding to the first contact, the method further includes:
Receiving a fourth input of the user to the first contact identification;
and responding to the fourth input, displaying the first contact person identifier in a third display area, and keeping the first display area to display the first contact person identifier, wherein the third display area comprises a display area corresponding to a third object in the target picture.
It should be noted that the fourth input may be, but is not limited to, a click, a slide, or a long press. The third object is any object in the target picture other than the first object; that is, the third object and the second object may be the same or different. As shown in fig. 6, the fourth input on the first contact identification 61 may be dragging the first contact identification 61 from the position of the first object 62 to the position of the third object 63. After selecting the first object and the first contact, the user may have a new idea, for example, wanting to share video clips of additional objects with the first contact as well. In that case, the user only needs to perform the fourth input on the first contact identifier. For any object, the video clip of that object is generated in the same way as the first video clip of the first object, which is not repeated here.
It should be noted that when the first contact identifier is displayed in the third display area while remaining displayed in the first display area, the third object is associated with the first contact in addition to the first object. In this case the first video clip associated with the first object alone is no longer sent to the first contact; instead, a second video clip associated with both the first object and the third object is sent. This second video clip is similar to the first video clip, the difference being that the target image content of the first video clip is the image content corresponding to the first object, whereas the target image content of the second video clip is the image content corresponding to both the first object and the third object.
In the embodiment of the application, the object contained in the video clip can be updated by inputting the first contact identifier.
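The fourth-input behavior (keeping the first object's association and adding a third object) can be sketched as extending the set of objects whose content goes into the clip sent to a given contact. The names below are illustrative assumptions:

```python
def extend_share(contact_objects, contact_id, extra_object):
    """Add another object to the set shared with a contact.

    contact_objects: dict mapping contact id -> set of object labels
    whose image content the clip for that contact will contain.
    Returns the updated set for the contact.
    """
    contact_objects.setdefault(contact_id, set()).add(extra_object)
    return contact_objects[contact_id]
```

The resulting set drives clip generation: the target image content covers every object in the set, as the paragraph above describes.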
Optionally, before sending the first video clip to the first contact, the method further comprises:
and displaying a first window, wherein the first window comprises a video picture of the first video clip and a first contact identifier.
It should be noted that the first window may be a window control, for example displayed in a floating manner as a widget. It can be understood that the number of windows displayed equals the number of objects the user has selected in the target picture, in one-to-one correspondence. For example, as shown in fig. 7, two windows are displayed: a first window 71 corresponding to a first object 73 and a second window 72 corresponding to a second object 74. The first window 71 displays a first video picture 75 and a first contact identification 76 associated with the first object 73; the second window 72 displays a second video picture 77 and a second contact identifier 78 associated with the second object 74.
In the embodiment of the application, the contact identifier of the party being shared with, together with the video picture of the first video clip to be sent to that party, is displayed in the first window, so that the user can conveniently and quickly check both the sharee and the shared content.
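The one-window-per-selected-object rule can be sketched as building one preview record per (object, contact) selection. The field names here are illustrative assumptions, not part of the patent:

```python
def build_windows(selections):
    """One floating preview window per selection, in one-to-one
    correspondence with the objects the user picked.

    selections: list of (object_label, contact_id) pairs.
    """
    return [{"object": obj, "contact": cid} for obj, cid in selections]
```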
Optionally, step 103 above: transmitting the first video clip to the first contact, comprising:
receiving a fifth input of a user to a first contact identifier in a first window;
and responding to the fifth input, and sending the first video clip to the contact corresponding to the first contact identifier.
It should be noted that the fifth input may be a click, a slide, a long press, or the like input, but is not limited thereto.
In the embodiment of the application, the first video clip can be sent quickly through the first contact person identifier in the first window.
Optionally, after displaying the first window, the method further comprises:
receiving a sixth input of a user to the first contact identification in the first window;
in response to the sixth input, the display of the first contact identification is canceled in the first window and the first object is disassociated from the first contact.
It should be noted that the sixth input may be, for example, a click, slide, or long-press input, but is not limited thereto. The operation performed in response to the sixth input is similar to the operation performed in response to the second input in the above-described embodiment, and is not repeated here. It will be appreciated that after the first object is disassociated from the first contact, the first video clip associated with the first object will not be sent to the first contact. At this time, the user may reselect the first object and perform a first input on it; that is, steps 101 to 103 may be performed again.
In the embodiment of the application, the removal function is provided through the first contact identification, so that the first contact identification can be removed, and the association between the first object and the first contact can be canceled, thereby facilitating the user to reselect the first object and the first contact associated with the first object.
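The association and removal steps above amount to maintaining a simple mapping from objects to contacts. A minimal sketch of such a store, under the assumption that objects and contacts are referred to by string identifiers (the class and method names here are illustrative, not from the patent):

```python
class AssociationStore:
    """Illustrative object-to-contact association store."""

    def __init__(self):
        self._map = {}  # object identifier -> contact identifier

    def associate(self, object_id, contact_id):
        self._map[object_id] = contact_id

    def disassociate(self, object_id):
        # After disassociation, the video clip for this object is no
        # longer sent to the contact (the effect of the sixth input).
        self._map.pop(object_id, None)

    def contact_for(self, object_id):
        return self._map.get(object_id)
```

Re-selecting an object and repeating the first input then simply calls `associate` again with a new contact.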
Optionally, in the case of simultaneously displaying the first window and the second window, the method further comprises:
receiving a seventh input of a user to a target contact identifier in a target window;
in response to the seventh input, updating the display position of the target contact identifier and updating the association relationship between the target contact and the target video clip.
It should be noted that the second window includes a video picture of the second video clip and a second contact identifier; the target window includes at least one of the first window and the second window; the target contact identifier includes at least one of the first contact identifier and the second contact identifier; the target contact is the contact corresponding to the target contact identifier, and the target video clip is the video clip corresponding to the target window. The second video clip is similar to the first video clip: the target image content in the first video clip is the image content corresponding to the first object, and the target image content in the second video clip is the image content corresponding to the second object.
It will be appreciated that the operation performed in response to the seventh input may be similar to the operation performed in response to the third input or to the fourth input. Specifically, the first contact identifier may be canceled from display in the first window and displayed in the second window instead; at the same time, the first object is disassociated from the first contact and the second object is associated with the first contact.
Alternatively, the first contact identifier may be displayed in the second window while remaining displayed in the first window; in this case, the second object is associated with the first contact while the association between the first object and the first contact is maintained.
In the embodiment of the application, under the condition that the first window and the second window are displayed simultaneously, the adjustment function is provided through the contact person identification in the window, so that a user can be helped to quickly adjust the shared party.
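The two behaviors just described — moving an identifier (re-associate) versus copying it (add an association) — can be sketched over a mapping from each object to the set of contacts it is shared with. All names here are illustrative assumptions:

```python
from collections import defaultdict

def move_identifier(assoc, contact, src_object, dst_object):
    """Move a contact identifier from one window to another:
    the old association is canceled, the new one is created."""
    assoc[src_object].discard(contact)  # cancel the old association
    assoc[dst_object].add(contact)      # associate with the new object

def copy_identifier(assoc, contact, src_object, dst_object):
    """Display the identifier in a second window while keeping it in
    the first: the source association is retained."""
    assoc[dst_object].add(contact)
```

With `assoc = defaultdict(set)`, a seventh input that drags the identifier calls `move_identifier`, while one that duplicates it calls `copy_identifier`.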
Optionally, before sending the first video clip to the first contact, the method further comprises:
and extracting a target video frame with the first object in the target video.
removing all objects other than the first object from the foreground image of each target video frame to obtain processed video frames.
It should be noted that only the foreground image of the target video frame is processed; the image content of the background image may be preserved. When removing objects from the target video frame, the first object and the background image may be retained by matting, thereby removing the objects other than the first object. Alternatively, a clipping mode may be adopted, in which the region to which the first object belongs is clipped out and the clipped image content containing the first object is retained. It is also possible to first delete the objects other than the first object and then fill the deleted regions with the surrounding pixels.
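The first of these strategies can be sketched with per-object segmentation masks and a clean background plate, both assumed to come from an upstream segmentation step (the function and parameter names are illustrative):

```python
def keep_only_object(frame, masks, keep_id, background):
    """Replace the pixels of every foreground object except `keep_id`
    with the corresponding background pixels.

    `frame` and `background` are 2-D grids of pixel values; `masks`
    maps an object identifier to a boolean grid marking that object's
    pixels. This is a sketch, not the patent's implementation.
    """
    out = [row[:] for row in frame]
    for object_id, mask in masks.items():
        if object_id == keep_id:
            continue  # the first object's pixels are preserved
        for y, mask_row in enumerate(mask):
            for x, covered in enumerate(mask_row):
                if covered:
                    out[y][x] = background[y][x]
    return out
```

The inpainting variant mentioned above would instead synthesize the fill pixels from the surrounding region rather than read them from a background plate.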
A first video clip is generated based on the processed video frames.
It should be noted that the same object may be extracted and shared with different sharees, and the same sharee may be associated with multiple objects. In the case where multiple objects are associated with the same sharee, a single video file is generated from the image information of those objects and sent to the sharee. For example, suppose the target picture of the target video contains a person A, a person B, and a person C. If only person A is associated with the social contact Xiao Zhang, persons B and C are not extracted or shared, and the first video clip sent to Xiao Zhang does not include them. If both person A and person B are associated with Xiao Zhang, one video file is generated based on the image information of persons A and B, rather than two video files. It will be appreciated that the embodiments of the present application merely take a person as an example of the object; the object is not limited to a person.
In the embodiment of the application, the first video clip removes the objects except the first object in the foreground image, so that the first video clip has more pertinence and can better meet the user requirements.
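The one-file-per-sharee rule above is an inversion of the object-to-contact mapping: all objects associated with the same contact are grouped, and one clip is generated per group. A minimal sketch (identifiers are illustrative):

```python
from collections import defaultdict

def group_objects_by_sharee(associations):
    """Invert an object -> contact mapping so that all objects shared
    with the same contact form one group, from which a single video
    file is generated (rather than one file per object)."""
    groups = defaultdict(list)
    for object_id, contact_id in associations.items():
        groups[contact_id].append(object_id)
    return dict(groups)
```

The clip-generation step would then keep, for each group, the union of the image content of that group's objects.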
It should be noted that, in the video sharing method provided in the embodiment of the present application, the execution subject may be a video sharing device. In the embodiment of the present application, a video sharing method performed by a video sharing device is taken as an example, and the video sharing device provided in the embodiment of the present application is described.
As shown in fig. 8, the embodiment of the present application further provides a video sharing device, where the device includes:
a first receiving module 81, configured to receive, when displaying a playing interface of a target video, a first input of a first object in a target frame by a user, where the target frame is any video frame of the target video;
a first response module 82 for associating the first object with the first contact in response to the first input;
the sending module 83 is configured to send a first video clip to the first contact, where the first video clip is a video clip generated based on target image content in at least two frames of target video frames, and the target video frames are video frames in the target video including a first object, and the target image content includes image content corresponding to the first object.
Optionally, the first input includes a first sub-input, a second sub-input, a third sub-input, and a fourth sub-input, and the first receiving module 81 is specifically configured to receive a first sub-input of a first object in the target screen by a user;
The first response module 82 includes:
a first response subunit for displaying the first control in response to the first sub-input;
the second response subunit is used for displaying at least one application program identifier under the condition that a second sub-input of the user to the first control is received, wherein each application program identifier corresponds to one application program;
a third response subunit, configured to display at least one contact identifier corresponding to a target application program under the condition that a third sub-input of the target application identifier from the at least one application program identifier by the user is received, where the target application program is an application program corresponding to the target application program identifier;
and the fourth response subunit is used for associating the first object with the first contact under the condition that the fourth sub-input of the first contact identifier in the at least one contact identifier is received by the user, wherein the first contact is a contact corresponding to the first contact identifier.
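The four sub-inputs handled by these subunits form a small UI state machine: show a control near the object, then the application identifiers, then the chosen application's contacts, and finally record the association. A hedged sketch — the state fields and return values are assumptions, not the patent's implementation:

```python
class ShareFlow:
    """Illustrative state machine for the four sub-inputs."""

    def __init__(self):
        self.object_id = None
        self.app_id = None
        self.associations = {}  # object identifier -> (app, contact)

    def first_sub_input(self, object_id):
        self.object_id = object_id
        return "show_first_control"            # displayed near the object

    def second_sub_input(self):
        return "show_application_identifiers"  # one per application

    def third_sub_input(self, app_id):
        self.app_id = app_id
        return "show_contact_identifiers"      # contacts of the target app

    def fourth_sub_input(self, contact_id):
        # Associate the first object with the first contact.
        self.associations[self.object_id] = (self.app_id, contact_id)
        return "associated"
```

Each return value stands in for the display action the corresponding response subunit would trigger.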
Optionally, the apparatus further comprises:
the first display module is used for displaying a first contact identifier corresponding to a first contact in a first display area, wherein the first display area comprises a display area corresponding to a first object in a target picture.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving a second input of the user on the first contact identification;
and the second response module is used for responding to the second input, canceling the display of the first contact identification and canceling the association of the first object and the first contact.
Optionally, the apparatus further comprises:
the third receiving module is used for receiving a third input of the user on the first contact identification;
and the third response module is used for responding to the third input, canceling the display of the first contact person identifier in the first display area and displaying the first contact person identifier in the second display area, wherein the second display area comprises a display area corresponding to the second object in the target picture.
Optionally, the apparatus further comprises:
the fourth receiving module is used for receiving a fourth input of the user on the first contact identification;
and the fourth response module is used for responding to the fourth input, displaying the first contact person identifier in a third display area and keeping the first display area to display the first contact person identifier, wherein the third display area comprises a display area corresponding to a third object in the target picture.
Optionally, the apparatus further comprises:
and the second display module is used for displaying a first window, wherein the first window comprises a video picture of the first video clip and a first contact person identifier.
Optionally, the sending module 83 includes:
a fifth receiving unit, configured to receive a fifth input of a user to the first contact identifier in the first window;
and the fifth response unit is used for responding to the fifth input and sending the first video clip to the contact corresponding to the first contact identifier.
Optionally, the apparatus further comprises:
the sixth receiving module is used for receiving a sixth input of a user on the first contact identification in the first window;
and the sixth response module is used for responding to the sixth input, canceling the display of the first contact identification in the first window and canceling the association of the first object and the first contact.
Optionally, in the case of simultaneously displaying the first window and the second window, the apparatus further includes:
the seventh receiving module is used for receiving a seventh input of a user on the target contact person identification in the target window;
the seventh response module is used for responding to the seventh input, updating the display position of the target contact person identifier and updating the association relation between the target contact person and the target video clip;
the second window comprises a video picture of a second video clip and a second contact identifier; the target window includes at least one of the first window and the second window; the target contact identifier includes at least one of the first contact identifier and the second contact identifier; the target contact is the contact corresponding to the target contact identifier, and the target video clip is the video clip corresponding to the target window.
In this embodiment of the present invention, under the condition of displaying a playing interface of a target video, a first input of a user to a first object in a target frame is received, and because the target frame is any video frame of the target video, the user can perform the first input to the first object interested in the target video at any time in the process of watching the target video. And then, associating the first object with a first contact, wherein the first contact is an object for sharing the video, and finally, sending the first video clip to the first contact to complete the sharing of the video. Because the first video clip is generated based on the target image content in at least two frames of target video frames, wherein the target video frames are video frames comprising the first object in the target video, and the target image content comprises the image content corresponding to the first object, the complete target video is not required to be sent to the first contact, and the video content interested by the user is sent to the first contact in a targeted manner, so that the data volume of the shared video file is reduced, and meanwhile, the traffic of both sharing parties is saved.
The video sharing device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile Internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, or self-service machine, etc.; the embodiments of the present application are not specifically limited in this respect.
The video sharing device in the embodiment of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The video sharing device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 7, and achieve the same technical effects, so that repetition is avoided, and no further description is given here.
Optionally, as shown in fig. 9, the embodiment of the present application further provides an electronic device 900, including a processor 901 and a memory 902, where the memory 902 stores a program or instructions executable on the processor 901. The program or instructions, when executed by the processor 901, implement the steps of the above video sharing method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
The user input unit 1007 is configured to receive, when the playing interface of the target video is displayed, a first input of a first object in a target frame by a user, where the target frame is any video frame of the target video;
a processor 1010 for associating a first object with a first contact in response to a first input;
the processor 1010 is further configured to send a first video clip to the first contact, where the first video clip is a video clip generated based on target image content in at least two target video frames, the target video frames being video frames in the target video including a first object, and the target image content including image content corresponding to the first object.
In this embodiment of the present invention, under the condition of displaying a playing interface of a target video, a first input of a user to a first object in a target frame is received, and because the target frame is any video frame of the target video, the user can perform the first input to the first object interested in the target video at any time in the process of watching the target video. And then, associating the first object with the first contact, wherein the first contact is the object for sharing the video, and finally, sending the first video clip to the first contact to complete the sharing of the video. Because the first video clip is generated based on the target image content in at least two frames of target video frames, the target video frames are video frames of which the target video comprises a first object, and the target image content comprises the image content corresponding to the first object, the complete target video is not required to be sent to the first contact, and the video content interested by the user is sent to the first contact in a targeted manner, so that the data volume of the shared video file is reduced, and meanwhile, the traffic of both sharing parties is saved.
Optionally, the display unit 1006 is configured to display a first contact identifier corresponding to a first contact in a first display area, where the first display area includes a display area corresponding to a first object in the target screen.
In the embodiment of the application, the contact person identifier of the shared party is displayed in the first display area corresponding to the first object, so that the association relationship between the first object and the shared party can be intuitively displayed, and the user can conveniently view the association relationship.
Optionally, the user input unit 1007 is further configured to receive a second input of the user on the first contact identifier;
the processor 1010 is further configured to control the display unit 1006 to cancel displaying the first contact identifier and to cancel associating the first object with the first contact in response to the second input.
In the embodiment of the application, the removal function is provided through the first contact identification, so that the first contact identification can be removed, and the association between the first object and the first contact can be canceled, thereby facilitating the user to reselect the first object and the first contact associated with the first object.
Optionally, the user input unit 1007 is further configured to receive a third input of the user on the first contact identifier;
the processor 1010 is further configured to, in response to the third input, control the display unit 1006 to cancel displaying the first contact identifier in a first display area, and display the first contact identifier in a second display area, where the second display area includes a display area corresponding to the second object in the target screen.
In the embodiment of the application, the association relationship between the contact and the object can be updated by inputting the first contact identifier.
Optionally, the user input unit 1007 is further configured to receive a fourth input of the user on the first contact identifier;
the processor 1010 is further configured to, in response to the fourth input, control the display unit 1006 to display the first contact identifier in a third display area, and keep the first display area to display the first contact identifier, where the third display area includes a display area corresponding to a third object in the target screen.
In the embodiment of the application, the object contained in the video clip can be updated by inputting the first contact identifier.
Optionally, the display unit 1006 is further configured to display a first window, where the first window includes a video frame of the first video clip and a first contact identifier.
In the embodiment of the invention, the contact person identifier of the shared party and the video picture of the first video clip to be sent to the shared party are displayed through the first window, so that a user can conveniently and quickly check the shared party and the shared content.
Optionally, the user input unit 1007 is further configured to receive a sixth input of the user on the first contact identifier in the first window;
The processor 1010 is further configured to control the display unit 1006 to cancel displaying the first contact identifier in the first window and to cancel associating the first object with the first contact in response to the sixth input.
In the embodiment of the application, the removal function is provided through the first contact identification, so that the first contact identification can be removed, and the association between the first object and the first contact can be canceled, thereby facilitating the user to reselect the first object and the first contact associated with the first object.
Optionally, in the case that the first window and the second window are displayed simultaneously, the user input unit 1007 is further configured to receive a seventh input of the user on the target contact identifier in the target window;
the processor 1010 is further configured to update a display position of the target contact identifier and update an association relationship between the target contact and the target video clip in response to the seventh input;
the second window comprises a video picture of a second video clip and a second contact identifier; the target window includes at least one of the first window and the second window; the target contact identifier includes at least one of the first contact identifier and the second contact identifier; the target contact is the contact corresponding to the target contact identifier, and the target video clip is the video clip corresponding to the target window.
In the embodiment of the application, under the condition that the first window and the second window are displayed simultaneously, the adjustment function is provided through the contact person identification in the window, so that a user can be helped to quickly adjust the shared party.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction realizes each process of the embodiment of the video sharing method, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so that each process of the video sharing method embodiment can be implemented, the same technical effect can be achieved, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the video sharing method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (11)

1. A video sharing method, characterized by comprising:
receiving a first input of a first object in a target picture by a user under the condition of displaying a playing interface of the target video, wherein the target picture is any video picture of the target video;
in response to the first input, associating the first object with a first contact;
transmitting a first video clip to the first contact, wherein the first video clip is a video clip generated based on target image content in at least two frames of target video frames, the target video frames are video frames comprising the first object in the target video, and the target image content comprises image content corresponding to the first object;
the first input includes a first sub-input, a second sub-input, a third sub-input, and a fourth sub-input, and the receiving a first input of a first object in a target picture by a user includes:
receiving a first sub-input of a first object in a target picture by a user;
the associating the first object with a first contact in response to the first input includes:
responsive to the first sub-input, displaying a first control;
Displaying at least one application program identifier under the condition that a second sub-input of a user to the first control is received, wherein each application program identifier corresponds to one application program;
displaying at least one contact person identifier corresponding to a target application program under the condition that a third sub-input of the target application identifier in the at least one application program identifier by a user is received, wherein the target application program is an application program corresponding to the target application program identifier;
under the condition that a fourth sub-input of a first contact person identifier in the at least one contact person identifier is received by a user, the first object is associated with the first contact person, wherein the first contact person is a contact person corresponding to the first contact person identifier;
the first control is displayed in a display area close to the first object.
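The four-sub-input flow of claim 1 can be sketched as follows. This is an illustrative sketch only: the class and method names (`VideoShareSession`, `on_first_sub_input`, etc.) and the dictionary-based data model are assumptions for demonstration, as the claim specifies only the sequence of inputs and their effects.

```python
# Hypothetical sketch of the claim 1 interaction sequence.
# The claim: select an object -> show a control -> show application
# identifiers -> show that application's contact identifiers ->
# associate the object with the chosen contact.

class VideoShareSession:
    def __init__(self, apps):
        # apps: application identifier -> list of contact identifiers
        self.apps = apps
        self.associations = {}  # object identifier -> contact identifier

    def on_first_sub_input(self, obj_id):
        # First sub-input on an object in the target picture:
        # a first control is displayed near that object.
        return {"control": "share", "near_object": obj_id}

    def on_second_sub_input(self):
        # Second sub-input on the first control: display the
        # application program identifiers.
        return sorted(self.apps)

    def on_third_sub_input(self, app_id):
        # Third sub-input on a target application program identifier:
        # display that application's contact identifiers.
        return self.apps[app_id]

    def on_fourth_sub_input(self, obj_id, contact_id):
        # Fourth sub-input on a contact identifier: associate the
        # first object with the first contact.
        self.associations[obj_id] = contact_id
        return dict(self.associations)


session = VideoShareSession({"chat_app": ["alice", "bob"]})
control = session.on_first_sub_input("person_1")
app_ids = session.on_second_sub_input()
contact_ids = session.on_third_sub_input("chat_app")
assoc = session.on_fourth_sub_input("person_1", "alice")
print(assoc)  # {'person_1': 'alice'}
```

The object-to-contact mapping built in the last step is what the subsequent claims manipulate (displaying, canceling, or moving contact identifiers).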
2. The method of claim 1, wherein after the associating the first object with the first contact, the method further comprises:
displaying a first contact identifier corresponding to the first contact in a first display area, wherein the first display area comprises a display area corresponding to the first object in the target picture.
3. The method of claim 2, wherein after the displaying the first contact identifier corresponding to the first contact in the first display area, the method further comprises:
receiving a second input by the user on the first contact identifier;
in response to the second input, canceling display of the first contact identifier and canceling the association of the first object with the first contact.
4. The method of claim 2, wherein after the displaying the first contact identifier corresponding to the first contact in the first display area, the method further comprises:
receiving a third input by the user on the first contact identifier;
in response to the third input, canceling display of the first contact identifier in the first display area and displaying the first contact identifier in a second display area, wherein the second display area comprises a display area corresponding to a second object in the target picture.
5. The method of claim 2, wherein after the displaying the first contact identifier corresponding to the first contact in the first display area, the method further comprises:
receiving a fourth input by the user on the first contact identifier;
in response to the fourth input, displaying the first contact identifier in a third display area while keeping the first contact identifier displayed in the first display area, wherein the third display area comprises a display area corresponding to a third object in the target picture.
6. The method of claim 2, wherein before the sending the first video clip to the first contact, the method further comprises:
displaying a first window, wherein the first window comprises a video picture of the first video clip and the first contact identifier.
7. The method of claim 6, wherein the sending the first video clip to the first contact comprises:
receiving a fifth input by the user on the first contact identifier in the first window;
in response to the fifth input, sending the first video clip to the contact corresponding to the first contact identifier.
8. The method of claim 6, wherein after the displaying the first window, the method further comprises:
receiving a sixth input by the user on the first contact identifier in the first window;
in response to the sixth input, canceling display of the first contact identifier in the first window and canceling the association of the first object with the first contact.
9. The method of claim 6, wherein, in a case that the first window and a second window are displayed simultaneously, the method further comprises:
receiving a seventh input by the user on a target contact identifier in a target window;
in response to the seventh input, updating a display position of the target contact identifier and updating an association relationship between a target contact and a target video clip;
wherein the second window comprises a video picture of a second video clip and a second contact identifier; the target window comprises at least one of: the first window, the second window; the target contact identifier comprises at least one of: the first contact identifier, the second contact identifier; the target contact is the contact corresponding to the target contact identifier, and the target video clip is the video clip corresponding to the target window; and the second video clip is a video clip generated based on second target image content in at least two second target video frames, wherein the second target video frames are video frames of the target video that comprise a second object, and the second target image content is image content corresponding to the second object.
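The re-association described in claim 9 (a seventh input moving a contact identifier from one window to another) can be illustrated with the following minimal sketch. The window and contact names, and the dictionary representation, are hypothetical; the claim defines only that the identifier's display position and its clip association both update.

```python
# Hypothetical sketch of claim 9: dragging a target contact identifier
# from one window to another re-binds the contact to the video clip of
# the destination window.

windows = {
    "first":  {"clip": "first_video_clip",  "contacts": ["alice"]},
    "second": {"clip": "second_video_clip", "contacts": ["bob"]},
}

def on_seventh_input(windows, contact, src, dst):
    windows[src]["contacts"].remove(contact)   # identifier leaves the source window
    windows[dst]["contacts"].append(contact)   # identifier now displayed in the target window
    return windows[dst]["clip"]                # clip now associated with the contact

new_clip = on_seventh_input(windows, "alice", "first", "second")
print(new_clip)  # second_video_clip
```

After the move, sending would deliver the destination window's clip to the moved contact, consistent with claims 6 and 7.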
10. A video sharing apparatus, comprising:
a first receiving module, configured to receive a first input by a user on a first object in a target picture in a case that a playing interface of a target video is displayed, wherein the target picture is any video picture of the target video;
a first response module, configured to associate the first object with a first contact in response to the first input; and
a sending module, configured to send a first video clip to the first contact, wherein the first video clip is a video clip generated based on target image content in at least two target video frames, the target video frames are video frames of the target video that comprise the first object, and the target image content comprises image content corresponding to the first object;
wherein the first input comprises a first sub-input, a second sub-input, a third sub-input and a fourth sub-input, and the first receiving module is specifically configured to receive the first sub-input by the user on the first object in the target picture;
the first response module comprises:
a first response subunit, configured to display a first control in response to the first sub-input;
a second response subunit, configured to display at least one application program identifier in a case that a second sub-input by the user on the first control is received, wherein each application program identifier corresponds to one application program;
a third response subunit, configured to display at least one contact identifier corresponding to a target application program in a case that a third sub-input by the user on a target application program identifier in the at least one application program identifier is received, wherein the target application program is the application program corresponding to the target application program identifier;
a fourth response subunit, configured to associate the first object with the first contact in a case that a fourth sub-input by the user on a first contact identifier in the at least one contact identifier is received, wherein the first contact is the contact corresponding to the first contact identifier;
wherein the first control is displayed in a display area adjacent to the first object.
11. An electronic device, comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video sharing method according to any one of claims 1-9.
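The first video clip in the claims is generated from the target video frames (frames containing the first object), keeping the image content corresponding to that object. A minimal sketch of that filtering step follows; the frame format and the `has_object` / `extract_object_content` callables are stand-ins for a real object detector and cropper, which the claims do not specify.

```python
# Illustrative sketch: build a clip from the frames that contain the
# first object. The claim requires at least two such target frames.

def generate_clip(frames, has_object, extract_object_content):
    target_frames = [f for f in frames if has_object(f)]
    if len(target_frames) < 2:  # "at least two target video frames"
        return None
    return [extract_object_content(f) for f in target_frames]


frames = [{"id": 0, "objects": ["cat"]},
          {"id": 1, "objects": []},
          {"id": 2, "objects": ["cat", "dog"]}]

clip = generate_clip(frames,
                     has_object=lambda f: "cat" in f["objects"],
                     extract_object_content=lambda f: ("cat", f["id"]))
print(clip)  # [('cat', 0), ('cat', 2)]
```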
CN202210496636.4A 2022-05-07 2022-05-07 Video sharing method and device and electronic equipment Active CN114928761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210496636.4A CN114928761B (en) 2022-05-07 2022-05-07 Video sharing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210496636.4A CN114928761B (en) 2022-05-07 2022-05-07 Video sharing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN114928761A CN114928761A (en) 2022-08-19
CN114928761B true CN114928761B (en) 2024-04-12

Family

ID=82807772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210496636.4A Active CN114928761B (en) 2022-05-07 2022-05-07 Video sharing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114928761B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110072071A (en) * 2019-05-31 2019-07-30 Nubia Technology Co Ltd Video recording interaction control method, device and computer-readable storage medium
CN111612873A (en) * 2020-05-29 2020-09-01 Vivo Mobile Communication Co Ltd GIF picture generation method and device and electronic equipment
CN112188260A (en) * 2020-10-26 2021-01-05 MIGU Culture Technology Co Ltd Video sharing method, electronic device and readable storage medium
CN113918522A (en) * 2021-10-15 2022-01-11 Vivo Mobile Communication Co Ltd File generation method and device and electronic equipment
CN114449327A (en) * 2021-12-31 2022-05-06 Beijing Baidu Netcom Science and Technology Co Ltd Video clip sharing method and device, electronic equipment and readable storage medium


Also Published As

Publication number Publication date
CN114928761A (en) 2022-08-19

Similar Documents

Publication Publication Date Title
WO2022156368A1 (en) Recommended information display method and apparatus
EP3407215B1 (en) Method, device, and computer-readable storage medium for collecting information resources
CN112099705B (en) Screen projection method and device and electronic equipment
CN110602565A (en) Image processing method and electronic equipment
CN110933511A (en) Video sharing method, electronic device and medium
CN111857504A (en) Information display method and device, electronic equipment and storage medium
CN107729098B (en) User interface display method and device
CN112035877A (en) Information hiding method and device, electronic equipment and readable storage medium
CN114928761B (en) Video sharing method and device and electronic equipment
CN111368329A (en) Message display method and device, electronic equipment and storage medium
CN114374663B (en) Message processing method and message processing device
CN113709300B (en) Display method and device
CN113419660A (en) Video resource processing method and device, electronic equipment and storage medium
US20240184434A1 (en) Display method and apparatus
CN112286615B (en) Information display method and device for application program
CN112035032B (en) Expression adding method and device
CN110807116A (en) Data processing method and device and data processing device
CN117215456A (en) Display method and device and electronic equipment
CN115718581A (en) Information display method and device, electronic equipment and storage medium
CN117111811A (en) Screenshot method and device, electronic equipment and readable storage medium
CN117648144A (en) Image processing method, device, electronic equipment and readable storage medium
CN115981535A (en) Content processing method, content processing device, electronic equipment and storage medium
CN117676256A (en) Video playing method and device, electronic equipment and readable storage medium
CN117676007A (en) Information processing method, information processing device, electronic equipment and readable storage medium
CN114520796A (en) Head portrait display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant