WO2024037491A1 - Media content processing method, apparatus, device, and storage medium - Google Patents

Media content processing method, apparatus, device, and storage medium

Info

Publication number
WO2024037491A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
media content
preset
work
expression object
Prior art date
Application number
PCT/CN2023/112878
Other languages
English (en)
French (fr)
Inventor
万世奇
舒斯起
Original Assignee
北京字跳网络技术有限公司 (Beijing Zitiao Network Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024037491A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 — Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 — Querying
    • G06F16/432 — Query formulation
    • G06F16/434 — Query formulation using image data, e.g. images, photos, pictures taken by a user
    • G06F16/435 — Filtering based on additional data, e.g. user or group profiles
    • G06F16/436 — Filtering using biological or physiological data of a human being, e.g. blood pressure, facial expression, gestures
    • G06F16/44 — Browsing; Visualisation therefor

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, for example, to media content processing methods, apparatuses, devices, and storage media.
  • Expression objects, such as emoticons, are mostly provided by applications, and users can use expression objects in applications to chat or comment.
  • Embodiments of the present disclosure provide media content processing methods, devices, storage media and equipment, which can optimize media content processing solutions and generate expression objects based on media content.
  • embodiments of the present disclosure provide a media content processing method, including:
  • embodiments of the present disclosure also provide a media content processing device, including:
  • the media content display module is configured to display the media content in the target media work on a preset page of the current application, where the media content includes pictures and/or videos;
  • a target content determination module, configured to determine at least one target media content in the media content; and
  • an expression object generation module, configured to generate at least one target expression object according to the at least one target media content in response to an expression object generation instruction for the at least one target media content, where the at least one target expression object is configured in the expression selection panel of the current application.
  • embodiments of the present disclosure also provide an electronic device, where the electronic device includes:
  • one or more processors;
  • a storage device configured to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the media content processing method provided by the embodiments of the present disclosure.
  • embodiments of the present disclosure also provide a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to execute the media content processing method provided by embodiments of the present disclosure.
  • Figure 1 is a schematic flowchart of a media content processing method provided by an embodiment of the present disclosure
  • Figure 2 is a schematic diagram of an interface provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of yet another media content processing method provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of another media content processing method provided by an embodiment of the present disclosure.
  • Figure 5 is another schematic diagram of an interface provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic flowchart of another media content processing method provided by an embodiment of the present disclosure.
  • Figure 7 is a schematic diagram of an interface interaction provided by an embodiment of the present disclosure.
  • Figure 8 is a schematic structural diagram of a media content processing device provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term “include” and its variations are open-ended, i.e., “including but not limited to.”
  • the term “based on” means “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • a prompt message is sent to the user to clearly remind the user that the operation requested will require the acquisition and use of the user's personal information. Therefore, users can autonomously choose whether to provide personal information to software or hardware such as electronic devices, applications, servers or storage media that perform the operations of the technical solution of the present disclosure based on the prompt information.
  • the method of sending prompt information to the user may be, for example, a pop-up window, and the prompt information may be presented in the form of text in the pop-up window.
  • the pop-up window can also contain a selection control for the user to choose "agree" or "disagree" to provide personal information to the electronic device.
  • FIG. 1 is a schematic flowchart of a media content processing method provided by an embodiment of the present disclosure. This embodiment is applicable to the case of media content processing.
  • the method can be executed by a media content processing device, which can be implemented in the form of software and/or hardware. Alternatively, it can be implemented by an electronic device.
  • the electronic device can be a mobile terminal such as a mobile phone, smart watch, tablet, or personal digital assistant, or a device such as a personal computer (PC) or server.
  • the method includes:
  • Step 101 Display the media content in the target media work on a preset page of the current application, where the media content includes pictures and/or videos.
  • the current application may be a preset application;
  • the preset page may be a page in the preset application;
  • the preset application may provide a media work display function and an expression object generation function.
  • the media work includes one or more media contents.
  • the media content may include pictures and/or videos, and may also include audio, etc.
  • the pictures can include static pictures, and can also include dynamic pictures, such as Graphics Interchange Format (GIF) animations.
  • the picture or video frame of the video may contain image content and/or text content, etc.
  • the target media work can be understood as the media work to which the media content currently displayed on the preset page belongs.
  • the media content currently displayed on the preset page may include all or part of the media content in the target media work, and its display method may be the same as or different from that of the target media work.
  • Media content can be displayed in thumbnail form, such as reducing image size or ratio, etc.
  • the publisher of the media work can set the attribute information of each media content in the media work before publishing the media work.
  • the attribute information can be used to indicate whether the corresponding media content is allowed to be used by other users to generate emoticon objects.
  • the attribute information corresponding to the media content displayed on the preset page indicates that the content is allowed to be used to generate expression objects; that is, generating expression objects from the media content in a media work is fully authorized by the work's publisher.
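As a minimal sketch of the publisher-set authorization check described above (class and field names are illustrative, not from the disclosure):

```python
from dataclasses import dataclass

@dataclass
class MediaContent:
    content_id: str
    kind: str                    # "picture" or "video"
    allow_emoticon: bool = True  # attribute set by the publisher before publishing

def emoticon_eligible(contents):
    """Keep only media contents whose attribute information permits
    other users to generate expression objects from them."""
    return [c for c in contents if c.allow_emoticon]
```

Only contents passing this filter would be offered on the preset page for expression object generation.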
  • Figure 2 is a schematic diagram of an interface provided by an embodiment of the present disclosure.
  • media content 202 is displayed; the displayed media content is part of the media content in the target media work, and the fourth media content is not fully displayed. The user can switch between different media contents by sliding left or right.
  • the target media work may also contain only one media content, such as a picture, and what is displayed at this time may be all or part of the media content.
  • Step 102 Determine at least one target media content among the media contents.
  • the target media content can be understood as the media material used to generate the expression object; it can be determined automatically by the current application or selected independently by the user.
  • determining at least one target media content in the media content includes: in response to a selection operation on the media content, determining the selected at least one media content as the at least one target media content.
  • selection controls corresponding to each media content, such as check boxes or positioning cursors, can also be displayed. Users can input selection operations through the selection controls to select the media content they want to use to generate expression objects. After receiving the user's selection operation on the media content, the selected media content is determined as the target media content in response to that operation.
  • a check box 203 can be displayed on the media content.
  • the user can select the media content to which the check box belongs by checking the check box.
  • the first media content in Figure 2 is selected.
  • the media content can be automatically determined as the target media content.
  • Step 103 In response to the expression object generation instruction for the at least one target media content, generate at least one target expression object according to the at least one target media content, where the at least one target expression object is configured in the expression selection panel of the current application.
  • the media resources corresponding to the target media content can be obtained, such as picture data or video frame data, and image processing, format conversion, encoding, or other related operations are performed on the obtained media resources to generate the corresponding target expression object.
  • the emoticon object may be an emoticon package or an emoticon sticker.
  • a preset generation control can also be displayed on the preset page, and the user can input expression object generation instructions by triggering the preset generation control.
  • an "Add" button 204 is displayed as a preset generation control. After the user clicks the "Add” button 204, the target expression object can be generated.
  • the target expression object is configured in the expression selection panel of the current application, and the user can select the target expression object based on the expression selection panel and apply the target expression object.
  • the generated target emoticon object can be used for information interaction, such as sending an instant message containing the target emoticon object, or publishing a comment message including the target emoticon object, etc.
  • the generated target expression object can be added to the expression library in the preset application, so that it can be displayed in the expression selection panel of the preset application for the user to select.
  • the target emoticon object can be added to a custom emoticon collection in the emoticon library.
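The expression library and its custom collection could be modeled as follows; this is a hedged sketch, and the collection name and methods are assumptions rather than the disclosure's API:

```python
class ExpressionLibrary:
    """Sketch of the in-app expression library backing the
    expression selection panel."""

    def __init__(self):
        # newly generated target expression objects go into a
        # custom expression collection
        self.collections = {"custom": []}

    def add_custom(self, expression_id):
        # skip duplicates so re-adding the same expression is a no-op
        if expression_id not in self.collections["custom"]:
            self.collections["custom"].append(expression_id)

    def panel_items(self):
        # what the expression selection panel shows, newest first
        return list(reversed(self.collections["custom"]))
```

Adding a generated target expression object then makes it immediately selectable in the panel.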
  • the target emoticon object can be used for information interaction between the current user (that is, the user who triggered the generation of the target emoticon object) and the publisher of the target media work, to improve the interactive experience between the two.
  • the media content processing method provided by the embodiments of the present disclosure displays the media content in the target media work on the preset page of the current application, where the media content includes pictures and/or videos, determines at least one target media content in the media content, and, in response to an expression object generation instruction for the target media content, generates the target expression object according to the target media content, the target expression object being configured in the expression selection panel of the current application.
  • users can generate expression objects from the media content in a media work while viewing the work, and the generated expression objects are configured in the expression selection panel of the current application. This meets users' personalized expression object generation needs and enriches expression object styles, so that users have more personalized choices when selecting expression objects from the expression selection panel, improving the user experience. It also enriches the uses of the media content in media works and improves the utilization of media content resources.
  • the target media work includes a target picture work, and the target picture work includes at least one picture, that is, a single picture or multiple pictures.
  • the target picture work also includes audio.
  • the audio can be used as background music, and multiple pictures can be played in turn in a preset order, and loop playback is supported.
  • Figure 3 is a schematic flowchart of another media content processing method provided by an embodiment of the present disclosure. Taking the target media work as the target picture work as an example, based on the above optional embodiments, the method may include:
  • Step 301 Display at least one picture in the target picture work on the preset page of the current application.
  • the target picture work includes multiple pictures
  • when the pictures in the target picture work are displayed, they can be displayed one by one or in batches, with the number of pictures displayed per batch less than or equal to the total number of pictures in the target picture work, and the display order of the pictures can be consistent with their order when the target picture work is displayed.
  • the media content 202 may be pictures in the target picture work.
  • Step 302 In response to the selection operation for at least one picture, determine the selected at least one picture as at least one target picture.
  • the target picture can be determined based on the user's selection operation on the check box.
  • Step 303 In response to the expression object generation instruction for at least one target picture, generate at least one target expression object based on the at least one target picture.
  • a single target expression object contains one or more target pictures.
  • each target picture can independently generate a target expression object; two or more target pictures can also be combined to generate a target expression object, for example, a target expression object with a dynamic image effect can be generated from two or more target pictures.
  • the target expression object includes a dynamic expression object
  • the dynamic expression object is generated from a dynamic picture in the at least one target picture and/or a plurality of static pictures in the at least one target picture.
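The two generation modes above (one static expression per picture, versus merging several pictures into one dynamic expression) can be sketched as follows; the dictionary fields and the fixed frame timing are assumptions for illustration:

```python
def generate_expression_objects(picture_ids, merge=False):
    """picture_ids: ordered ids of the target pictures selected by the user.
    merge=False -> one static expression object per picture ("Add separately")
    merge=True  -> one dynamic expression object animating all pictures
                   ("Merge and add")."""
    if merge and len(picture_ids) >= 2:
        return [{"type": "dynamic",
                 "frames": list(picture_ids),
                 "frame_duration_ms": 200}]  # assumed fixed per-frame timing
    # default: each selected picture yields its own static expression object
    return [{"type": "static", "source": pid} for pid in picture_ids]
```

A single selected picture always produces a static object; the merged path only applies once two or more pictures are selected.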
  • preset generation controls corresponding to the two generation methods can be displayed on the preset page, such as a first preset generation control (an "Add separately" button) and a second preset generation control (a "Merge and add" button), to meet users' different expression object generation needs.
  • the media content processing method provided by this embodiment of the present disclosure allows the user to independently select pictures in a picture work while viewing it and to generate one or more expression objects from the selected pictures, meeting users' personalized expression object generation needs, enriching expression object styles, and improving the user experience. It can also enrich the uses of pictures in picture works and improve the utilization of picture resources.
  • the target media work includes a target video work
  • the target video work includes a plurality of video frames.
  • Figure 4 is a schematic flowchart of another media content processing method provided by an embodiment of the present disclosure. Taking the target media work as the target video work as an example, based on the above optional embodiments, the method may include:
  • Step 401 Display the video progress information corresponding to the target video work on the preset page of the current application.
  • the video progress information may be a progress bar, a video frame sequence, video chapter information, etc. If it is a video frame sequence, the displayed sequence may include all or part of the video frames (possibly as thumbnails) in the target video work, and the order of the video frames in the sequence may be consistent with their playback order in the target video work.
  • Figure 5 is a schematic diagram of another interface provided by an embodiment of the present disclosure, which displays a video frame sequence 502 of a video work in a preset page 501.
  • Step 402 In response to the video frame selection operation for the video progress information, determine at least one target video frame set, where each target video frame set includes at least one video frame.
  • a certain number of single video frames in the target video work (which can be understood as one or more video screenshots) can be selected for expression object generation; one or more groups of consecutive video frames in the target video work (which can be understood as one or more video clips) can also be selected to generate expression objects.
  • the video frame selection operation may be a selection operation on a single video frame or a batch selection operation on multiple video frames.
  • this step may include: in response to a start frame selection operation and an end frame selection operation on the video progress information, determining a start video frame and an end video frame; and determining at least one target video frame set according to the start video frame and the end video frame, where each target video frame set includes a start video frame, an end video frame, and zero or at least one intermediate video frame, the intermediate video frames being located between the corresponding start video frame and end video frame.
  • the advantage of this setting is that video clips can be easily selected so that corresponding expression objects can be quickly generated based on the video clips.
  • the target video frame set may also include only at least one intermediate video frame.
  • the start frame selection operation is used to select the start video frame
  • the end frame selection operation is used to select the end video frame.
  • a start frame selection mark and an end frame selection mark can be displayed in association with the video progress information. The user determines the start video frame by adjusting the position pointed to by the start frame selection mark, and determines the end video frame by adjusting the position pointed to by the end frame selection mark.
  • the start frame selection mark and the end frame selection mark may be displayed in pairs, and a pair of marks corresponds to one target video frame set.
  • for a target video frame set, when there is no intermediate video frame between the start video frame and the end video frame, the set contains two video frames, namely the start video frame and the end video frame.
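The mapping from the two selection marks to a target video frame set can be sketched as a simple slice over the work's ordered frames; the index-based interface is an assumption for illustration:

```python
def target_video_frame_set(frames, start_idx, end_idx):
    """frames: the ordered video frames of the target video work.
    start_idx / end_idx: positions pointed to by the start-frame and
    end-frame selection marks. Returns the target video frame set:
    the start frame, zero or more intermediate frames, and the end frame."""
    if start_idx > end_idx:
        # the marks may be dragged past each other; normalize the range
        start_idx, end_idx = end_idx, start_idx
    return frames[start_idx:end_idx + 1]
```

When the marks select adjacent frames, the set contains exactly the start and end frames with no intermediate frame, matching the case described above.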
  • a video frame selection box 503 is displayed in association with the video frame sequence 502.
  • the left boundary of the video frame selection box 503 can be understood as the start frame selection mark, and the right boundary can be understood as the end frame selection mark.
  • the user can adjust the selected video frame range by dragging the left or right boundary of the video frame selection box 503.
  • Step 403 In response to the expression object generation instruction for at least one target video frame set, generate at least one target expression object according to at least one target video frame set.
  • a single target expression object contains one or more target video frame sets.
  • each target video frame set can independently generate a target expression object; two or more target video frame sets can also be combined to generate a target expression object.
  • the target expression object includes a dynamic expression object
  • the dynamic expression object is generated from a target video frame set in the at least one target video frame set.
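One way to generate a dynamic expression object from a target video frame set is to derive its per-frame timing from the source video's frame rate, so the expression plays back at roughly the clip's original speed. This is a hedged sketch; the field names and the timing scheme are assumptions, not the disclosure's method:

```python
def dynamic_expression_from_frame_set(frame_set, source_fps=30.0):
    """frame_set: the selected consecutive video frames (a video clip).
    Returns a description of the dynamic expression object, with each
    frame displayed for one source-frame interval."""
    duration_ms = round(1000.0 / source_fps)  # per-frame display time
    return {"type": "dynamic",
            "frame_count": len(frame_set),
            "frame_duration_ms": duration_ms,
            "total_ms": duration_ms * len(frame_set)}
```

A one-second clip at 30 fps thus yields a dynamic expression of about one second.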
  • preset generation controls corresponding to the two generation methods can be displayed on the preset page, such as a third preset generation control (a "Generate separately" button) and a fourth preset generation control (a "Merge generation" button), to meet users' different expression object generation needs.
  • the media content processing method provided by this embodiment of the present disclosure allows the user to independently select video frames in a video work while viewing it, and to generate one or more expression objects from the one or more video frame sets selected by the user, meeting users' personalized expression object generation needs, enriching expression object styles, and improving the user experience. It can also enrich the uses of the video content in video works and improve the utilization of video resources.
  • before generating the at least one target expression object according to the at least one target media content, the method further includes: generating at least one preview expression object according to the at least one target media content, and displaying the at least one preview expression object.
  • one or more preview emoticon objects can be displayed.
  • the preview emoticon object can be displayed in a preset page or in a target display area outside the preset page.
  • a target display area can be set above the preset page 201 and above the preset page 501, and the preview expression object can be displayed in the target display area.
  • the method further includes: receiving an editing operation on the at least one preview expression object; and generating at least one target expression object according to the at least one target media content includes: generating the at least one target expression object according to the at least one target media content and the editing result of the editing operation.
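Combining the target media content with the preview-stage editing results could look like the following sketch; the edit operation names (crop, caption) are illustrative assumptions, not an exhaustive list from the disclosure:

```python
def apply_preview_edits(target_content_id, edits):
    """edits: ordered editing results collected on the preview
    expression object, e.g. {"op": "crop", "box": (l, t, r, b)} or
    {"op": "caption", "text": "..."}. Later edits of the same kind
    override earlier ones. Returns a description of the final
    target expression object."""
    result = {"source": target_content_id, "crop": None, "caption": None}
    for edit in edits:
        if edit["op"] == "crop":
            result["crop"] = edit["box"]
        elif edit["op"] == "caption":
            result["caption"] = edit["text"]
    return result
```

The target expression object is then rendered from the source content with these edits applied.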
  • before displaying the media content in the target media work on the preset page of the current application, the method further includes: on the target display page of the target media work in the current application, displaying the preset page in response to a first preset trigger operation for the preset page.
  • the advantage of this setting is that it allows users to easily enter the preset page while viewing the target media works.
  • the first preset trigger operation may be a trigger operation on a preset entrance of the preset page, or may be an operation, such as a double-click operation, acting on the target display page to trigger entry into the preset page.
  • the preset entrance can be an entrance control on the target display page, such as a "convert to emoticon" button; after the user triggers the preset entrance, the display of the preset page can be triggered.
  • the size of the preset page can be the same as or different from the size of the target display page.
  • the preset page can be superimposed on the target display page.
  • the target media work can continue to be displayed on the target display page, and the user can also continue to view the target media work.
  • the target media work may carry a work tag
  • the work tag may be used, for example, to indicate the type of the media work to which it belongs or a topic related to the media work to which it belongs.
  • the publisher of the target media work can add a work tag to the target media work.
  • the preset identifier may include work tags related to expressions, which may also be called preset expression tags, for example, "#expression#" and similar expression-related tags.
  • the method further includes: in the process of displaying the target media work on the target display page, in response to a second preset trigger operation, displaying a preset control display area on the target display page; determining whether the target media work carries the preset identifier; and, if the target media work carries the preset identifier, displaying the preset entrance of the preset page at a first preset display position in the preset control display area.
  • the advantage of this setting is that the display position of the preset entrance can be flexibly determined according to whether the target media work carries the preset identifier.
  • the preset control display area may also include preset interaction controls, such as watch together controls, like controls, sharing controls, comment controls, etc., and may also include preset function controls, such as save controls, etc.
  • the first preset display position may be a preset fixed position in the preset control display area, or may be a relative position determined based on the display position of the preset interactive control and/or the preset function control in the preset control display area.
  • the relative positional relationship between the preset display position and the positions of these controls can be preset.
  • the method further includes: if the target media work does not carry the preset identifier, displaying the preset entrance of the preset page at a second preset display position in the preset control display area, where the display priority of the first preset display position is higher than that of the second preset display position.
  • FIG. 6 is a schematic flowchart of another media content processing method provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure is explained based on the optional solutions in the above embodiments.
  • the method includes the following steps:
  • Step 601 In the process of displaying the target media work in the target display page of the current application, in response to the second preset triggering operation, display the preset control display area in the target display page.
  • Figure 7 is a schematic diagram of an interface interaction provided by an embodiment of the present disclosure. Taking the target media work as a target picture work as an example, the target picture work is displayed on the target display page 701. When the second picture in the target picture work is displayed, the user inputs a long-press operation (the preset trigger operation) on the target display page 701, and the preset control display area 702 is displayed on the target display page 701.
  • Step 602 Determine whether the target media work carries a preset identifier. If so, perform step 603; if not, perform step 604.
  • step 602 can also be executed before the preset control display area is displayed on the target display page.
  • directly determining the display position of the preset entrance based on the determination result of step 602 can increase the display speed of the preset entrance.
  • step 603 can be executed.
  • Step 603 Display the preset entrance of the preset page at the first preset display position in the preset control display area, and execute step 605.
  • the display priority of a preset display position may be determined based on its relative positional relationship with the positions of the preset interactive controls and/or preset function controls in the preset control display area.
  • the preset entrance is the add emoticon button 703, and the preset function control is the save button 704.
  • if the target picture work carries the preset identifier, the add emoticon button 703 is displayed before the save button 704; if it does not, the add emoticon button 703 is displayed after the save button 704, for example, the add emoticon button 703 and the save button 704 exchange positions, or the add emoticon button 703 in the figure is replaced with another control such as the watch together control, with the add emoticon button 703 placed after the save button 704 and displayed after the user inputs a leftward sliding operation.
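The tag-dependent ordering of the control display area can be sketched as below; the control names other than "add_emoticon" and "save" are taken from the preset interaction controls mentioned earlier, and the exact ordering is an illustrative assumption:

```python
def control_display_area(has_preset_identifier):
    """Order of controls in the preset control display area: the
    add-emoticon entrance takes the higher-priority slot (before the
    save control) when the work carries a preset expression tag, and
    the lower-priority slot (after it) otherwise."""
    if has_preset_identifier:
        return ["add_emoticon", "save", "share", "comment"]
    return ["save", "add_emoticon", "share", "comment"]
```

This mirrors steps 602-604: the determination result selects which preset display position the entrance occupies.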
  • Step 604 Display the preset entrance of the preset page at the second preset display position in the preset control display area, where the display priority of the first preset display position is higher than that of the second preset display position.
  • Step 605 In response to a trigger operation on the preset entrance of the preset page, display the preset page.
  • Step 606 Display the media content in the target media work on the preset page.
  • Step 607 In response to a selection operation on media content, determine at least one selected media content as at least one target media content.
  • the user clicks on the check box of the second image containing the moon pattern to determine the second image as the target media content.
  • Step 608 Generate at least one preview expression object according to at least one target media content, and display at least one preview expression object.
  • a preview expression object is generated based on the second picture and displayed in the target display area 706.
  • a preview emoticon object can be generated based on the currently newly selected picture and displayed.
  • the user can switch the display of different preview emoticon objects by triggering the media content in the preset page.
  • Step 609: Receive an editing operation for the at least one preview expression object.
  • Step 610: In response to the expression object generation instruction for the at least one target media content, generate at least one target expression object according to the at least one target media content and the editing result of the editing operation.
  • the preset control display area 702 may include a "Synthesize and generate" button that synthesizes multiple pictures or multiple video frames into one expression object, as well as a "Generate separately" button that synthesizes a separate expression object from each selection, allowing users to choose the expression object synthesis result that suits their needs.
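The two synthesis modes behind these buttons can be sketched as follows, modeling each expression object as a simple dict of frames. The function name and data shape are illustrative assumptions, not the actual implementation.

```python
from typing import Dict, List


def synthesize_expression_objects(targets: List[str], mode: str) -> List[Dict]:
    """Synthesize expression objects from the selected media contents.

    mode "combine":  one expression object containing every selected
                     picture/video frame ("Synthesize and generate").
    mode "separate": one expression object per selection
                     ("Generate separately").
    """
    if mode == "combine":
        return [{"frames": list(targets)}]
    if mode == "separate":
        return [{"frames": [t]} for t in targets]
    raise ValueError(f"unknown synthesis mode: {mode}")
```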
  • Step 611: Configure the at least one target expression object in the expression selection panel of the current application.
  • the target expression object can be added to the expression library of the current application, so that the target expression object is configured in the expression selection panel of the current application, and the flow returns to the target display page to continue displaying the target media work.
  • a notification of successful addition can also be displayed on the target display page, such as "Emoticon added successfully" as shown in Figure 7.
  • the user can conveniently trigger the display of the control display area while browsing media works, and the preset display position of the entry for generating expression objects is determined based on whether the media work carries the preset expression tag.
  • when the user wants to generate an expression object from the media content in a media work, he or she can trigger the preset entry to enter the preset page, select media content on the preset page, and view the preview expression objects.
  • users can also edit the previewed expression objects to generate personalized expression objects that better suit their own needs, and add them to the expression library so that they can later find and use these expression objects in the library. This helps improve the interactive experience with interaction objects.
  • Figure 8 is a schematic structural diagram of a media content processing device provided by an embodiment of the present disclosure. As shown in Figure 8, the device includes: a media content display module 801, a target content determination module 802, and an expression object generation module 803.
  • the media content display module 801 is configured to display the media content in the target media work in a preset page, where the media content includes pictures and/or videos; the target content determination module 802 is configured to At least one target media content is determined in the media content; the expression object generation module 803 is configured to generate a target expression object according to the at least one target media content in response to the expression object generation instruction.
  • the media content processing device displays the media content in the target media work in the preset page of the current application, where the media content includes pictures and/or videos; determines at least one target media content in the media content; and, in response to the expression object generation instruction for the target media content, generates the target expression object according to the target media content, the target expression object being configured in the expression selection panel of the current application.
  • the target content determination module is configured to determine at least one selected media content as at least one target media content in response to a selection operation on the media content.
  • the target media work includes a target picture work, and the target picture work includes at least one picture; the media content display module is configured to display at least one picture in the target picture work in a preset page of the current application; the target content determination module is configured to, in response to a selection operation on the at least one picture, determine the selected at least one picture as at least one target picture.
  • the target picture works include moving pictures and/or static pictures.
  • the expression object generation module 803 is configured to generate at least one target expression object according to the target media content in the following manner: generating at least one target expression object according to the at least one target picture, where a single target expression object contains one or more target pictures from the at least one target picture.
  • the target media work includes a target video work, and the target video work includes a plurality of video frames; the media content display module is configured to display the video progress information corresponding to the target video work in a preset page of the current application.
  • the expression object generation module 803 is configured to generate at least one target expression object according to the at least one target media content in the following manner: generating at least one target expression object according to the at least one target video frame set, where a single target expression object contains one or more target video frame sets from the at least one target video frame set.
  • the at least one target expression object includes a dynamic expression object, and the dynamic expression object is generated by at least one of the following: a dynamic picture in the at least one target picture, a plurality of static pictures in the at least one target picture, and a target video frame set in the at least one target video frame set.
  • the target content determination module includes: a video frame determination unit configured to determine a start video frame and an end video frame in response to a start frame selection operation and an end frame selection operation for the video progress information; video A frame set determining unit configured to determine at least one target video frame set according to the starting video frame and the end video frame.
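The start/end selection handled by the video frame determination unit can be sketched as an inclusive range filter over frame timestamps. This is an assumed toy model (timestamps as plain numbers), not the disclosed implementation.

```python
def select_target_frame_set(frame_timestamps, start_ts, end_ts):
    """Return the target video frame set bounded by the start video frame
    and the end video frame chosen on the video progress bar (inclusive)."""
    if end_ts < start_ts:
        # Tolerate selection handles dragged past each other.
        start_ts, end_ts = end_ts, start_ts
    return [ts for ts in frame_timestamps if start_ts <= ts <= end_ts]
```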
  • the device further includes: a preview expression object generation module, configured to, before at least one target expression object is generated according to the at least one target media content, generate at least one preview expression object according to the at least one target media content and display the at least one preview expression object.
  • the device further includes: an editing operation receiving module, configured to receive an editing operation for the at least one preview expression object; the expression object generation module 803 is configured to generate at least one target expression object according to the at least one target media content in the following manner: generating at least one target expression object according to the at least one target media content and the editing result of the editing operation.
  • the device also includes: a preset page display module, configured to, before the media content in the target media work is displayed on the preset page of the current application, display the preset page on the target display page of the target media work of the current application in response to a first preset triggering operation for the preset page.
  • the device further includes: a display area display module, configured to, during the process of displaying the target media work in the target display page, display a preset control display area in the target display page in response to a second preset triggering operation, and determine whether the target media work carries the preset identifier; and a first preset entry display module, configured to, if the target media work carries the preset identifier, display the preset entry of the preset page at the first preset display position in the preset control display area.
  • the device further includes: a second preset entry display module, configured to, after it is determined whether the target media work carries the preset identifier, if the target media work does not carry the preset identifier, display the preset entry of the preset page at the second preset display position in the preset control display area, where the display priority of the first preset display position is higher than the display priority of the second preset display position.
  • the media content processing device provided by the embodiments of the present disclosure can execute the media content processing method provided by any embodiment of the present disclosure, and has corresponding functional modules and effects for executing the method.
  • each unit and module included in the above device is divided only according to functional logic, but the division is not limited thereto as long as the corresponding functions can be achieved; in addition, the specific names of the functional units are only for ease of distinguishing them from one another and are not used to limit the scope of protection of the embodiments of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers.
  • the electronic device shown in FIG. 9 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 900 may include a processing device (such as a central processing unit or a graphics processor) 901, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900.
  • the processing device 901, ROM 902 and RAM 903 are connected to each other via a bus 904.
  • An input/output (I/O) interface 905 is also connected to bus 904.
  • the following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 907 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 908 including, for example, a magnetic tape and a hard disk; and a communication device 909.
  • the communication device 909 may allow the electronic device 900 to communicate wirelessly or wiredly with other devices to exchange data.
  • although FIG. 9 illustrates an electronic device 900 having various devices, it should be understood that it is not required to implement or provide all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 909, or from storage device 908, or from ROM 902.
  • when the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the electronic device provided by the embodiments of the present disclosure and the media content processing method provided by the above embodiments belong to the same inventive concept. For technical details not described in detail in this embodiment, reference may be made to the above embodiments, and this embodiment has the same effects as the above embodiments.
  • Embodiments of the present disclosure provide a computer storage medium on which a computer program is stored.
  • the program is executed by a processor, the media content processing method provided by the above embodiments is implemented.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: electrical connections having one or more wires, portable computer disks, hard drives, RAM, ROM, erasable programmable read-only memory (EPROM), flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a computer-readable medium can be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
  • the client and server can communicate using any currently known or future developed network protocol, such as HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device: displays the media content in the target media work in the preset page of the current application, where the media content includes pictures and/or videos; determines at least one target media content in the media content; and, in response to the expression object generation instruction for the at least one target media content, generates at least one target expression object according to the at least one target media content, where the at least one target expression object is configured in the expression selection panel of the current application.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (such as through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or operations, or can be implemented using a combination of special purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware.
  • the name of a module does not constitute a limitation on the module itself under certain circumstances.
  • the target content determination module can also be described as "a module that determines at least one target media content in the media content.”
  • exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, RAM, ROM, EPROM or flash memory, optical fiber, portable CD-ROM, optical storage device, magnetic storage device, or any suitable combination of the foregoing.
  • a media content processing method, including: displaying the media content in the target media work in a preset page of the current application, where the media content includes pictures and/or videos; determining at least one target media content in the media content; and, in response to an expression object generation instruction for the at least one target media content, generating at least one target expression object according to the at least one target media content, where the at least one target expression object is configured in the expression selection panel of the current application.
  • determining at least one target media content in the media content includes: in response to a selection operation on the media content, determining the selected at least one media content as at least one target media content.
  • the target media work includes a target picture work, and the target picture work includes at least one picture; displaying the media content in the target media work in a preset page of the current application includes: displaying at least one picture in the target picture work in the preset page of the current application; and, in response to the selection operation on the media content, determining the selected at least one media content as at least one target media content includes: in response to a selection operation on the at least one picture, determining the selected at least one picture as at least one target picture.
  • the target picture work includes moving pictures and/or static pictures.
  • generating at least one target expression object according to the target media content includes: generating at least one target expression object according to the at least one target picture, where a single target expression object contains one or more target pictures from the at least one target picture.
  • the target media work includes a target video work, and the target video work includes a plurality of video frames; displaying the media content in the target media work in a preset page of the current application includes: displaying the video progress information corresponding to the target video work in the preset page of the current application; and, in response to the selection operation on the media content, determining the selected at least one media content as at least one target media content includes: in response to a video frame selection operation on the video progress information, determining at least one target video frame set, where each target video frame set includes at least one video frame.
  • generating at least one target expression object according to the at least one target media content includes: generating at least one target expression object according to the at least one target video frame set, where a single target expression object contains one or more target video frame sets from the at least one target video frame set.
  • the at least one target expression object includes a dynamic expression object generated by at least one of the following: a dynamic picture in the at least one target picture, a plurality of static pictures in the at least one target picture, and a target video frame set in the at least one target video frame set.
  • determining at least one target video frame set in response to a video frame selection operation for the video progress information includes: in response to a start frame selection for the video progress information The operation and the end frame selection operation determine a start video frame and an end video frame; determine at least one target video frame set according to the start video frame and the end video frame.
  • in the present disclosure, before generating at least one target expression object according to the at least one target media content, the method further includes: generating at least one preview expression object according to the at least one target media content, and displaying the at least one preview expression object.
  • the method further includes: receiving an editing operation for the at least one preview expression object; and generating at least one target expression object according to the at least one target media content includes: generating at least one target expression object according to the at least one target media content and the editing result of the editing operation.
  • before displaying the media content in the target media work in the preset page of the current application, the method further includes: in the target display page of the target media work in the current application, displaying the preset page in response to a first preset triggering operation for the preset page.
  • the method further includes: in the process of displaying the target media work in the target display page, in response to a second preset triggering operation, displaying a preset control display area in the target display page; determining whether the target media work carries the preset identifier; and, if the target media work carries the preset identifier, displaying the preset entry of the preset page at the first preset display position in the preset control display area.
  • the method further includes: if the target media work does not carry the preset identifier, displaying the preset entry of the preset page at the second preset display position in the preset control display area, where the display priority of the first preset display position is higher than the display priority of the second preset display position.
  • a media content processing device, including: a media content display module, configured to display the media content in the target media work in a preset page of the current application, where the media content includes pictures and/or videos; a target content determination module, configured to determine at least one target media content in the media content; and an expression object generation module, configured to, in response to an expression object generation instruction for the at least one target media content, generate at least one target expression object according to the at least one target media content, where the at least one target expression object is configured in the expression selection panel of the current application.
  • an electronic device includes: one or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the media content processing method provided by the embodiments of the present disclosure.
  • a storage medium containing computer-executable instructions is also provided; the computer-executable instructions, when executed by a computer processor, are used to perform the media content processing method provided by the embodiments of the present disclosure.

Abstract

Embodiments of the present disclosure disclose a media content processing method, apparatus, device, and storage medium. The method includes: displaying the media content in a target media work in a preset page of the current application, where the media content includes pictures and/or videos; determining at least one target media content in the media content; and, in response to an expression object generation instruction for the at least one target media content, generating at least one target expression object according to the target media content, where the at least one target expression object is configured in the expression selection panel of the current application.

Description

Media content processing method, apparatus, device, and storage medium
This application claims priority to Chinese Patent Application No. 202210977422.9, filed with the Chinese Patent Office on August 15, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, for example, to a media content processing method, apparatus, device, and storage medium.
Background
With the rapid development of Internet technology, communication between users has become increasingly convenient, and users can exchange all kinds of information through applications.
Interaction based on expression objects such as emoticon packs is a way of expressing feelings with pictures. Compared with text messages, expression objects can convey a user's feelings more vividly and aptly, and have been widely used. Expression objects are mostly provided by applications, and users can use the expression objects in an application for chatting, commenting, and so on.
Summary
Embodiments of the present disclosure provide a media content processing method, apparatus, storage medium, and device, which can optimize media content processing schemes and generate expression objects from media content.
In a first aspect, embodiments of the present disclosure provide a media content processing method, including:
displaying the media content in a target media work in a preset page of the current application, where the media content includes pictures and/or videos;
determining at least one target media content in the media content; and
in response to an expression object generation instruction for the at least one target media content, generating at least one target expression object according to the at least one target media content, where the at least one target expression object is configured in the expression selection panel of the current application.
In a second aspect, embodiments of the present disclosure further provide a media content processing apparatus, including:
a media content display module, configured to display the media content in a target media work in a preset page of the current application, where the media content includes pictures and/or videos;
a target content determination module, configured to determine at least one target media content in the media content; and
an expression object generation module, configured to, in response to an expression object generation instruction for the at least one target media content, generate at least one target expression object according to the at least one target media content, where the at least one target expression object is configured in the expression selection panel of the current application.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processors; and
a storage device configured to store one or more programs,
where, when the one or more programs are executed by the one or more processors, the one or more processors implement the media content processing method provided by the embodiments of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the media content processing method provided by the embodiments of the present disclosure.
Brief Description of the Drawings
Throughout the drawings, identical or similar reference numerals denote identical or similar elements. It should be understood that the drawings are schematic and that parts and elements are not necessarily drawn to scale.
Figure 1 is a schematic flowchart of a media content processing method provided by an embodiment of the present disclosure;
Figure 2 is a schematic diagram of an interface provided by an embodiment of the present disclosure;
Figure 3 is a schematic flowchart of another media content processing method provided by an embodiment of the present disclosure;
Figure 4 is a schematic flowchart of yet another media content processing method provided by an embodiment of the present disclosure;
Figure 5 is a schematic diagram of another interface provided by an embodiment of the present disclosure;
Figure 6 is a schematic flowchart of yet another media content processing method provided by an embodiment of the present disclosure;
Figure 7 is a schematic diagram of an interface interaction provided by an embodiment of the present disclosure;
Figure 8 is a schematic structural diagram of a media content processing apparatus provided by an embodiment of the present disclosure;
Figure 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described below with reference to the drawings. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method implementations of the present disclosure may be performed in different orders and/or in parallel.
The term "include" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
Note that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units.
Note that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between multiple apparatuses in the implementations of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
It can be understood that before the technical solutions disclosed in the embodiments of the present disclosure are used, users should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and their authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the requested operation will require obtaining and using the user's personal information. In this way, the user can autonomously choose, according to the prompt information, whether to provide personal information to software or hardware such as an electronic device, application, server, or storage medium that performs the operations of the technical solutions of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in the form of a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may also carry a selection control allowing the user to choose "agree" or "disagree" to provide personal information to the electronic device.
It can be understood that the above process of notifying the user and obtaining user authorization is only illustrative and does not limit the implementations of the present disclosure; other ways that satisfy relevant laws and regulations may also be applied to the implementations of the present disclosure.
It can be understood that the data involved in this technical solution (including but not limited to the data itself and the acquisition or use of the data) should comply with the requirements of corresponding laws and regulations and relevant provisions.
Figure 1 is a schematic flowchart of a media content processing method provided by an embodiment of the present disclosure. This embodiment is applicable to media content processing scenarios. The method may be performed by a media content processing apparatus, which may be implemented in software and/or hardware and, optionally, by an electronic device. The electronic device may be a mobile terminal such as a mobile phone, smart watch, tablet computer, or personal digital assistant, or a device such as a personal computer (PC) or server.
As shown in Figure 1, the method includes:
Step 101: Display the media content in a target media work in a preset page of the current application, where the media content includes pictures and/or videos.
In the embodiments of the present disclosure, the current application may be a preset application, and the preset page may be a page in the preset application; the preset application may provide a media work display function and an expression object generation function. Illustratively, when a user needs to use the expression object generation function, the preset page can be opened in the preset application. A media work includes one or more media contents; the media content may include pictures and/or videos, and may also include audio, etc., without specific limitation. The pictures may include static pictures and may also include dynamic pictures, such as Graphics Interchange Format (GIF) animations. A picture, or a video frame of a video, may contain image content and/or text content, etc.
In the embodiments of the present disclosure, the target media work can be understood as the media work to which the media content currently displayed in the preset page belongs. The media content currently displayed in the preset page may include all or part of the media content in the target media work, and the display manner may be the same as or different from that of the target media work; for example, relative to the display manner of the target media work, the media content may be displayed as thumbnails, e.g., with reduced image size or scale.
Before publishing a media work, the publisher of the media work can set attribute information for each media content in the work; this attribute information can be used to indicate whether the corresponding media content is allowed to be used by other users to generate expression objects. The attribute information corresponding to the media content displayed in the preset page indicates that it is allowed to be used to generate expression objects; that is, the function of generating expression objects from the media content in a media work is fully authorized by the publisher of the media work.
Illustratively, Figure 2 is a schematic diagram of an interface provided by an embodiment of the present disclosure. As shown in Figure 2, media content 202 is displayed in a preset page 201, and the displayed media content is part of the media content in the target media work, of which the fourth media content is not fully displayed; the display of different media contents can be switched by operations such as swiping left or right. The target media work may also contain only one media content, such as a single picture, in which case all or part of that media content may be displayed.
步骤102、在所述媒体内容中确定至少一个目标媒体内容。
本公开实施例中,目标媒体内容可以理解为用于生成表情对象的媒体素材,目标媒体内容的确定可以由当前应用程序自动确定,也可由用户自主确定。
可选的,所述在所述媒体内容中确定至少一个目标媒体内容,包括:响应于针对所述媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容。这样设置的好处在于,可以允许用户更加自由地进行目标媒体内容的选择,以实现表情对象的个性化定制。
示例性的,在预设页面中展示媒体内容的同时,还可展示每个媒体内容对应的选择控件,如复选框或定位光标等,用户可以通过选择控件输入选择操作,以选中自己想要用于生成表情对象的媒体内容。在接收到用户针对媒体内容的选择操作后,响应于用户的该选择操作,将被选中的媒体内容确定为目标媒体内容。
如图2所示,可以在媒体内容上展示复选框203,用户可以通过勾选复选框的方式对复选框所属的媒体内容进行选择,如图2中的第一个媒体内容被选中。
可选的,在目标媒体作品中仅包括一个媒体内容的情况下,可以自动将该媒体内容确定为目标媒体内容。
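上述“勾选复选框确定目标媒体内容、作品仅含一个媒体内容时自动选中”的逻辑,可以用如下示意性的Python代码草图表示(函数名与参数均为假设):

```python
def toggle_selection(selected_ids, content_id):
    """模拟勾选/取消勾选复选框:返回新的选中集合。"""
    new_ids = set(selected_ids)
    if content_id in new_ids:
        new_ids.discard(content_id)
    else:
        new_ids.add(content_id)
    return new_ids

def determine_targets(all_contents, selected_ids):
    """作品仅含一个媒体内容时自动选中;否则按用户勾选确定目标媒体内容。"""
    if len(all_contents) == 1:
        return list(all_contents)
    return [c for c in all_contents if c in selected_ids]

selected = set()
selected = toggle_selection(selected, "img1")  # 勾选第一张图片
selected = toggle_selection(selected, "img2")  # 勾选第二张图片
selected = toggle_selection(selected, "img2")  # 再次点击,取消勾选
targets = determine_targets(["img1", "img2", "img3"], selected)
```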
步骤103、响应于针对所述至少一个目标媒体内容的表情对象生成指令,根据所述至少一个目标媒体内容生成至少一个目标表情对象,其中,所述至少一个目标表情对象被配置在所述当前应用程序的表情选择面板。
示例性的,在接收到针对目标媒体内容的表情对象生成指令后,可以获取所针对的目标媒体内容对应的媒体资源,如图片数据或视频帧数据等,对所获取的媒体资源进行图像处理、格式转换或编码等相关操作,生成对应的目标表情对象。其中,表情对象可以为表情包或表情贴纸等对象。
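上述“获取媒体资源并经图像处理、格式转换或编码生成表情对象”的流程,可以用如下示意性的Python代码草图表示;其中各处理步骤均以占位逻辑表示,generate_emoji等名称为假设:

```python
def generate_emoji(resource, kind):
    """示意性的表情对象生成流程:图像处理、格式转换与编码均以占位步骤表示。"""
    processed = f"processed({resource})"        # 占位:图像处理
    fmt = "gif" if kind == "frames" else "png"  # 占位:多帧资源转为动图,单图转为静态格式
    return {"payload": processed, "format": fmt, "type": "sticker"}

emoji = generate_emoji("photo_1", kind="image")
```

实际实现中,该步骤通常还涉及具体的图像编解码库,这里不作限定。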
可选的,在预设页面中还可以展示预设生成控件,用户可以通过触发预设生成控件来输入表情对象生成指令。如图2所示,显示作为预设生成控件的“添加”按钮204,用户点击“添加”按钮204后,可生成目标表情对象。
本公开实施例中,目标表情对象被配置在所述当前应用程序的表情选择面板,用户可以基于表情选择面板进行目标表情对象的选择,并对目标表情对象进行应用。可选的,所生成的目标表情对象可用于信息交互,如发送包含目标表情对象的即时消息,又如发布包含目标表情对象的评论消息等。所生成的目标表情对象可被添加至预设应用程序中的表情库中,从而可以显示在预设应用程序中的表情选择面板供用户选择。例如,目标表情对象可以添加至表情库中的自定义表情集合中。可选的,目标表情对象可用于当前用户(也即触发生成目标表情对象的用户)与目标媒体作品的发布者之间的信息交互,提升两者之间的交互体验。
本公开实施例提供的媒体内容处理方法,在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,其中,媒体内容包括图片和/或视频,在媒体内容中确定至少一个目标媒体内容,响应于针对目标媒体内容的表情对象生成指令,根据目标媒体内容生成目标表情对象,目标表情对象被配置在当前应用程序的表情选择面板。通过采用上述技术方案,可以使得用户在查看媒体作品的过程中,根据媒体作品中的媒体内容生成表情对象,并将所生成的表情对象配置在当前应用程序的表情选择面板中,满足用户的个性化表情对象生成需求,丰富表情对象样式,使得用户想要在表情选择面板中选择表情对象进行应用时,可以有更多的个性化的选择,提升用户体验,同时也可丰富媒体作品中媒体内容的用途,提升媒体内容资源的利用率。
在一些实施例中,所述目标媒体作品包括目标图片作品,所述目标图片作品中包括至少一张图片,也即单张图片或多张图片。示例性的,目标图片作品中还可包括音频,在展示目标图片作品时,音频可作为背景音乐,多张图片按照预设顺序依次轮流播放,并支持循环播放。
图3为本公开实施例所提供的又一种媒体内容处理方法的流程示意图,以目标媒体作品为目标图片作品为例,在上述可选实施例基础上进行说明,该方法可包括:
步骤301、在当前应用程序的预设页面中展示目标图片作品中的至少一张图片。
示例性的,目标图片作品中包括多张图片的情况下,在对目标图片作品中的图片进行展示时,可以进行逐张展示,也可进行批量展示,批量展示的数量小于或等于目标图片作品中的图片总数量,图片的展示顺序可以与展示目标图片作品时的图片的顺序一致。
示例性的,如图2所示,媒体内容202可以是目标图片作品中的图片。
步骤302、响应于针对至少一张图片的选择操作,将被选中的至少一张图片确定为至少一张目标图片。
示例性的,如图2所示,可以根据用户对复选框的选择操作,确定目标图片。
步骤303、响应于针对至少一张目标图片的表情对象生成指令,根据所述至少一张目标图片生成至少一个目标表情对象。
可选的,单个目标表情对象中包含一张或多张目标图片。
示例性的,每张目标图片均可以单独生成一个目标表情对象;两张或更多的目标图片也可以合并生成一个目标表情对象,例如可以是基于两张或更多的目标图片生成动态图效果的目标表情对象。
可选的,目标表情对象包括动态表情对象,所述动态表情对象由所述至少一张目标图片中的动态图片和/或至少一张目标图片中的多张静态图片生成。
可选的,可以在预设页面中显示两种生成方式分别对应的预设生成控件,例如第一预设生成控件(“分别添加”按钮)以及第二预设生成控件(“合并添加”按钮),以满足用户不同的表情对象生成需求。
本公开实施例提供的媒体内容处理方法,可以使得用户在查看图片作品的过程中,对图片作品中的图片进行自主选择,并根据用户选择的一张或多张图片生成一个或多个表情对象,满足用户的个性化表情对象生成需求,丰富表情对象样式,提升用户体验,同时也可丰富图片作品中图片的用途,提升图片资源的利用率。
在一些实施例中,所述目标媒体作品包括目标视频作品,所述目标视频作品中包括多个视频帧。
图4为本公开实施例所提供的另一种媒体内容处理方法的流程示意图,以目标媒体作品为目标视频作品为例,在上述可选实施例基础上进行说明,该方法可包括:
步骤401、在当前应用程序的预设页面中展示目标视频作品对应的视频进度信息。
示例性的,视频进度信息可以是进度条,也可以是视频帧序列,或者视频章节信息等。若为视频帧序列,所展示的视频帧序列中可以包含目标视频作品中的全部或部分视频帧(可以是缩略图),视频帧序列中的视频帧的顺序可以与所述视频帧在目标视频作品中的播放顺序一致。
图5为本公开实施例所提供的另一种界面示意图,在预设页面501中展示视频作品的视频帧序列502。
步骤402、响应于针对视频进度信息的视频帧选择操作,确定至少一个目标视频帧集合,其中,每个目标视频帧集合中包括至少一个视频帧。
示例性的,可以选择目标视频作品中的一定数量的单个视频帧(可理解为一个或多个视频截图)用于表情对象生成;也可以选择目标视频作品中的一组或多组的连续多个视频帧(可理解为一个或多个视频片段)用于表情对象的生成。视频帧选择操作可以是对单个视频帧的选择操作,也可以是对多个视频帧的批量选择操作。
示例性的,对于视频片段的选取,本步骤可包括:响应于针对所述视频进度信息的起始帧选择操作和结束帧选择操作,确定起始视频帧和结束视频帧;根据所述起始视频帧和所述结束视频帧确定至少一个目标视频帧集合,其中,每个目标视频帧集合中包括一个起始视频帧、一个结束视频帧、以及零个或至少一个中间视频帧,所述中间视频帧在所述视频进度信息中位于对应的起始视频帧和对应的结束视频帧之间。这样设置的好处在于,可以便捷地进行视频片段的选取,以便根据视频片段快速生成对应的表情对象。可选的,目标视频帧集合中也可仅包括至少一个中间视频帧。
起始帧选择操作用于选择起始视频帧,结束帧选择操作用于选择结束视频帧,可以在视频进度信息上关联显示起始帧选择标识和结束帧选择标识,用户可以通过调整起始帧选择标识所指向的位置来确定起始视频帧,通过调整结束帧选择标识所指向的位置来确定结束视频帧。
示例性的,起始帧选择标识和结束帧选择标识可以成对显示,一对标识对应一个目标视频帧集合。对于一个目标视频帧集合,当起始视频帧和结束视频帧之间不存在中间视频帧时,目标视频帧集合包含两个视频帧,也即起始视频帧和结束视频帧。
示例性的,如图5所示,在视频帧序列502上关联显示视频帧选择框503,视频帧选择框503的左侧边界可以理解为起始帧选择标识,右侧边界可以理解为结束帧选择标识。用户可以通过拖拽视频帧选择框503左侧边界或右侧边界对所选视频帧范围进行调整。
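上述“由起始帧与结束帧确定目标视频帧集合(包含两端及中间视频帧)”的逻辑,可以用如下示意性的Python代码草图表示(函数名为假设):

```python
def select_frame_set(frames, start_index, end_index):
    """根据起始帧与结束帧确定一个目标视频帧集合,包含起始帧、结束帧及中间帧。"""
    if start_index > end_index:
        # 选择标识被拖拽交叉时,自动纠正顺序
        start_index, end_index = end_index, start_index
    return frames[start_index:end_index + 1]

frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
segment = select_frame_set(frames, 1, 3)       # 一对选择标识对应一个目标视频帧集合
single_pair = select_frame_set(frames, 4, 5)   # 无中间帧时,集合仅含起始帧和结束帧
```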
步骤403、响应于针对至少一个目标视频帧集合的表情对象生成指令,根据至少一个目标视频帧集合生成至少一个目标表情对象。
可选的,单个目标表情对象中包含一个或多个目标视频帧集合。
示例性的,每个目标视频帧集合均可以单独生成一个目标表情对象;两个或更多的目标视频帧集合也可以合并生成一个目标表情对象。
可选的,所述目标表情对象包括动态表情对象,所述动态表情对象由所述至少一个目标视频帧集合中的目标视频帧集合生成。
可选的,可以在预设页面中显示两种生成方式分别对应的预设生成控件,例如第三预设生成控件(“分别生成”按钮)以及第四预设生成控件(“合并生成”按钮),以满足用户不同的表情对象生成需求。
本公开实施例提供的媒体内容处理方法,可以使得用户在查看视频作品的过程中,对视频作品中的视频帧进行自主选择,并根据用户选择的一个或多个视频帧集合生成一个或多个表情对象,满足用户的个性化表情对象生成需求,丰富表情对象样式,提升用户体验,同时也可丰富视频作品中视频内容的用途,提升视频资源的利用率。
在一些实施例中,在所述根据所述至少一个目标媒体内容生成至少一个目标表情对象之前,还包括:根据所述至少一个目标媒体内容生成至少一个预览表情对象,并展示所述至少一个预览表情对象。这样设置的好处在于,可以在生成表情对象之前提供预览功能,让用户可以预先查看表情对象的效果,避免反复修改,提高表情对象生成效率。
可选的,预览表情对象为多个的情况下,可以展示一个或多个预览表情对象。
示例性的,预览表情对象可以在预设页面中展示,也可以在预设页面之外的目标展示区域内进行展示。例如,如图2和图5所示,可以在预设页面201的上方和预设页面501的上方设置目标展示区域,并在目标展示区域展示预览表情对象。
在一些实施例中,所述方法还包括:接收针对所述至少一个预览表情对象的编辑操作;所述根据所述至少一个目标媒体内容生成至少一个目标表情对象,包括:根据所述至少一个目标媒体内容和所述编辑操作的编辑结果,生成至少一个目标表情对象。这样设置的好处在于,可以允许用户在预览表情对象基础上进行编辑,使得生成的目标表情对象更加贴合用户的自身需求。其中,编辑操作例如可以包括添加文字、添加贴图或调整尺寸等。
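将编辑操作的编辑结果与目标媒体内容共同用于生成目标表情对象的过程,可以用如下示意性的Python代码草图表示;其中以字典表示预览表情对象、以操作列表表示编辑操作,均为假设的简化模型:

```python
def apply_edits(preview, edits):
    """将编辑操作(加文字、调整尺寸等)依次作用于预览表情对象,返回编辑结果。"""
    result = dict(preview)  # 不修改原预览对象
    for op, value in edits:
        if op == "add_text":
            result.setdefault("texts", []).append(value)
        elif op == "resize":
            result["size"] = value
    return result

preview = {"source": "img2", "size": (240, 240)}
final = apply_edits(preview, [("add_text", "晚安"), ("resize", (120, 120))])
```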
在一些实施例中,在当前应用程序的预设页面中展示目标媒体作品中的媒体内容之前,还包括:在当前应用程序的所述目标媒体作品的目标展示页面中,响应于针对所述预设页面的第一预设触发操作,显示所述预设页面。这样设置的好处在于,允许用户在查看目标媒体作品的过程中,便捷进入预设页面。
可选的,第一预设触发操作可以是针对预设页面的预设入口的触发操作,还可以是作用于预设页面的用于触发进入预设页面的操作,如双击操作等。
可选的,预设入口可以是目标展示页面中的入口控件,如“转换成表情”按钮等,通过触发预设入口,可以触发预设页面的显示。
可选的,预设页面的尺寸可以与目标展示页面的尺寸相同或不同。当预设页面的尺寸小于目标展示页面的尺寸时,预设页面可以叠加于目标展示页面上层,这种情况下,目标媒体作品可以在目标展示页面中继续展示,用户在设置表情的过程中,还可以继续查看目标媒体作品。
本公开实施例中,目标媒体作品可以携带作品标签,作品标签例如可以用于指示所属媒体作品的类型或与所属媒体作品相关的话题等。示例性的,目标媒体作品的发布者在发布目标媒体作品时,可以为目标媒体作品添加作品标签。预设标识可以包括与表情相关的作品标签,也可称为预设表情标签,例如可包括#表情包#、#表情#、#斗图#、#表情图#或#新表情#等标签。
在一些实施例中,还包括:在所述目标展示页面中展示所述目标媒体作品的过程中,响应于第二预设触发操作,在所述目标展示页面中显示预设控件展示区域;确定所述目标媒体作品是否携带有预设标识;若所述目标媒体作品携带有所述预设标识,则在所述预设控件展示区域中的第一预设展示位置,展示所述预设页面的预设入口。这样设置的好处在于,可以根据目标媒体作品是否携带有预设标识来灵活地确定预设入口的展示位置,若目标媒体作品携带有预设标识,则在预设展示位置进行展示。
示例性的,预设控件展示区域中还可包括预设交互控件,如一起看控件、点赞控件、分享控件和评论控件等,还可包括预设功能控件,如保存控件等。第一预设展示位置可以是预设控件展示区域中预先设置的固定位置,还可以是根据预设控件展示区域的预设交互控件和/或预设功能控件的展示位置确定的相对位置,第一预设展示位置与该当前展示位置之间的相对位置关系可以预先设定。
在一些实施例中,在所述确定所述目标媒体作品是否携带有预设标识之后,还包括:若所述目标媒体作品未携带有所述预设标识,则在所述预设控件展示区域中的第二预设展示位置,展示所述预设页面的预设入口,其中,所述第一预设展示位置的展示优先级高于所述第二预设展示位置的展示优先级。这样设置的好处在于,在携带有预设标识的情况下,说明目标媒体作品被用于生成表情的可能性更高一些,在展示优先级更高的位置对预设入口进行展示,可以更加便于用户触发预设入口,若目标媒体作品未携带有预设标识,则展示优先级可以低一些,以合理利用预设控件展示区域进行控件展示。
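上述“根据作品是否携带预设标识决定预设入口展示位置”的判定,可以用如下示意性的Python代码草图表示;其中控件以名称字符串表示,order_controls等名称为假设:

```python
def order_controls(has_preset_tag, other_controls=("一起看", "保存")):
    """根据作品是否携带预设表情标签,决定“添加表情”入口在控件展示区域中的位置。"""
    controls = list(other_controls)
    if has_preset_tag:
        controls.insert(0, "添加表情")  # 第一预设展示位置:展示优先级更高
    else:
        controls.append("添加表情")     # 第二预设展示位置:展示优先级较低
    return controls

with_tag = order_controls(True)      # 携带 #表情包# 等预设标识
without_tag = order_controls(False)  # 未携带预设标识
```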
图6为本公开实施例所提供的另一种媒体内容处理方法的流程示意图,本公开实施例以上述实施例中可选方案为基础进行说明。该方法包括如下步骤:
步骤601、在当前应用程序的目标展示页面中展示目标媒体作品的过程中,响应于第二预设触发操作,在目标展示页面中显示预设控件展示区域。
图7为本公开实施例所提供的一种界面交互示意图,示例性的,以目标媒体作品为目标图片作品为例,在目标展示页面701中展示目标图片作品,在展示目标图片作品中的第二张图片时,用户在目标展示页面701中输入长按操作(预设触发操作),在目标展示页面701中显示预设控件展示区域702。
步骤602、确定目标媒体作品是否携带有预设标识,若是,则执行步骤603;若目标媒体作品未携带有预设标识,执行步骤604。
可选的,步骤602也可在目标展示页面中显示预设控件展示区域之前执行,在显示预设控件展示区域之后,根据步骤602的判定结果直接决定预设入口的展示位置,可以提升预设入口的展示速度。
如图7所示,目标图片作品携带有#表情包#标签(预设标识),则可执行步骤603。
步骤603、在预设控件展示区域中的第一预设展示位置,展示预设页面的预设入口,执行步骤605。
示例性的,预设展示位置的展示优先级可以根据预设展示位置与预设控件展示区域中的预设交互控件和/或预设功能控件的展示位置的相对位置关系确定。
例如,如图7所示,预设入口为添加表情按钮703,预设功能控件为保存按钮704。若目标图片作品携带预设标识,则添加表情按钮703显示于保存按钮704之前;若目标图片作品未携带预设标识,则添加表情按钮703显示于保存按钮704之后,例如,添加表情按钮703和保存按钮704交换位置,或图中添加表情按钮703替换为其他控件,如一起看控件,将添加表情按钮703设置于保存按钮704之后,用户输入向左滑动操作后,显示添加表情按钮703。
步骤604、在预设控件展示区域中的第二预设展示位置,展示预设页面的预设入口,其中,第一预设展示位置的展示优先级高于第二预设展示位置的展示优先级。
步骤605、响应于针对预设页面的预设入口的触发操作,显示预设页面。
如图7所示,用户点击添加表情按钮703后,显示预设页面705。
步骤606、在预设页面中展示目标媒体作品中的媒体内容。
如图7所示,在预设页面705中展示目标图片作品中的多张图片,每张图片上显示有复选框。
步骤607、响应于针对媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容。
如图7所示,用户点击包含月亮图案的第二张图片的复选框,将第二张图片确定为目标媒体内容。
步骤608、根据至少一个目标媒体内容生成至少一个预览表情对象,并展示至少一个预览表情对象。
如图7所示,根据第二张图片生成预览表情对象,并在目标展示区域706进行展示。可选的,若用户继续选择其他图片,则可以根据当前新选择的图片生成预览表情对象,并进行展示。可选的,用户可以通过触发预设页面中的媒体内容来切换不同预览表情对象的展示。
步骤609、接收针对至少一个预览表情对象的编辑操作。
如图7所示,若用户想要对表情对象进行进一步编辑,可以点击编辑按钮进行编辑,如添加文字“晚安”,编辑结果可以实时在目标展示区域内进行展示。
步骤610、响应于针对至少一个目标媒体内容的表情对象生成指令,根据至少一个目标媒体内容和编辑操作的编辑结果,生成至少一个目标表情对象。
如图7所示,若用户对当前的预览效果满意,则可通过点击添加按钮来输入表情对象生成指令,以指示表情对象的生成。
示例性的,若用户对初始的预览表情对象满意,则无需编辑,可以直接通过点击添加按钮来输入表情对象生成指令。
在其他实施例中,在支持用户选择多张图片或多个视频帧的情况下,预设控件展示区域702可以包括将多张图片或多个视频帧合成为一个表情对象的“合并生成”按钮以及分别生成表情对象的“分别生成”按钮,以供用户根据需要选择不同的表情对象合成结果。
步骤611、将至少一个目标表情对象配置在当前应用程序的表情选择面板中。
示例性的,目标表情对象生成成功后,可将目标表情对象添加至当前应用程序的表情库中,以将目标表情对象配置在当前应用程序的表情选择面板中,并返回至目标展示页面中继续展示目标媒体作品。此外,还可在目标展示页面显示添加成功通知,如图7中所示的“表情添加成功”。
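上述“将目标表情对象添加至表情库、进而配置在表情选择面板”的步骤,可以用如下示意性的Python代码草图表示;其中以字典表示表情库、以"custom"表示自定义表情集合,均为假设:

```python
def add_to_panel(library, emoji, collection="custom"):
    """将生成的目标表情对象加入表情库的自定义集合,使其出现在表情选择面板。"""
    library.setdefault(collection, [])
    if emoji not in library[collection]:  # 避免重复添加同一表情对象
        library[collection].append(emoji)
    return "表情添加成功"

library = {}
msg = add_to_panel(library, {"id": "e1", "source": "img2"})
```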
本公开实施例提供的媒体内容处理方法,用户在浏览媒体作品的过程中,可以便捷地触发控件展示区域的显示,并根据媒体作品是否携带有预设表情标签来确定用于生成表情对象的预设页面的预设入口的展示位置,在用户想要根据媒体作品中的媒体内容生成表情对象时,可以触发预设入口并进入预设页面,在预设页面进行媒体内容的选择,并查看预览表情对象,用户还可以在预览表情对象基础上进行编辑,进而生成更加符合自身需求的个性化的表情对象,添加至表情库中,便于后续在表情库中查找到该表情对象并进行使用,有利于提升与交互对象之间的交互体验。
图8为本公开实施例所提供的一种媒体内容处理装置的结构示意图,如图8所示,所述装置包括:媒体内容展示模块801、目标内容确定模块802以及表情对象生成模块803。
媒体内容展示模块801,设置为在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,其中,所述媒体内容包括图片和/或视频;目标内容确定模块802,设置为在所述媒体内容中确定至少一个目标媒体内容;表情对象生成模块803,设置为响应于针对所述至少一个目标媒体内容的表情对象生成指令,根据所述至少一个目标媒体内容生成至少一个目标表情对象。
本公开实施例所提供的媒体内容处理装置,在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,其中,媒体内容包括图片和/或视频,在媒体内容中确定至少一个目标媒体内容,响应于针对目标媒体内容的表情对象生成指令,根据目标媒体内容生成目标表情对象,目标表情对象被配置在当前应用程序的表情选择面板。通过采用上述技术方案,可以使得用户在查看媒体作品的过程中,根据媒体作品中的媒体内容生成表情对象,并将所生成的表情对象配置在当前应用程序的表情选择面板中,满足用户的个性化表情对象生成需求,丰富表情对象样式,使得用户想要在表情选择面板中选择表情对象进行应用时,可以有更多的个性化的选择,提升用户体验,同时也可丰富媒体作品中媒体内容的用途,提升媒体内容资源的利用率。
可选的,所述目标内容确定模块是设置为:响应于针对所述媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容。
可选的,所述目标媒体作品包括目标图片作品,所述目标图片作品中包括至少一张图片;其中,所述媒体内容展示模块是设置为:在当前应用程序的预设页面中展示所述目标图片作品中的至少一张图片;其中,所述目标内容确定模块是设置为:响应于针对所述至少一张图片的选择操作,将被选中的至少一张图片确定为至少一张目标图片。
可选的,所述目标图片作品包括动图图片和/或静态图片。
可选的,表情对象生成模块803是设置为通过如下方式根据所述至少一个目标媒体内容生成至少一个目标表情对象:根据所述至少一张目标图片生成至少一个目标表情对象,其中,单个目标表情对象中包含所述至少一张目标图片中的一张或多张目标图片。
可选的,所述目标媒体作品包括目标视频作品,所述目标视频作品中包括多个视频帧;其中,所述媒体内容展示模块是设置为:在当前应用程序的预设页面中展示所述目标视频作品对应的视频进度信息;其中,所述目标内容确定模块是设置为:响应于针对所述视频进度信息的视频帧选择操作,确定至少一个目标视频帧集合,其中,每个目标视频帧集合中包括至少一个视频帧。
可选的,表情对象生成模块803是设置为通过如下方式根据所述至少一个目标媒体内容生成至少一个目标表情对象:根据所述至少一个目标视频帧集合生成至少一个目标表情对象,其中,单个目标表情对象中包含所述至少一个目标视频帧集合中的一个或多个目标视频帧集合。
可选的,所述至少一个目标表情对象包括动态表情对象,所述动态表情对象由以下至少一个生成:所述至少一张目标图片的动态图片、所述至少一张目标图片中的多张静态图片以及所述至少一个目标视频帧集合中的目标视频帧集合。
可选的,所述目标内容确定模块包括:视频帧确定单元,设置为响应于针对所述视频进度信息的起始帧选择操作和结束帧选择操作,确定起始视频帧和结束视频帧;视频帧集合确定单元,设置为根据所述起始视频帧和所述结束视频帧确定至少一个目标视频帧集合。
可选的,该装置还包括:预览表情对象生成模块,设置为在所述根据所述至少一个目标媒体内容生成至少一个目标表情对象之前,根据所述至少一个目标媒体内容生成至少一个预览表情对象,并展示所述至少一个预览表情对象。
可选的,该装置还包括:编辑操作接收模块,设置为接收针对所述至少一个预览表情对象的编辑操作;其中,表情对象生成模块803是设置为通过如下方式根据所述至少一个目标媒体内容生成至少一个目标表情对象:根据所述至少一个目标媒体内容和所述编辑操作的编辑结果,生成至少一个目标表情对象。
可选的,该装置还包括:预设页面显示模块,设置为在当前应用程序的预设页面中展示目标媒体作品中的媒体内容之前,在当前应用程序的所述目标媒体作品的目标展示页面中,响应于针对所述预设页面的第一预设触发操作,显示所述预设页面。
可选的,该装置还包括:展示区域显示模块,设置为在所述目标展示页面中展示所述目标媒体作品的过程中,响应于第二预设触发操作,在所述目标展示页面中显示预设控件展示区域;预设标识确定模块,设置为确定所述目标媒体作品是否携带有预设标识;第一预设入口展示模块,设置为若所述目标媒体作品携带有所述预设标识,则在所述预设控件展示区域中的第一预设展示位置,展示所述预设页面的预设入口。
可选的,该装置还包括:第二预设入口展示模块,设置为在所述确定所述目标媒体作品是否携带有预设标识之后,若所述目标媒体作品未携带有所述预设标识,则在所述预设控件展示区域中的第二预设展示位置,展示所述预设页面的预设入口,其中,所述第一预设展示位置的展示优先级高于所述第二预设展示位置的展示优先级。
本公开实施例所提供的媒体内容处理装置可执行本公开任意实施例所提供的媒体内容处理方法,具备执行方法相应的功能模块和效果。
上述装置所包括的各个单元和模块只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,各功能单元的具体名称也只是为了便于相互区分,并不用于限制本公开实施例的保护范围。
图9为本公开实施例所提供的一种电子设备的结构示意图。下面参考图9,其示出了适于用来实现本公开实施例的电子设备(例如图9中的终端设备或服务器)900的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、个人数字助理(Personal Digital Assistant,PDA)、平板电脑(Portable Android Device,PAD)、便携式多媒体播放器(Portable Media Player,PMP)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字电视(television,TV)、台式计算机等等的固定终端。图9示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图9所示,电子设备900可以包括处理装置(例如中央处理器、图形处理器等)901,其可以根据存储在只读存储器(Read-Only Memory,ROM)902中的程序或者从存储装置908加载到随机访问存储器(Random Access Memory,RAM)903中的程序而执行各种适当的动作和处理。在RAM 903中,还存储有电子设备900操作所需的各种程序和数据。处理装置901、ROM 902以及RAM 903通过总线904彼此相连。输入/输出(Input/Output,I/O)接口905也连接至总线904。
通常,以下装置可以连接至I/O接口905:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置906;包括例如液晶显示器(Liquid Crystal Display,LCD)、扬声器、振动器等的输出装置907;包括例如磁带、硬盘等的存储装置908;以及通信装置909。通信装置909可以允许电子设备900与其他设备进行无线或有线通信以交换数据。虽然图9示出了具有各种装置的电子设备900,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置909从网络上被下载和安装,或者从存储装置908被安装,或者从ROM 902被安装。在该计算机程序被处理装置901执行时,执行本公开实施例的方法中限定的上述功能。
本公开实施例提供的电子设备与上述实施例提供的媒体内容处理方法属于同一发明构思,未在本实施例中详尽描述的技术细节可参见上述实施例,并且本实施例与上述实施例具有相同的效果。
本公开实施例提供了一种计算机存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述实施例所提供的媒体内容处理方法。
本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、RAM、ROM、可擦式可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM)、闪存、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、射频(Radio Frequency,RF)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如超文本传输协议(HyperText Transfer Protocol,HTTP)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(Local Area Network,LAN),广域网(Wide Area Network,WAN),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,其中,所述媒体内容包括图片和/或视频;在所述媒体内容中确定至少一个目标媒体内容;响应于针对所述至少一个目标媒体内容的表情对象生成指令,根据所述至少一个目标媒体内容生成至少一个目标表情对象,其中,所述至少一个目标表情对象被配置在所述当前应用程序的表情选择面板。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括LAN或WAN——连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,模块的名称在某种情况下并不构成对该模块本身的限定,例如,目标内容确定模块还可以被描述为“在所述媒体内容中确定至少一个目标媒体内容的模块”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(Field Programmable Gate Array,FPGA)、专用集成电路(Application Specific Integrated Circuit,ASIC)、专用标准产品(Application Specific Standard Parts,ASSP)、片上系统(System on Chip,SOC)、复杂可编程逻辑设备(Complex Programmable Logic Device,CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、RAM、ROM、EPROM或快闪存储器、光纤、便捷式CD-ROM、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,提供了一种媒体内容处理方法,包括:在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,其中,所述媒体内容包括图片和/或视频;在所述媒体内容中确定至少一个目标媒体内容;响应于针对所述至少一个目标媒体内容的表情对象生成指令,根据所述至少一个目标媒体内容生成至少一个目标表情对象,其中,所述至少一个目标表情对象被配置在所述当前应用程序的表情选择面板。
根据本公开的一个或多个实施例,所述在所述媒体内容中确定至少一个目标媒体内容,包括:响应于针对所述媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容。
根据本公开的一个或多个实施例,所述目标媒体作品包括目标图片作品,所述目标图片作品中包括至少一张图片;其中,所述在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,包括:在当前应用程序的预设页面中展示所述目标图片作品中的至少一张图片;其中,所述响应于针对所述媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容,包括:响应于针对所述至少一张图片的选择操作,将被选中的至少一张图片确定为至少一张目标图片。
根据本公开的一个或多个实施例,所述目标图片作品包括动图图片和/或静态图片。
根据本公开的一个或多个实施例,所述根据所述目标媒体内容生成至少一个目标表情对象,包括:根据所述至少一张目标图片生成至少一个目标表情对象,其中,单个目标表情对象中包含所述至少一张目标图片中的一张或多张目标图片。
根据本公开的一个或多个实施例,所述目标媒体作品包括目标视频作品,所述目标视频作品中包括多个视频帧;其中,所述在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,包括:在当前应用程序的预设页面中展示所述目标视频作品对应的视频进度信息;其中,所述响应于针对所述媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容,包括:响应于针对所述视频进度信息的视频帧选择操作,确定至少一个目标视频帧集合,其中,每个目标视频帧集合中包括至少一个视频帧。
根据本公开的一个或多个实施例,所述根据所述至少一个目标媒体内容生成至少一个目标表情对象,包括:根据所述至少一个目标视频帧集合生成至少一个目标表情对象,其中,单个目标表情对象中包含至少一个目标视频帧集合中的一个或多个目标视频帧集合。
根据本公开的一个或多个实施例,所述至少一个目标表情对象包括动态表情对象,所述动态表情对象由以下至少一个生成:所述至少一张目标图片中的动态图片、所述至少一张目标图片中的多张静态图片、以及所述至少一个目标视频帧集合中的目标视频帧集合。
根据本公开的一个或多个实施例,所述响应于针对所述视频进度信息的视频帧选择操作,确定至少一个目标视频帧集合,包括:响应于针对所述视频进度信息的起始帧选择操作和结束帧选择操作,确定起始视频帧和结束视频帧;根据所述起始视频帧和所述结束视频帧确定至少一个目标视频帧集合。
根据本公开的一个或多个实施例,在所述根据所述至少一个目标媒体内容生成至少一个目标表情对象之前,还包括:根据所述至少一个目标媒体内容生成至少一个预览表情对象,并展示所述至少一个预览表情对象。
根据本公开的一个或多个实施例,还包括:接收针对所述至少一个预览表情对象的编辑操作;所述根据所述至少一个目标媒体内容生成至少一个目标表情对象,包括:根据所述至少一个目标媒体内容和所述编辑操作的编辑结果,生成至少一个目标表情对象。
根据本公开的一个或多个实施例,在当前应用程序的预设页面中展示目标媒体作品中的媒体内容之前,还包括:在当前应用程序的所述目标媒体作品的目标展示页面中,响应于针对所述预设页面的第一预设触发操作,显示所述预设页面。
根据本公开的一个或多个实施例,还包括:在所述目标展示页面中展示所述目标媒体作品的过程中,响应于第二预设触发操作,在所述目标展示页面中显示预设控件展示区域;确定所述目标媒体作品是否携带有预设标识;若所述目标媒体作品携带有所述预设标识,则在所述预设控件展示区域中的第一预设展示位置,展示所述预设页面的预设入口。
根据本公开的一个或多个实施例,在所述确定所述目标媒体作品是否携带有预设标识之后,还包括:若所述目标媒体作品未携带有所述预设标识,则在所述预设控件展示区域中的第二预设展示位置,展示所述预设页面的预设入口, 其中,所述第一预设展示位置的展示优先级高于所述第二预设展示位置的展示优先级。
根据本公开的一个或多个实施例,提供了一种媒体内容处理装置,包括:媒体内容展示模块,设置为在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,其中,所述媒体内容包括图片和/或视频;目标内容确定模块,设置为在所述媒体内容中确定至少一个目标媒体内容;表情对象生成模块,设置为响应于针对所述至少一个目标媒体内容的表情对象生成指令,根据所述至少一个目标媒体内容生成至少一个目标表情对象,其中,所述至少一个目标表情对象被配置在所述当前应用程序的表情选择面板。
根据本公开的一个或多个实施例,还提供了一种电子设备,所述电子设备包括:一个或多个处理器;存储装置,设置为存储一个或多个程序,当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现本公开实施例提供的媒体内容处理方法。
根据本公开的一个或多个实施例,还提供了一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行本公开实施例提供的媒体内容处理方法。
虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (18)

  1. 一种媒体内容处理方法,包括:
    在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,其中,所述媒体内容包括图片和/或视频;
    在所述媒体内容中确定至少一个目标媒体内容;
    响应于针对所述至少一个目标媒体内容的表情对象生成指令,根据所述至少一个目标媒体内容生成至少一个目标表情对象,其中,所述至少一个目标表情对象被配置在所述当前应用程序的表情选择面板。
  2. 根据权利要求1所述的方法,其中,所述在所述媒体内容中确定至少一个目标媒体内容,包括:
    响应于针对所述媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容。
  3. 根据权利要求2所述的方法,其中,所述目标媒体作品包括目标图片作品,所述目标图片作品中包括至少一个图片;
    其中,所述在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,包括:
    在当前应用程序的预设页面中展示所述目标图片作品中的至少一张图片;
    其中,所述响应于针对所述媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容,包括:
    响应于针对所述至少一张图片的选择操作,将被选中的至少一张图片确定为至少一张目标图片。
  4. 根据权利要求1所述的方法,其中,所述目标图片作品包括动图图片和/或静态图片。
  5. 根据权利要求3所述的方法,其中,所述根据所述至少一个目标媒体内容生成至少一个目标表情对象,包括:
    根据所述至少一张目标图片生成至少一个目标表情对象,其中,单个目标表情对象中包含所述至少一张目标图片中的一张或多张目标图片。
  6. 根据权利要求2所述的方法,其中,所述目标媒体作品包括目标视频作品,所述目标视频作品中包括多个视频帧;
    其中,所述在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,包括:
    在当前应用程序的预设页面中展示所述目标视频作品对应的视频进度信息;
    其中,所述响应于针对所述媒体内容的选择操作,将被选中的至少一个媒体内容确定为至少一个目标媒体内容,包括:
    响应于针对所述视频进度信息的视频帧选择操作,确定至少一个目标视频帧集合,其中,每个目标视频帧集合中包括至少一个视频帧。
  7. 根据权利要求6所述的方法,其中,所述根据所述至少一个目标媒体内容生成至少一个目标表情对象,包括:
    根据所述至少一个目标视频帧集合生成至少一个目标表情对象,其中,单个目标表情对象中包含所述至少一个目标视频帧集合中的一个或多个目标视频帧集合。
  8. 根据权利要求4或6所述的方法,其中,所述至少一个目标表情对象包括动态表情对象,所述动态表情对象由以下至少一个生成:所述至少一张目标图片中的动态图片、所述至少一张目标图片中的多张静态图片以及所述至少一个目标视频帧集合中的目标视频帧集合。
  9. 根据权利要求6所述的方法,其中,所述响应于针对所述视频进度信息的视频帧选择操作,确定至少一个目标视频帧集合,包括:
    响应于针对所述视频进度信息的起始帧选择操作和结束帧选择操作,确定起始视频帧和结束视频帧;
    根据所述起始视频帧和所述结束视频帧确定至少一个目标视频帧集合。
  10. 根据权利要求1所述的方法,在所述根据所述至少一个目标媒体内容生成至少一个目标表情对象之前,还包括:
    根据所述至少一个目标媒体内容生成至少一个预览表情对象,并展示所述至少一个预览表情对象。
  11. 根据权利要求10所述的方法,还包括:
    接收针对所述至少一个预览表情对象的编辑操作;
    所述根据所述至少一个目标媒体内容生成至少一个目标表情对象,包括:
    根据所述至少一个目标媒体内容和所述编辑操作的编辑结果,生成至少一个目标表情对象。
  12. 根据权利要求1所述的方法,在当前应用程序的预设页面中展示目标媒体作品中的媒体内容之前,还包括:
    在当前应用程序的所述目标媒体作品的目标展示页面中,响应于针对所述预设页面的第一预设触发操作,显示所述预设页面。
  13. 根据权利要求12所述的方法,还包括:
    在所述目标展示页面中展示所述目标媒体作品的过程中,响应于第二预设触发操作,在所述目标展示页面中显示预设控件展示区域;
    确定所述目标媒体作品是否携带有预设标识;
    响应于所述目标媒体作品携带有所述预设标识的确定结果,在所述预设控件展示区域中的第一预设展示位置,展示所述预设页面的预设入口。
  14. 根据权利要求13所述的方法,在所述确定所述目标媒体作品是否携带有预设标识之后,还包括:
    响应于所述目标媒体作品未携带有所述预设标识的确定结果,在所述预设控件展示区域中的第二预设展示位置,展示所述预设页面的预设入口,其中,所述第一预设展示位置的展示优先级高于所述第二预设展示位置的展示优先级。
  15. 一种媒体内容处理装置,包括:
    媒体内容展示模块,设置为在当前应用程序的预设页面中展示目标媒体作品中的媒体内容,其中,所述媒体内容包括图片和/或视频;
    目标内容确定模块,设置为在所述媒体内容中确定至少一个目标媒体内容;
    表情对象生成模块,设置为响应于针对所述至少一个目标媒体内容的表情对象生成指令,根据所述至少一个目标媒体内容生成至少一个目标表情对象,其中,所述至少一个目标表情对象被配置在所述当前应用程序的表情选择面板。
  16. 根据权利要求15所述的装置,包括:用于执行如权利要求2-14任一项所述的方法的模块。
  17. 一种电子设备,包括:
    至少一个处理器;
    存储装置,设置为存储至少一个程序,
    当所述至少一个程序被所述至少一个处理器执行,使得所述至少一个处理器实现如权利要求1-14中任一所述的媒体内容处理方法。
  18. 一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行如权利要求1-14中任一所述的媒体内容处理方法。
PCT/CN2023/112878 2022-08-15 2023-08-14 媒体内容处理方法、装置、设备及存储介质 WO2024037491A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210977422.9 2022-08-15
CN202210977422.9A CN115269886A (zh) 2022-08-15 2022-08-15 媒体内容处理方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2024037491A1 true WO2024037491A1 (zh) 2024-02-22

Family

ID=83751424

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/112878 WO2024037491A1 (zh) 2022-08-15 2023-08-14 媒体内容处理方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN115269886A (zh)
WO (1) WO2024037491A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115269886A (zh) * 2022-08-15 2022-11-01 北京字跳网络技术有限公司 媒体内容处理方法、装置、设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107370887A (zh) * 2017-08-30 2017-11-21 维沃移动通信有限公司 一种表情生成方法及移动终端
CN109120866A (zh) * 2018-09-27 2019-01-01 腾讯科技(深圳)有限公司 动态表情生成方法、装置、计算机可读存储介质和计算机设备
CN112131422A (zh) * 2020-10-23 2020-12-25 腾讯科技(深圳)有限公司 表情图片生成方法、装置、设备及介质
WO2021169134A1 (zh) * 2020-02-28 2021-09-02 北京百度网讯科技有限公司 表情包生成方法、装置、设备和介质
CN113538628A (zh) * 2021-06-30 2021-10-22 广州酷狗计算机科技有限公司 表情包生成方法、装置、电子设备及计算机可读存储介质
CN113568551A (zh) * 2021-07-26 2021-10-29 北京达佳互联信息技术有限公司 图片保存方法及装置
CN114693827A (zh) * 2022-04-07 2022-07-01 深圳云之家网络有限公司 表情生成方法、装置、计算机设备和存储介质
CN115269886A (zh) * 2022-08-15 2022-11-01 北京字跳网络技术有限公司 媒体内容处理方法、装置、设备及存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2736231A4 (en) * 2012-06-30 2015-04-15 Huawei Tech Co Ltd DYNAMIC EXPRESSION DISPLAY METHOD AND MOBILE TERMINAL
US9516259B2 (en) * 2013-10-22 2016-12-06 Google Inc. Capturing media content in accordance with a viewer expression
CN111050187B (zh) * 2019-12-09 2020-12-15 腾讯科技(深圳)有限公司 一种虚拟视频处理的方法、装置及存储介质
CN111586466B (zh) * 2020-05-08 2021-05-28 腾讯科技(深圳)有限公司 一种视频数据处理方法、装置及存储介质
CN112817670A (zh) * 2020-08-05 2021-05-18 腾讯科技(深圳)有限公司 基于会话的信息展示方法、装置、设备及存储介质
CN111966804A (zh) * 2020-08-11 2020-11-20 深圳传音控股股份有限公司 一种表情处理方法、终端及存储介质
CN112907703A (zh) * 2021-01-18 2021-06-04 深圳全民吃瓜科技有限公司 一种表情包生成方法及系统
CN113934349B (zh) * 2021-10-28 2023-11-07 北京字跳网络技术有限公司 交互方法、装置、电子设备和存储介质
CN114880062B (zh) * 2022-05-30 2023-11-14 网易(杭州)网络有限公司 聊天表情展示方法、设备、电子设备及存储介质

Also Published As

Publication number Publication date
CN115269886A (zh) 2022-11-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23854392

Country of ref document: EP

Kind code of ref document: A1