WO2022052838A1 - Video file processing method and apparatus, electronic device, and computer storage medium - Google Patents

Video file processing method and apparatus, electronic device, and computer storage medium

Info

Publication number
WO2022052838A1
WO2022052838A1 (PCT/CN2021/115733)
Authority
WO
WIPO (PCT)
Prior art keywords
identification information
video file
interactive
interaction
preset
Prior art date
Application number
PCT/CN2021/115733
Other languages
English (en)
French (fr)
Inventor
王星懿
范嘉佳
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Priority to BR112023001285A (published as BR112023001285A2)
Priority to KR1020227036625A (published as KR20220156910A)
Priority to JP2022564729A (published as JP2023522759A)
Priority to EP21865893.8A (published as EP4093042A4)
Publication of WO2022052838A1
Priority to US17/887,138 (published as US11889143B2)
Priority to US18/541,783 (published as US20240114197A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4753 End-user interface for inputting end-user data for user identification, e.g. by entering a PIN or password
    • H04N21/4756 End-user interface for inputting end-user data for rating content, e.g. scoring a recommended movie
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N21/485 End-user interface for client configuration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval using metadata automatically derived from the content
    • G06F16/7837 Retrieval using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/11 Indexing; Addressing; Timing or synchronising by using information not detectable on the record carrier
    • G11B27/34 Indicating arrangements
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present disclosure relates to the technical field of video processing, and in particular, the present disclosure relates to a video file processing method, apparatus, electronic device, and computer-readable storage medium.
  • users can watch videos in video applications, and such applications usually provide a comment area or a message area.
  • users can also use the @ function to interact with other users.
  • for example, when user A @s user B in the comment area, the system prompts user B. User B can jump to the comment area to view the message according to the prompt, or, without opening the comment area, the system can push user A's message to user B separately.
  • the present disclosure provides a video file processing method, device, electronic device, and computer-readable storage medium, which can solve the problem of poor interaction between users when watching videos.
  • the technical solution is as follows:
  • a method for processing a video file, comprising: in a preset first editing interface for an original video file, when a trigger instruction for a preset first interactive function is received, displaying a preset second editing interface, the second editing interface including a preset interaction label;
  • a device for processing video files comprising:
  • the first processing module is configured to, in the preset first editing interface for the original video file, display the preset second editing interface when receiving a trigger instruction for the preset first interactive function, the second editing interface including a preset interactive label;
  • a second processing module is configured to receive the first identification information of the interactive object determined by the editor in the interactive label, and obtain the interactive label containing the first identification information;
  • the third processing module is configured to generate a target video file including the interaction tag when receiving the editing completion instruction initiated by the editor, and publish the target video file.
  • in a third aspect, an electronic device is provided, including a processor, a memory, and a bus; the bus connects the processor and the memory; the memory stores operation instructions; and the processor invokes the operation instructions, the operation instructions causing the processor to perform operations corresponding to the video file processing method shown in the first aspect of the present disclosure.
  • a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the method for processing a video file shown in the first aspect of the present disclosure is implemented.
  • in the present disclosure, when a trigger instruction for the preset first interactive function is received, a preset second editing interface is displayed, and the second editing interface includes the preset interaction label; the first identification information of the interactive object determined by the editor is then received in the interactive label, and the interactive label containing the first identification information is obtained; when the editing completion instruction initiated by the editor is received, a target video file containing the interactive tag is generated, and the target video file is published.
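The summarized flow above (display the second editing interface, receive the interactive object's identification information, generate and publish the tagged video file) can be sketched as follows. All class, field, and function names are illustrative assumptions for this sketch, not the patent's actual implementation.

```python
from dataclasses import dataclass


# Hypothetical data model for the flow described above; the names
# InteractiveLabel / TargetVideoFile are assumptions, not the patent's API.
@dataclass
class InteractiveLabel:
    first_identification_info: str  # ID of the interactive object (e.g. a friend's user ID)


@dataclass
class TargetVideoFile:
    video_data: bytes
    label: InteractiveLabel


def generate_target_video_file(original_video: bytes,
                               interactive_object_id: str) -> TargetVideoFile:
    """Steps S101-S103 in miniature: build the interactive label from the
    editor's chosen object, then produce the target video file carrying it."""
    label = InteractiveLabel(first_identification_info=interactive_object_id)
    return TargetVideoFile(video_data=original_video, label=label)
```

In this sketch the tag travels with the video file, which is what lets any later player render it over the video image.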
  • FIG. 1 is a schematic flowchart of a method for processing a video file according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a method for processing a video file according to another embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a first editing interface in the present disclosure
  • FIGS. 4A to 4C are schematic diagrams of an interface for editing an interactive label in a second editing interface according to the present disclosure
  • FIGS. 5A to 5C are second schematic diagrams of an interface for editing an interactive label in the second editing interface in the present disclosure;
  • FIG. 6 is a schematic diagram of a playback interface when an interactive object plays a target video file in the present disclosure
  • FIG. 7 is a schematic diagram of a playback interface after an interactive object clicks on the second prompt information in the present disclosure
  • FIG. 8 is a schematic diagram of a playback interface when an editor plays a target video file in the present disclosure
  • FIG. 9 is a schematic diagram of a playback interface when other users play a target video file in the present disclosure.
  • FIG. 10 is a schematic diagram of a playback interface when any user in the present disclosure plays an updated target video file
  • FIG. 11 is a schematic structural diagram of an apparatus for processing a video file according to another embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of an electronic device for processing video files according to another embodiment of the present disclosure.
  • the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • the video file processing method, device, electronic device and computer-readable storage medium provided by the present disclosure are intended to solve the above technical problems in the prior art.
  • a method for processing a video file includes:
  • Step S101: in the preset first editing interface for the original video file, when receiving a trigger instruction for the preset first interactive function, display a preset second editing interface; the second editing interface includes a preset interactive label;
  • an application client for playing video files and editing video files is installed in the terminal; the application client is preset with at least one playing interface for playing video files and at least one editing interface for editing video files. The application client for playing video files and the application client for editing video files may be the same application client or different application clients, which may be set according to actual needs in practical applications; this is not limited in this embodiment of the present disclosure.
  • the original video file may be a video file that the editor has finished shooting.
  • the editor can edit the original video file in each editing interface of the application client to obtain an edited video file, and then upload the edited video file to the server to share with others; or, without editing, upload the original video file directly to the server to share with others.
  • the editor opens the preset first editing interface, and then imports the original video file and edits the original video file.
  • the first interactive function may be an "@" function; for example, an editor @s one of his or her friends.
  • when the application client receives the trigger instruction for the first interactive function, it can display a preset second editing interface, and the second editing interface includes a preset interactive label, in which the editor can edit the identification information of the interactive object.
  • Step S102: receive the first identification information of the interactive object determined by the editor in the interactive label, and obtain the interactive label containing the first identification information;
  • in the second editing interface, the editor can determine the first identification information of the interactive object, so as to obtain the interactive label including the first identification information. For example, when the interactive function is @friend, the interactive object corresponding to the first interactive function is friend B whom editor A @s, and the first identification information is the ID (identity number) of B; the interactive tag containing B's ID is thus obtained, and the tag can be displayed in the video image when the video file is played.
  • Step S103: when an editing completion instruction initiated by the editor is received, generate a target video file containing the interactive tag, and publish the target video file.
  • a virtual button for generating the target video file can be preset in the editing interface; when the editor triggers it, the editing completion instruction is initiated, and the application client can generate the target video file containing the interactive tag based on the editing completion instruction and publish the target video file.
  • in the embodiment of the present disclosure, when a trigger instruction for the preset first interactive function is received, a preset second editing interface is displayed, and the second editing interface includes a preset interactive label; the first identification information of the interactive object corresponding to the first interactive function is then received in the interactive label, and the interactive label containing the first identification information is obtained; when the editing completion instruction initiated by the editor is received, a target video file including the interactive tag is generated, and the target video file is published.
  • a method for processing a video file includes:
  • Step S201: in the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, the preset second editing interface is displayed; the second editing interface includes the preset interactive label;
  • an application client for playing video files and editing video files is installed in the terminal; the application client is preset with at least one playing interface for playing video files and at least one editing interface for editing video files. The terminal may have the following characteristics:
  • on the hardware system, the device has a central processing unit, a memory, an input component, and an output component; that is, the device is often a microcomputer device with communication functions. In addition, it can have a variety of input methods, such as a keyboard, mouse, touch screen, microphone, and camera, and the input can be adjusted as needed. At the same time, the device often has multiple output methods, such as a receiver and a display screen, which can also be adjusted as needed;
  • in the software system, the device must have an operating system, such as Windows Mobile, Symbian, Palm, Android, or iOS. These operating systems are becoming more and more open, and personalized applications developed on these open operating system platforms emerge in an endless stream, such as address books, calendars, notepads, calculators, and various games, meeting the customized needs of users;
  • the device has flexible access modes and high-bandwidth communication performance, and can automatically adjust the selected communication mode according to the selected service and the environment, so as to facilitate use;
  • the device can support GSM (Global System for Mobile Communications), WCDMA (Wideband Code Division Multiple Access), CDMA2000 (Code Division Multiple Access 2000), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), Wi-Fi (Wireless Fidelity), and WiMAX (Worldwide Interoperability for Microwave Access), etc., so as to adapt to a variety of standard networks, supporting not only voice services but also a variety of wireless data services;
  • the device pays more attention to humanization, personalization, and multi-function. The device has changed from an "equipment-centered" model to a "people-centered" model, integrating embedded computing, control technology, artificial intelligence technology, and biometric authentication technology, which fully reflects the people-oriented purpose.
  • the device can adjust the settings according to individual needs, making it more personalized.
  • the device itself integrates a lot of software and hardware, and its functions are becoming more and more powerful.
  • the application client for playing video files and the application client for editing video files may be the same application client or different application clients, which may be set according to actual needs in practical applications; this is not limited in this embodiment of the present disclosure.
  • the original video file may be a video file that the editor has finished shooting.
  • the editor can edit the original video file in each editing interface of the application client to obtain an edited video file, and then upload the edited video file to the server to share with others; or, without editing, upload the original video file directly to the server to share with others.
  • the editor opens the preset first editing interface, and then imports the original video file and edits the original video file.
  • the first interactive function may be an "@" function; for example, an editor @s one of his or her friends.
  • when a trigger instruction for the first interactive function is initiated, the application client can display the preset second editing interface after receiving the trigger instruction.
  • the trigger instruction is generated in either of the following manners: the face recognition of the original video file is successful; or, the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.
  • in practical applications, the application client can perform face recognition on the original video file, and if the face recognition is successful, a trigger instruction can be generated; or, a virtual button corresponding to the first interactive function is preset in the first editing interface, and when the editor clicks the virtual button, the application client can generate the trigger instruction.
  • the application client can perform face recognition on the original video file by first playing the original video file and then performing face recognition on the played video images; or, the application client can play the original video file in the background and perform face recognition there.
  • other methods for performing face recognition on video files are also applicable to the embodiments of the present disclosure, which are not limited in the embodiments of the present disclosure.
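The two trigger paths described above (successful face recognition on the original video file, or a click on the preset virtual button) might be combined as in the following sketch; the function and parameter names are assumptions for illustration only.

```python
def trigger_instruction_generated(face_recognition_succeeded: bool,
                                  virtual_button_clicked: bool) -> bool:
    """Either path yields the trigger instruction for the first interactive
    function, which in turn causes the second editing interface to be shown."""
    return face_recognition_succeeded or virtual_button_clicked
```

Either condition alone suffices, which matches the "or" relationship between the two manners of generating the trigger instruction.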
  • for example, the editor edits the original video in the first editing interface as shown in FIG. 3; when the application client recognizes that there is a portrait in the current video image, the first prompt information 301 of the first interactive function can be displayed in the first editing interface, where the first interactive function may correspond to the virtual button 302 in the first editing interface.
  • other virtual buttons may also be preset in the first editing interface, which may be set according to actual requirements in practical applications, which are not limited in the embodiments of the present disclosure.
  • Step S202: receive the first identification information of the interactive object determined by the editor in the interactive label, and obtain the interactive label containing the first identification information;
  • in the second editing interface, the editor can determine the first identification information of the interactive object, so as to obtain the interactive label including the first identification information. For example, when the interactive function is @friend, the interactive object corresponding to the first interactive function is friend B whom editor A @s, and the first identification information is the ID (identity number) of B; the interactive tag containing B's ID is thus obtained, and the tag can be displayed in the video image when the video file is played.
  • the second editing interface includes a preset identification information list, and the identification information list includes identification information of at least one interactive object;
  • receiving the first identification information of the interactive object determined by the editor in the interactive label, and obtaining the interactive label containing the first identification information, includes: when a selection instruction initiated by the editor for any identification information in the list is received, generating an interactive label including that identification information.
  • the second editing interface may include a preset interaction label and a preset identification information list, where the identification information list includes identification information of at least one interactive object.
  • the application client may display the preset interactive label and the preset identification information list in the second editing interface.
  • when the editor clicks any identification information in the identification information list, a selection instruction for that identification information is initiated; the application client enters the identification information corresponding to the selection instruction into the preset interactive label, and when the editor determines to generate an interactive label, an interactive label containing that identification information is generated.
  • for example, in the second editing interface shown in FIG. 4A, a preset interaction label 401 and an identification information list 402 are displayed, where the interaction label is preset with the "@" of the interaction function. When the editor selects "Little Star" in the list, the application client inputs "Little Star" into the interaction label 401, as shown in FIG. 4B.
  • when the editor confirms, the generation instruction for generating the interactive label is initiated; after receiving the generation instruction, the application client generates the interactive label containing "Little Star", as shown in FIG. 4C.
  • the identification information list in the second editing interface may be the editor's friend list or the editor's recently contacted friends, or it may be another type of identification information list, which may be set according to actual requirements; this is not limited in this embodiment of the present disclosure.
  • the editor can also change the style of the interactive label. For example, for the interactive label shown in FIG. 4C, when the editor clicks on the interactive label, the style of the interactive label can be changed.
  • the styles of the interactive labels may also be changed in other ways, which are not limited in this embodiment of the present disclosure.
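The list-selection flow of FIGS. 4A to 4C can be sketched as below; the list contents and the helper's name are hypothetical, chosen only to mirror the description.

```python
def build_label_from_list(identification_list: list, selected_index: int) -> str:
    """Enter the selected identification information into the preset '@'
    interaction label (FIG. 4B), yielding the label text that is embedded
    when the editor confirms generation (FIG. 4C)."""
    selected = identification_list[selected_index]
    return "@" + selected
```

For instance, selecting "Little Star" from a friend list produces the label text "@Little Star".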
  • the interactive label includes a preset first text box
  • receiving the first identification information of the interactive object determined by the editor in the interactive label, and obtaining the interactive label containing the first identification information, includes:
  • the second editing interface may also include a preset first text box.
  • the application client may display the preset first text box in the second editing interface.
  • the editor can directly input the instruction "@" of the interactive function and the identification information of the interactive object in the first text box, and then confirm generation to obtain an interactive label including that identification information.
  • for example, in the second editing interface shown in FIG. 5A, a preset first text box 501 is displayed; the editor can then input "@Little Star" in the first text box, as shown in FIG. 5B. When the editor confirms, the generation instruction for generating the interactive label is initiated; after receiving the generation instruction, the application client generates the interactive label containing "Little Star", as shown in FIG. 4C.
  • alternatively, a preset list of identification information is displayed, as shown in FIG. 5C; in this way the editor can directly select the interactive object without inputting the identification information of the interactive object, which provides convenience for the editor.
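The first-text-box path (FIGS. 5A to 5C) amounts to parsing the editor's typed input: a lone "@" opens the identification information list, while "@" followed by text names the interactive object directly. A hedged sketch, with return conventions invented for this illustration:

```python
def parse_first_text_box(text: str):
    """Return ('show_list', None) when only '@' has been typed, or
    ('label', identification) when identification text follows the '@'.
    Non-'@' input is ignored by the interactive function in this sketch."""
    if not text.startswith("@"):
        return ("ignore", None)
    body = text[1:]
    if body == "":
        return ("show_list", None)   # display the preset identification list (FIG. 5C)
    return ("label", body)           # e.g. '@Little Star' -> label for 'Little Star'
```

A real client would also handle "@" mid-sentence and partial matches against the list; those refinements are outside what the description specifies.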
  • it should be noted that the interaction object and the object corresponding to face recognition may be the same or different; for example, if the object for which face recognition succeeds in the original video file is A, the interaction object the editor @s can be A or B. Moreover, the interaction tag may include identification information of one interaction object or of multiple interaction objects; for example, the editor @s three interaction objects A, B, and C at the same time.
  • Step S203: when receiving the editing completion instruction initiated by the editor, generate a target video file containing the interactive tag, and publish the target video file;
  • a virtual button for generating the target video file can be preset in the editing interface.
  • based on the editing completion instruction, the application client can generate the target video file containing the interactive label and publish the target video file. For example, when the editor clicks "OK" in the lower right corner as shown in FIG. 4C, the editing completion instruction is triggered, and the application client can generate the target video file containing the interactive tag based on the editing completion instruction.
  • in practical applications, the target video file can be uploaded to the preset server for publishing, so that any user, including the editor of the target video file, can initiate a playback request for the target video file. The preset server delivers the target video file after receiving the playback request, thereby realizing sharing of the target video file.
  • Step S204: when receiving the playback instruction for the target video file initiated by the player, obtain the target video file and the second identification information of the player;
  • specifically, the application client can generate a playback request based on the playback instruction and send the playback request to the preset server to obtain the target video file, while also obtaining the second identification information of the player.
  • that is, in addition to obtaining the target video file from the preset server, the application client obtains the second identification information of the player.
  • Step S205: if the second identification information is the same as the first identification information, then when the target video file is played, the first identification information and the preset second prompt information of the second interactive function are displayed in the interactive label;
  • if the acquired second identification information is the same as the above-mentioned first identification information, it means that the player is the above-mentioned interaction object. In this case, the target video file is played in the playback interface and an interactive label is displayed at the same time; the interactive label includes the first identification information of the interaction object and the second prompt information of the preset second interactive function. The second interactive function can be a "comment" function, and the second prompt information can be information that prompts the interaction object to comment.
  • for example, the target video file can be played and an interactive label displayed in the playback interface, where the interactive label includes the first identification information "@Little Star" and the second prompt information "click here to comment".
  • Step S206: when receiving the click instruction for the second prompt information initiated by the player, display the preset second text box;
  • when the player clicks the second prompt information, a click instruction is initiated, and after receiving the click instruction the application client can display a preset second text box; the second text box is used to receive the interaction information input by the interaction object, and the second text box is in an editable state.
  • for example, the preset second text box 701 can be displayed, with the second text box in the editable state.
  • Step S207: receive the interaction information input in the second text box;
  • the interactive object can input interactive information in the second text box.
  • the interactive object inputs the interactive information of "la la la la la la la la" in the second text box.
  • if there is no interaction information in the interaction label, the second prompt information can be displayed in the interaction label; if there is interaction information in the interaction label, the interaction information can be directly displayed.
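The display rule just described (show the prompt when no interaction information exists yet, otherwise show the interaction information directly) amounts to a simple fallback. A minimal sketch, with field names assumed for illustration:

```python
def label_display(tag: dict) -> str:
    """Return what the interaction label shows: the interaction information
    if present, otherwise the second prompt information."""
    # An empty interaction_info string is falsy, so `or` falls back to the prompt.
    return tag["interaction_info"] or tag["second_prompt"]


empty_tag = {"interaction_info": "", "second_prompt": "click here to comment"}
filled_tag = {"interaction_info": "la la la", "second_prompt": "click here to comment"}
```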
  • Step S208: when the confirmation instruction is received, display the updated interaction label; the updated interaction label includes the interaction information;
  • specifically, the application client sends the interaction information to the preset server, and the preset server uses the interaction information to update the interaction label of the target video file, so as to obtain an updated target video file containing the updated interaction label.
  • after the preset server obtains the updated target video file, any user who initiates a playback request obtains the updated target video file.
  • when a user watches the updated target video file, the user can see the updated interaction label, which includes the interaction information.
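The server-side update step described above can be sketched as follows, under the assumption that the preset server keys stored video files by an identifier; all names here are illustrative, not from the disclosure:

```python
def update_interaction_label(video_store: dict, video_id: str, interaction_info: str) -> dict:
    """Write the interaction information into the stored video file's tag and
    return the updated record; later playback requests receive this version."""
    record = video_store[video_id]
    record["tag"]["interaction_info"] = interaction_info
    return record


# A minimal in-memory stand-in for the preset server's storage.
store = {"v1": {"tag": {"object_ids": ["Little Star"], "interaction_info": ""}}}
updated = update_interaction_label(store, "v1", "la la la")
```

Because the record is updated in place, the stored copy and the returned record are the same object, which mirrors the idea that all subsequent playback requests see the updated interaction label.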
  • Step S209: if the second identification information is different from the first identification information and is the same as the editor's third identification information, then when the target video file is played, the first identification information and the preset third prompt information are displayed in the interactive label;
  • if the second identification information is different from the first identification information and is the same as the editor's third identification information, it means that the player is not the interaction object but the editor. Then, when the target video file is played, the target video file is played in the playback interface and an interactive label is displayed at the same time, wherein the interactive label includes the first identification information and the preset third prompt information.
  • for example, the target video file can be played and the interactive tag displayed in the playback interface, where the interactive tag includes the first identification information "@Little Star" and the preset third prompt information "friend comments will be displayed here".
  • Step S2010: if the second identification information is different from both the first identification information and the editor's third identification information, then when the target video file is played, the first identification information and a data interface for viewing the related information corresponding to the first identification information are displayed in the interactive label;
  • in this case, the interactive label includes the first identification information and a data interface for viewing the related information corresponding to the first identification information, such as a data interface for viewing the personal homepage of the interaction object, so that the user can click the data interface to view the personal homepage of the interaction object.
  • for example, the target video file can be played and the interactive label displayed in the playback interface; the interactive label includes the first identification information "@Little Star" and the entry "view personal homepage" for the related information corresponding to the first identification information.
  • when the player clicks "view personal homepage", the personal homepage of "Little Star" can be displayed in the application client.
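The three viewer-dependent branches of steps S205, S209, and S2010 can be summarized as one decision function. This is a hedged sketch: the disclosure compares identification information for equality, and the membership test below is an assumed generalization for tags that mention multiple interaction objects.

```python
def label_content(player_id: str, object_ids: list, editor_id: str) -> str:
    """Choose what the interactive label shows to the current player.

    - player is the interaction object (second == first identification): comment prompt (S205)
    - player is the editor (second == third identification): waiting prompt (S209)
    - any other viewer: entry to the interaction object's personal homepage (S2010)
    """
    if player_id in object_ids:
        return "click here to comment"
    if player_id == editor_id:
        return "friend comments will be displayed here"
    return "view personal homepage"
```

For a tag where the editor "A" mentioned "B", the interaction object B sees the comment prompt, A sees the waiting prompt, and any third viewer sees the homepage entry.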
  • after the preset server stores the updated target video file, a play request initiated by any user to the preset server is a play request for the updated target video file, and the updated target video file can be obtained.
  • the playback request only needs to include the identification information of the video file.
  • the latest video file can be obtained according to the identification information in the playback request. That is to say, when the preset server receives the playback request, if the target video file is stored in the preset server, it delivers the target video file; if the updated target video file is stored, it delivers the updated target video file, without the user needing to distinguish between them.
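The delivery rule described here (the playback request carries only the video identifier, and the preset server always returns the most recently stored version) could be sketched as follows; the storage layout is an assumption for illustration:

```python
class PresetServer:
    """Illustrative stand-in for the preset server's publish/deliver behavior."""

    def __init__(self) -> None:
        self._files = {}  # video_id -> latest stored file

    def publish(self, video_id: str, video_file: str) -> None:
        # An update simply overwrites the stored version under the same identifier.
        self._files[video_id] = video_file

    def handle_playback_request(self, video_id: str) -> str:
        # Whatever is currently stored is delivered: original or updated,
        # the requesting user does not need to distinguish.
        return self._files[video_id]


server = PresetServer()
server.publish("v1", "target video file")
server.publish("v1", "updated target video file")
```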
  • after receiving the updated target video file delivered by the preset server, the application client can play the target video file in the playback interface and display the updated interactive label at the same time.
  • the target video file is played in the playback interface as shown in Figure 10, and the updated interactive label is displayed at the same time.
  • the updated interactive label includes the first identification information "@Little Star" and the interaction information "Little Star's comment: la la la la la la la la".
  • in the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, the preset second editing interface is displayed, and the second editing interface includes a preset interactive label; then the first identification information of the interaction object determined by the editor is received in the interactive label, and the interactive label containing the first identification information is obtained; when the editing completion instruction initiated by the editor is received, a target video file containing the interactive label is generated and published.
  • since the target video file contains the identification information of the interaction object, when the interaction object browses the target video file, the interaction object can comment directly in the interactive tag. This does not affect browsing of the video file while still allowing interaction, which improves the interactive experience of the interaction object.
  • moreover, the editor can also directly view the interaction information in the updated interactive tag without page-flipping operations, thereby improving the editor's interactive experience.
  • FIG. 11 is a schematic structural diagram of an apparatus for processing a video file provided by another embodiment of the present disclosure. As shown in FIG. 11 , the apparatus in this embodiment may include:
  • the first processing module 1101 is configured to display a preset second editing interface in the preset first editing interface for the original video file when a trigger instruction for the preset first interactive function is received; the second editing interface includes a preset interactive label;
  • the second processing module 1102 is configured to receive the first identification information of the interaction object corresponding to the first interaction function in the interaction label, and obtain the interaction label including the first identification information;
  • the third processing module 1103 is configured to generate a target video file containing an interactive tag when an editing completion instruction initiated by the editor is received, and publish the target video file.
  • the second editing interface includes a preset identification information list, and the identification information list includes identification information of at least one interactive object;
  • the second processing module is specifically used for:
  • a selection instruction for any identification information in the identification information list is received; when a generation instruction for generating an interactive label is received, an interactive label including any identification information is generated.
  • the interactive label includes a preset first text box
  • the second processing module is specifically used for:
  • the fourth processing module is used to obtain the target video file and the second identification information of the player when receiving the playback instruction for the target video file initiated by the player;
  • the fifth processing module is configured to display the first identification information and the preset second prompt information of the second interactive function in the interactive label when the target video file is played if the second identification information is the same as the first identification information ;
  • the sixth processing module is used to display the preset second text box when receiving the click instruction for the second prompt information initiated by the player;
  • a receiving module for receiving the interaction information input in the second text box
  • the seventh processing module is used for displaying the updated interaction label when receiving the confirmation instruction; the updated interaction label includes interaction information.
  • the eighth processing module is used to display, in the interactive label when the target video file is played, the first identification information and the preset third prompt information, if the second identification information is different from the first identification information and is the same as the editor's third identification information.
  • the ninth processing module is used to display, in the interactive label when the target video file is played, the first identification information and a data interface for viewing the related information corresponding to the first identification information, if the second identification information is different from both the first identification information and the editor's third identification information.
  • the trigger instruction is generated in the following manner:
  • the face recognition of the original video file is successful
  • the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.
  • the video file processing apparatus of this embodiment can execute the video file processing methods shown in the first embodiment and the second embodiment of the present disclosure, and the implementation principles thereof are similar, which will not be repeated here.
  • in the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, the preset second editing interface is displayed, and the second editing interface includes a preset interactive label; then the first identification information of the interaction object determined by the editor is received in the interactive label, and the interactive label containing the first identification information is obtained; when the editing completion instruction initiated by the editor is received, a target video file containing the interactive label is generated and published.
  • since the target video file contains the identification information of the interaction object, when the interaction object browses the target video file, the interaction object can comment directly in the interactive tag. This does not affect browsing of the video file while still allowing interaction, which improves the interactive experience of the interaction object.
  • moreover, the editor can also directly view the interaction information in the updated interactive tag without page-flipping operations, thereby improving the editor's interactive experience.
  • Referring to FIG. 12, it shows a schematic structural diagram of an electronic device 1200 suitable for implementing an embodiment of the present disclosure.
  • the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 12 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device includes: a memory and a processor, where the processor here may be referred to as the processing device 1201 described below, and the memory may include the read-only memory (ROM) 1202, the random access memory (RAM) 1203, and the storage device 1208 described below.
  • the electronic device 1200 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1201, which may perform various appropriate actions and processes according to a program stored in the read-only memory (ROM) 1202 or a program loaded from the storage device 1208 into the random access memory (RAM) 1203.
  • in the RAM 1203, various programs and data required for the operation of the electronic device 1200 are also stored.
  • the processing device 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204.
  • An input/output (I/O) interface 1205 is also connected to the bus 1204.
  • the following devices may be connected to the I/O interface 1205: an input device 1206 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 1207 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 1208 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 1209. The communication device 1209 may allow the electronic device 1200 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 12 shows an electronic device 1200 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 1209, or from the storage device 1208, or from the ROM 1202.
  • when the computer program is executed by the processing device 1201, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the client and server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: in the preset first editing interface for the original video file, when receiving the trigger instruction for the preset first interactive function, display a preset second editing interface, the second editing interface including a preset interaction label; receive, in the interaction label, the first identification information of the interaction object determined by the editor, and obtain an interactive label including the first identification information; when receiving an editing completion instruction initiated by the editor, generate a target video file including the interactive label, and publish the target video file.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
  • the modules or units involved in the embodiments of the present disclosure may be implemented in software or hardware. Among them, the name of the module or unit does not constitute a limitation of the unit itself under certain circumstances.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logical Devices (CPLDs) and more.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides a video file processing method, including:
  • a preset second editing interface is displayed; the second editing interface includes the preset interaction label;
  • the second editing interface includes a preset identification information list, and the identification information list includes identification information of at least one interactive object;
  • the receiving, in the interaction label, the first identification information of the interaction object determined by the editor, and obtaining the interaction label including the first identification information includes:
  • the interactive label includes a preset first text box
  • the receiving, in the interaction label, the first identification information of the interaction object determined by the editor, and obtaining the interaction label including the first identification information includes:
  • an interaction label including the input identification information is generated.
  • if the second identification information is the same as the first identification information, then when the target video file is played, the first identification information and the preset second prompt information of the second interactive function are displayed in the interactive label;
  • the updated interaction label is displayed; the updated interaction label includes the interaction information.
  • the second identification information is different from the first identification information and is the same as the editor's third identification information, then when the target video file is played, the first identification information is displayed in the interactive label. identification information, and preset third prompt information.
  • the first identification information and a data interface for viewing the related information corresponding to the first identification information are displayed in the interaction label when the target video file is played.
  • the trigger instruction is generated in the following manner:
  • the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.
  • Example 2 provides the apparatus of Example 1, including:
  • the first processing module is configured to display a preset second editing interface in the preset first editing interface for the original video file when a trigger instruction for the preset first interactive function is received; the second editing interface includes a preset interactive label;
  • a second processing module configured to receive the first identification information of the interaction object corresponding to the first interaction function in the interaction label, and obtain the interaction label including the first identification information
  • the third processing module is configured to generate a target video file including the interaction tag when receiving the editing completion instruction initiated by the editor, and publish the target video file.
  • the second editing interface includes a preset identification information list, and the identification information list includes identification information of at least one interactive object;
  • the second processing module is specifically used for:
  • a selection instruction for any identification information in the identification information list is received; when a generation instruction for generating an interactive label is received, an interactive label including the any identification information is generated.
  • the interactive label includes a preset first text box
  • the second processing module is specifically used for:
  • a fourth processing module configured to obtain the target video file and the second identification information of the player when receiving a playback instruction for the target video file initiated by the player;
  • a fifth processing module configured to display, in the interactive label when the target video file is played, the first identification information and the preset second prompt information of the second interactive function, if the second identification information is the same as the first identification information;
  • a sixth processing module configured to display a preset second text box when receiving a click instruction initiated by the player for the second prompt information
  • a receiving module configured to receive the interaction information input in the second text box
  • the seventh processing module is configured to display the updated interaction label when receiving the confirmation instruction; the updated interaction label includes the interaction information.
  • the eighth processing module is used for, if the second identification information is different from the first identification information, and is the same as the editor's third identification information, when playing the target video file, in the The first identification information and the preset third prompt information are displayed in the interactive label.
  • the ninth processing module is configured to, if the second identification information is different from both the first identification information and the editor's third identification information, display in the interactive label, when the target video file is played, the first identification information and a data interface for viewing the related information corresponding to the first identification information.
  • the trigger instruction is generated in the following manner:
  • the face recognition of the original video file is successful
  • the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.

Abstract

The present disclosure provides a video file processing method and apparatus, an electronic device, and a computer-readable storage medium, relating to the field of video processing. The method includes: in a preset first editing interface for an original video file, when a trigger instruction for a preset first interactive function is received, displaying a preset second editing interface, the second editing interface including a preset interactive label; receiving, in the interactive label, first identification information of an interaction object determined by an editor, to obtain an interactive label containing the first identification information; and, when an editing completion instruction initiated by the editor is received, generating a target video file containing the interactive label and publishing the target video file. The present disclosure strengthens the sense of interaction when a user interacts with friends, thereby improving social penetration and the interaction feedback rate between friends.

Description

Video file processing method and apparatus, electronic device, and computer storage medium
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202010943738.7, filed on September 9, 2020 and entitled "Video file processing method and apparatus, electronic device, and computer-readable storage medium", the entire content of which is incorporated herein by reference.
Technical field
The present disclosure relates to the technical field of video processing, and in particular, to a video file processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In daily life, users can watch videos in video applications, and such applications usually provide a comment area or message area. In addition to posting comments or messages there, users can also interact with other users by means of @.
For example, user A @ user B in the comment area and leaves a message for user B. The system prompts user B, and user B can jump to the comment area to view the message according to the prompt; alternatively, without user B viewing the comment area, the system pushes user A's message to user B separately.
However, the above interaction method has the following problems:
1) interaction between users needs to take place in the comment area, so the sense of interaction is poor;
2) there is no interaction between users.
Therefore, a video processing method is urgently needed to solve the problem of a poor sense of interaction between users when watching videos.
Summary
This summary is provided to introduce concepts in a brief form, which will be described in detail in the detailed description below. This summary is not intended to identify key or essential features of the claimed technical solutions, nor is it intended to limit the scope of the claimed technical solutions.
The present disclosure provides a video file processing method and apparatus, an electronic device, and a computer-readable storage medium, which can solve the problem of a poor sense of interaction between users when watching videos. The technical solutions are as follows:
In a first aspect, a video file processing method is provided, the method including:
in a preset first editing interface for an original video file, when a trigger instruction for a preset first interactive function is received, displaying a preset second editing interface; the second editing interface includes a preset interactive label;
receiving, in the interactive label, first identification information of an interaction object determined by an editor, to obtain an interactive label containing the first identification information;
when an editing completion instruction initiated by the editor is received, generating a target video file containing the interactive label, and publishing the target video file.
In a second aspect, a video file processing apparatus is provided, the apparatus including:
a first processing module configured to display a preset second editing interface in a preset first editing interface for an original video file when a trigger instruction for a preset first interactive function is received; the second editing interface includes a preset interactive label;
a second processing module configured to receive, in the interactive label, first identification information of an interaction object determined by an editor, to obtain an interactive label containing the first identification information;
a third processing module configured to generate a target video file containing the interactive label when an editing completion instruction initiated by the editor is received, and publish the target video file.
In a third aspect, an electronic device is provided, including a processor, a memory, and a bus;
the bus is configured to connect the processor and the memory;
the memory is configured to store operation instructions;
the processor is configured to, by invoking the operation instructions, perform operations corresponding to the video file processing method shown in the first aspect of the present disclosure.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program implements the video file processing method shown in the first aspect of the present disclosure.
The beneficial effects brought by the technical solutions provided by the present disclosure are as follows:
In the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, a preset second editing interface is displayed, the second editing interface including a preset interactive label; then first identification information of an interaction object determined by the editor is received in the interactive label, to obtain an interactive label containing the first identification information; when an editing completion instruction initiated by the editor is received, a target video file containing the interactive label is generated, and the target video file is published. In this way, while editing a video file, the editor can interact with the interaction object in the video file in the form of an interactive label. Compared with the traditional way of interacting in the comment area, this strengthens the sense of interaction when a user interacts with friends, thereby improving social penetration and the interaction feedback rate between friends.
Brief description of the drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of a video file processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a video file processing method provided by another embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the first editing interface in the present disclosure;
FIGS. 4A to 4C are first schematic diagrams of interfaces for editing an interactive label in the second editing interface in the present disclosure;
FIGS. 5A to 5C are second schematic diagrams of interfaces for editing an interactive label in the second editing interface in the present disclosure;
FIG. 6 is a schematic diagram of the playback interface when the interaction object plays the target video file in the present disclosure;
FIG. 7 is a schematic diagram of the playback interface after the interaction object clicks the second prompt information in the present disclosure;
FIG. 8 is a schematic diagram of the playback interface when the editor plays the target video file in the present disclosure;
FIG. 9 is a schematic diagram of the playback interface when another user plays the target video file in the present disclosure;
FIG. 10 is a schematic diagram of the playback interface when any user plays the updated target video file in the present disclosure;
FIG. 11 is a schematic structural diagram of a video file processing apparatus provided by yet another embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of an electronic device for video file processing provided by yet another embodiment of the present disclosure.
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
需要注意，本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分，并非用于限定这些装置、模块或单元一定为不同的装置、模块或单元，也并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
本公开提供的视频文件的处理方法、装置、电子设备和计算机可读存储介质,旨在解决现有技术的如上技术问题。
下面以具体的实施例对本公开的技术方案以及本公开的技术方案如何解决上述技术问题进行详细说明。下面这几个具体的实施例可以相互结合，对于相同或相似的概念或过程可能在某些实施例中不再赘述。下面将结合附图，对本公开的实施例进行描述。
在一个实施例中提供了一种视频文件的处理方法,如图1所示,该方法包括:
步骤S101,在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面;第二编辑界面包括预设的交互标签;
在本公开实施例中,终端中安装有用于播放视频文件、编辑视频文件的应用程序客户端,相应地,应用程序客户端预设有用于播放视频文件的至少一个播放界面,以及用于编辑视频文件的至少一个编辑界面。
需要说明的是,播放视频文件和编辑视频文件可以是相同的应用程序客户端,也可以是不同的应用程序客户端,在实际应用中可以根据实际需求进行设置,本公开实施例对此不作限制。
进一步，原始视频文件可以是编辑者拍摄完成的视频文件。在实际应用中，编辑者可以在应用程序客户端的各个编辑界面中对原始视频文件进行编辑，得到编辑完成的视频文件，然后再将编辑完成的视频文件上传至服务器，从而与他人分享；也可以不通过编辑，直接将原始视频文件上传至服务器，从而与他人分享。
具体而言,编辑者打开预设的第一编辑界面,然后导入原始视频文件并对原始视频文件进行编辑。其中,交互功能可以是“@”功能,比如,编辑者@自己的好友。
当应用程序客户端接收到针对第一交互功能的触发指令时,即可展示预设的第二编辑界面,第二编辑界面包括预设的交互标签;其中,编辑者可以在交互标签中编辑交互对象的标识信息。
步骤S102,在交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含第一标识信息的交互标签;
在第二编辑界面中,编辑者可以确定出交互对象的第一标识信息,从而得到包含第一标识信息的交互标签。比如,当交互功能为@好友时,那么第一交互功能对应的交互对象为编辑者A@的好友B,第一标识信息为B的ID(Identity document,身份标识号),从而得到包含B的ID的交互标签,该交互标签可以在播放视频文件时展示在视频图像中。
步骤S103,当接收到编辑者发起的编辑完成指令时,生成包含交互标签的目标视频文件,并发布所述目标视频文件。
编辑界面中可以预设用于生成目标视频文件的虚拟按钮,当编辑者点击该虚拟按钮,触发了编辑完成指令时,应用程序客户端即可基于编辑完成指令生成包含交互标签的目标视频文件,并发布该目标视频文件。
在本公开实施例中,在针对原始视频文件的预设的第一编辑界面中,当接收到针对第一交互功能的触发指令时,展示预设的第二编辑界面,第二编辑界面包括预设的交互标签;然后在交互标签中接收第一交互功能对应的交互对象的第一标识信息,得到包含第一标识信息的交互标签;当接收到编辑者发起的编辑完成指令时,生成包含交互标签的目标视频文件,并发布目标视频文件。这样,在编辑者编辑视频文件的过程中,编辑者可以在视频文件中通过交互标签的形式与交互对象进行互动,相对于传统的在评论区进行互动的方式,强化了用户与好友交互时的互动感,从而提升了好友间的社交渗透和互动反馈率。
在另一个实施例中提供了一种视频文件的处理方法,如图2所示,该方法包括:
步骤S201,在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面;第二编辑界面包括预设的交互标签;
在本公开实施例中,终端中安装有用于播放视频文件、编辑视频文件的应用程序客户端,相应地,应用程序客户端预设有用于播放视频文件的至少一个播放界面,以及用于编辑视频文件的至少一个编辑界面。其中,终端可以具有如下特点:
(1)在硬件体系上,设备具备中央处理器、存储器、输入部件和输出部件,也就是说,设备往往是具备通信功能的微型计算机设备。另外,还可以具有多种输入方式,诸如键盘、鼠标、触摸屏、送话器和摄像头等,并可以根据需要进行调整输入。同时,设备往往具有多种输出方式,如受话器、显示屏等,也可以根据需要进行调整;
(2)在软件体系上,设备必须具备操作系统,如Windows Mobile、Symbian、Palm、Android、iOS等。同时,这些操作系统越来越开放,基于这些开放的操作系统平台开发的个性化应用程序层出不穷,如通信簿、日程表、记事本、计算器以及各类游戏等,极大程度地满足了个性化用户的需求;
(3)在通信能力上，设备具有灵活的接入方式和高带宽通信性能，并且能根据所选择的业务和所处的环境，自动调整所选的通信方式，从而方便用户使用。设备可以支持GSM(Global System for Mobile Communication，全球移动通信系统)、WCDMA(Wideband Code Division Multiple Access，宽带码分多址)、CDMA2000(Code Division Multiple Access，码分多址)、TDSCDMA(Time Division-Synchronous Code Division Multiple Access，时分同步码分多址)、Wi-Fi(Wireless-Fidelity，无线保真)以及WiMAX(Worldwide Interoperability for Microwave Access，全球微波互联接入)等，从而适应多种制式网络，不仅支持语音业务，更支持多种无线数据业务；
(4)在功能使用上,设备更加注重人性化、个性化和多功能化。随着计算机技术的发展,设备从“以设备为中心”的模式进入“以人为中心”的模式,集成了嵌入式计算、控制技术、人工智能技术以及生物认证技术等,充分体现了以人为本的宗旨。由于软件技术的发展,设备可以根据个人需求调整设置,更加个性化。同时,设备本身集成了众多软件和硬件,功能也越来越强大。
需要说明的是,播放视频文件和编辑视频文件可以是相同的应用程序客户端,也可以是不同的应用程序客户端,在实际应用中可以根据实际需求进行设置,本公开实施例对此不作限制。
进一步，原始视频文件可以是编辑者拍摄完成的视频文件。在实际应用中，编辑者可以在应用程序客户端的各个编辑界面中对原始视频文件进行编辑，得到编辑完成的视频文件，然后再将编辑完成的视频文件上传至服务器，从而与他人分享；也可以不通过编辑，直接将原始视频文件上传至服务器，从而与他人分享。
具体而言,编辑者打开预设的第一编辑界面,然后导入原始视频文件并对原始视频文件进行编辑。其中,交互功能可以是“@”功能,比如,编辑者@自己的好友。
当编辑者点击了虚拟按钮302即发起了针对第一交互功能的触发指令,应用程序客户端接收到触发指令后即可展示预设的第二编辑界面。
在本公开一种优选实施例中,触发指令通过如下方式生成:
在第一编辑界面中对原始视频文件进行人脸识别成功;
或,
编辑者触发第一编辑界面中与第一交互功能对应的虚拟按钮。
具体而言,在编辑的过程中,应用程序客户端可以对原始视频文件进行人脸识别,如果人脸识别成功,那么就可以生成触发指令;或者,第一编辑界面中预设有第一交互功能对应的虚拟按钮,当编辑者点击了该虚拟按钮时,应用程序客户端即可生成触发指令。
其中，应用程序客户端对原始视频文件进行人脸识别可以是先播放原始视频文件，然后对播放的视频图像进行人脸识别；或者，应用程序客户端可以在后台播放原始视频文件并进行人脸识别。当然，对视频文件进行人脸识别的其它方法也是适用于本公开实施例的，本公开实施例对此不作限制。
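需要说明的是，上述两种触发方式可以用如下示意代码帮助理解。该代码仅为便于说明给出的极简示例，其中的函数名与参数均为本文假设，并非本公开的实际实现：

```python
def should_show_second_edit_ui(face_detected: bool, button_clicked: bool) -> bool:
    """人脸识别成功，或编辑者点击第一交互功能对应的虚拟按钮时，
    生成触发指令，进而展示预设的第二编辑界面。"""
    return face_detected or button_clicked
```

即只要两个条件之一满足，客户端就生成针对第一交互功能的触发指令。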
比如,编辑者在如图3所示的第一编辑界面中对原始视频进行编辑,并且应用程序客户端识别到当前的视频图像中存在人像,那么就可以在第一编辑界面中展示第一交互功能的第一提示信息301,其中,第一交互功能在第一编辑界面中可以对应虚拟按钮302。当然,第一编辑界面中还可以预设其它虚拟按钮,在实际应用中可以根据实际需求进行设置,本公开实施例对此不作限制。
步骤S202,在交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含第一标识信息的交互标签;
在第二编辑界面中,编辑者可以确定出交互对象的第一标识信息,从而得到包含第一标识信息的交互标签。比如,当交互功能为@好友时,那么第一交互功能对应的交互对象为编辑者A@的好友B,第一标识信息为B的ID(Identity document,身份标识号),从而得到包含B的ID的交互标签,该交互标签可以在播放视频文件时展示在视频图像中。
在本公开一种优选实施例中,第二编辑界面包括预设的标识信息列表,标识信息列表包括至少一个交互对象的标识信息;
在交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含第一标识信息的交互标签,包括:
接收针对标识信息列表中任一标识信息的选择指令;
当接收到生成交互标签的生成指令时,生成包含任一标识信息的交互标签。
具体而言，第二编辑界面中可以包括预设的交互标签和预设的标识信息列表，该标识信息列表中包括至少一个交互对象的标识信息。应用程序客户端在展示第二编辑界面时，可以在第二编辑界面中展示预设的交互标签和预设的标识信息列表。当编辑者从各个标识信息中选择任一标识信息时，即发起了针对该任一标识信息的选择指令，应用程序客户端在接收到选择指令后，将选择指令对应的任一标识信息输入预设的交互标签中，当编辑者确定生成交互标签时，生成包含该任一标识信息的交互标签。
比如,在如图4A所示的第二编辑界面中,展示了预设的交互标签401和标识信息列表402,其中,交互标签中预设了交互功能的交互指令“@”。当编辑者选择了标识信息列表中的“小星星”时,应用程序客户端将“小星星”输入401,如图4B所示。当编辑者点击了右上角的“完成”时即发起了生成交互标签的生成指令,应用程序客户端在接收到生成指令后,生成包含“小星星”的交互标签,如图4C所示。
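从标识信息列表中选择标识信息并生成交互标签的过程，可以用如下示意代码理解。其中 InteractionTag、make_tag_from_selection 等名称均为本文为说明而假设，并非本公开的实际实现：

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class InteractionTag:
    first_id: str                  # 交互对象的第一标识信息
    comment: Optional[str] = None  # 交互信息（好友评论），初始为空

    def display_text(self) -> str:
        # 交互标签在视频图像中展示的文本，如 "@小星星"
        return f"@{self.first_id}"


def make_tag_from_selection(id_list: List[str], index: int) -> InteractionTag:
    """接收针对标识信息列表中任一标识信息的选择指令，
    生成包含该标识信息的交互标签。"""
    return InteractionTag(first_id=id_list[index])
```

例如，编辑者在列表中选中“小星星”后，客户端即可据此生成展示为“@小星星”的交互标签。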
需要说明的是,第二编辑界面中的标识信息列表可以是编辑者的好友列表,也可以是编辑者最近联系的好友,或者还可以是其它类型的标识信息列表,在实际应用中可以根据实际需求进行设置,本公开实施例对此不作限制。
进一步,在生成交互标签之后,编辑者还可以对交互标签的样式进行更换。比如,在如图4C所示中的交互标签,当编辑者点击交互标签即可更换交互标签的样式。当然,在实际应用中还可以通过其它方式来更换交互标签的样式,本公开实施例对此也不作限制。
在本公开一种优选实施例中,交互标签包含预设的第一文本框;
在交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含第一标识信息的交互标签,包括:
接收在第一文本框中输入的标识信息;
当接收到生成交互标签的生成指令时,生成包含输入的标识信息的交互标签。
具体而言,第二编辑界面也可以包含预设的第一文本框。应用程序客户端在展示第二编辑界面时,可以在第二编辑界面中展示预设的第一文本框。编辑者可以直接在第一文本框中输入交互功能的指令“@”和交互对象的标识信息,然后确定生成交互标签即可生成包含该任一标识信息的交互标签。
比如，如图5A所示的第二编辑界面中，展示了预设的第一文本框501，然后编辑者可以在第一文本框中输入“@小星星”，如图5B所示，当编辑者点击了右上角的“完成”时即发起了生成交互标签的生成指令，应用程序客户端在接收到生成指令后，生成包含“小星星”的交互标签，如图4C所示。
或者,编辑者在第一文本框中输入交互功能的指令(比如“@”)后,展示预设的标识信息列表,如图5C所示。这样编辑者就可以直接选择交互对象,不用输入交互对象的标识信息,为编辑者提供了便利。
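第一文本框的输入解析可以示意如下：输入以“@”开头时提取其后的标识信息，仅输入“@”时提示客户端展示标识信息列表。该函数为本文假设的示意性示例，并非本公开的实际实现：

```python
from typing import Optional


def parse_mention(text: str) -> Optional[str]:
    """解析第一文本框的输入。
    返回 None 表示输入不含交互指令“@”；
    返回空串表示编辑者刚输入“@”，此时客户端可展示标识信息列表（图5C）；
    否则返回“@”之后的标识信息。"""
    if not text.startswith("@"):
        return None
    return text[1:]
```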
需要说明的是,交互对象与人脸识别对应的对象可以是相同的,也可以是不同的。比如,原始视频文件中人脸识别成功的对象是A,编辑者@的交互对象可以是A,也可以是B。
而且,交互标签中可以包含一个交互对象的标识信息,也可以包含多个交互对象的标识信息,比如,编辑者同时@了A、B、C三个交互对象。在实际应用中可以根据实际需求进行设置,本公开实施例对此不作限制。
步骤S203,当接收到编辑者发起的编辑完成指令时,生成包含交互标签的目标视频文件,并发布目标视频文件;
具体而言,编辑界面中可以预设用于生成目标视频文件的虚拟按钮,当编辑者点击该虚拟按钮,触发了编辑完成指令时,应用程序客户端即可基于编辑完成指令生成包含交互标签的目标视频文件,并发布该目标视频文件。比如,当编辑者点击了如图4C所示的右下角的“确定”之后,即触发了编辑完成指令,应用程序客户端即可基于该编辑完成指令生成包含交互标签的目标视频文件。
应用程序客户端生成了目标视频文件之后，即可将目标视频文件上传至预设服务器进行发布。这样，任一用户（包括目标视频文件的编辑者）都可以向预设服务器发送播放该目标视频文件的播放请求，预设服务器接收到该播放请求后下发目标视频文件，从而实现对该目标视频文件的分享。
步骤S204,当接收到播放者发起的针对目标视频文件的播放指令时,获取目标视频文件和播放者的第二标识信息;
具体而言，播放者通过应用程序客户端的播放界面发起播放目标视频文件的播放指令时，应用程序客户端可以基于该播放指令生成播放请求，并将该播放请求发送至预设服务器从而获取目标视频文件，同时获取播放者的第二标识信息。
在实际应用中，用户在使用应用程序客户端的时候，都有一个对应的标识信息，该标识信息可以是应用程序客户端给用户临时分配的，也可以是用户自己通过注册等方式确定的。所以，在本公开实施例中，播放者通过应用程序客户端播放目标视频文件时，应用程序客户端除了可以从预设服务器获取目标视频文件，还可以获取播放者的第二标识信息。
步骤S205,若第二标识信息与第一标识信息相同,则在播放目标视频文件时,在交互标签中展示第一标识信息,以及预设的第二交互功能的第二提示信息;
具体而言,如果获取到的第二标识信息与上述的第一标识信息是相同的,则表示播放者就是上述的交互对象,那么在播放目标视频文件时,在播放界面中播放目标视频文件,同时展示交互标签,该交互标签包括交互对象的第一标识信息,以及预设的第二交互功能的第二提示信息;其中,第二交互功能可以是“评论”功能,第二提示信息可以是提示交互对象进行评论的信息。
比如,在如图6所示的播放界面中,通过上述方式识别到播放者就是“小星星”,那么在播放界面中可以播放目标视频文件和展示交互标签,交互标签中包括第一标识信息“@小星星”和第二提示信息“点击这里评论”。
步骤S206,当接收到播放者发起的针对第二提示信息的点击指令时,展示预设的第二文本框;
当播放者点击了该交互标签时,即发起了点击指令,应用程序客户端在接收到该点击指令后即可展示预设的第二文本框,该第二文本框用于接收交互对象输入的交互信息,同时该第二文本框处于可编辑状态。
比如,如图7所示的播放界面中,当交互对象点击了如图6所示中的第二提示信息后,即可展示预设的第二文本框701,该第二文本框处于可编辑状态。
步骤S207,接收在第二文本框中输入的交互信息;
在展示了第二文本框之后,交互对象即可在第二文本框中输入交互信息。比如,交互对象在第二文本框中输入“啦啦啦啦啦啦啦啦啦”的交互信息。
在实际应用中，如果交互标签中不存在交互信息，那么就可以在交互标签中展示第二提示信息；如果交互标签中存在交互信息，那么直接展示交互信息即可。
步骤S208,当接收到确认指令时,展示更新后的交互标签;所述更新后的交互标签包括所述交互信息;
交互对象在输入完成交互信息并触发用于发表交互信息的确认指令时，应用程序客户端将交互信息发送至预设服务器，预设服务器采用该交互信息更新目标视频文件的交互标签，得到更新后的交互标签，从而得到包含更新后的交互标签的更新后的目标视频文件。
预设服务器更新得到更新后的目标视频文件后,任一用户发起播放请求获取到的就是更新后的目标视频文件了,用户观看更新后的目标视频文件时,就可以看到更新后的交互标签,更新后的交互标签包括交互信息。
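预设服务器用交互信息更新交互标签的逻辑可示意如下。该纯函数示例及其中的字段名均为本文假设，并非本公开的实际实现：

```python
def update_tag_with_comment(tag: dict, comment: str) -> dict:
    """采用交互信息更新目标视频文件的交互标签，
    返回更新后的交互标签（不修改原标签，便于保留历史版本）。"""
    updated = dict(tag)          # 复制原交互标签
    updated["comment"] = comment  # 写入交互信息（好友评论）
    return updated
```

更新完成后，后续任一播放请求获取到的都是包含该交互信息的更新后的交互标签。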
步骤S209,若第二标识信息与第一标识信息不相同,且与编辑者的第三标识信息相同,则在播放目标视频文件时,在交互标签中展示第一标识信息,以及预设的第三提示信息;
具体而言,如果第二标识信息与第一标识信息不相同,且与编辑者的第三标识信息相同,则表示播放者不是交互对象,而是编辑者,那么在播放目标视频文件时,在播放界面中播放目标视频文件,同时展示交互标签即可,其中,交互标签中包括第一标识信息和预设的第三提示信息。
比如,在如图8所示的播放界面中,通过上述方式识别到播放者不是“小星星”,而是编辑者,那么在播放界面中可以播放目标视频文件和展示交互标签,交互标签中包括第一标识信息“@小星星”和预设的第三提示信息“好友评论将展示在这里”。
步骤S2010,若第二标识信息与第一标识信息、编辑者的第三标识信息均不相同,则在播放目标视频文件时,在交互标签中展示第一标识信息,以及用于查看第一标识信息对应的相关信息的数据接口。
具体而言,如果第二标识信息与第一标识信息不相同,且与编辑者的第三标识信息也不相同,则表示播放者既不是交互对象,也不是编辑者,那么在播放目标视频文件时,在播放界面中播放目标视频文件,同时展示交互标签即可,其中,交互标签中包括第一标识信息和用于查看第一标识信息对应的相关信息的数据接口,比如查看交互对象的个人主页的数据接口等等,这样用户点击该数据接口即可查看交互对象的个人主页。
比如,在如图9所示的播放界面中,通过上述方式识别到播放者既不是“小星星”,也不是编辑者,那么在播放界面中可以播放目标视频文件和展示交互标签,交互标签中包括第一标识信息“@小星星”和第一标识信息对应的相关信息“查看个人主页”。当播放者点击“查看个人主页”时,即可在应用程序客户端中展示“小星星”的个人主页。
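步骤S205、S209、S2010中按播放者身份分支展示的逻辑可示意如下。返回值字符串仅为本文说明用的假设标记，并非本公开的实际实现：

```python
def label_view_for(player_id: str, interactee_id: str, editor_id: str) -> str:
    """根据第二标识信息（播放者）与第一标识信息（交互对象）、
    第三标识信息（编辑者）的比较结果，决定交互标签中
    除第一标识信息外还展示的内容。"""
    if player_id == interactee_id:
        return "second_prompt"  # 第二提示信息，如“点击这里评论”
    if player_id == editor_id:
        return "third_prompt"   # 第三提示信息，如“好友评论将展示在这里”
    return "profile_link"       # 查看相关信息的数据接口，如个人主页入口
```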
具体而言,当预设服务器对目标视频文件进行更新,得到更新后的目标视频文件后,任一用户向预设服务器发起的播放请求都是针对更新后的目标视频文件的播放请求。用户通过应用程序客户端发起播放请求后,即可获取更新后的目标视频文件。
需要说明的是，用户并不需要分辨预设服务器中存储的是目标视频文件还是更新后的目标视频文件，播放请求中包括视频文件的标识信息即可，预设服务器在接收到播放请求后，根据播放请求中的标识信息获取最新的视频文件即可，也就是说，预设服务器在接收到播放请求时，如果预设服务器中存储的是目标视频文件，那么就下发目标视频文件；如果存储的是更新后的目标视频文件，那么就下发更新后的目标视频文件，不需要用户来分辨。本公开实施例仅仅只是为了方便理解进行的解释说明，并不是对其进行限制。
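预设服务器“按标识信息下发最新视频文件”的行为，可示意为一个简单的键值存储。其中的类名与方法名均为本文假设，并非本公开的实际实现：

```python
class VideoStore:
    """按视频的标识信息保存最新版本的视频文件；
    播放请求只需携带标识信息，服务器总是下发当前最新的文件
    （目标视频文件或其更新后的版本）。"""

    def __init__(self):
        self._files = {}

    def publish(self, video_id: str, video: dict) -> None:
        # 发布或更新：同一标识信息的新版本覆盖旧版本
        self._files[video_id] = video

    def fetch(self, video_id: str) -> dict:
        # 处理播放请求：返回该标识信息对应的最新视频文件
        return self._files[video_id]
```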
应用程序客户端在接收到预设服务器下发的更新后的目标视频文件后,即可在播放界面中播放目标视频文件,同时展示更新后的交互标签了。
比如,在如图10的播放界面中播放目标视频文件,同时展示更新后的交互标签,更新后的交互标签包括第一标识信息“@小星星”以及交互信息“小星星的评论:啦啦啦啦啦啦啦啦啦”。
在本公开实施例中,在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面,第二编辑界面包括预设的交互标签;然后在交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含第一标识信息的交互标签;当接收到编辑者发起的编辑完成指令时,生成包含交互标签的目标视频文件,并发布目标视频文件。这样,在编辑者编辑视频文件的过程中,编辑者可以在视频文件中通过交互标签的形式与交互对象进行互动,相对于传统的在评论区进行互动的方式,强化了用户与好友交互时的互动感,从而提升了好友间的社交渗透和互动反馈率。
进一步,由于目标视频文件中包含交互对象的标识信息,所以当交互对象在浏览该目标视频文件时,可以直接在交互标签中进行评论,既不影响浏览视频文件,又可以进行互动,提高了交互对象的交互体验。
而且,其它用户可以在浏览视频文件的时候直接查看交互对象的相关信息和评论信息,不需要通过搜索、翻阅等操作来查找交互对象相关信息,从而提高了其它用户的交互体验。
同时,编辑者也可以直接从更新后的交互标签中查看交互信息,不需要翻阅等操作,从而也提高了编辑者的交互体验。
图11为本公开又一实施例提供的一种视频文件的处理装置的结构示意图,如图11所示,本实施例的装置可以包括:
第一处理模块1101,用于在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面;第二编辑界面包括预设的交互标签;
第二处理模块1102,用于在交互标签中接收第一交互功能对应的交互对象的第一标识信息,得到包含第一标识信息的交互标签;
第三处理模块1103,用于当接收到编辑者发起的编辑完成指令时,生成包含交互标签的目标视频文件,并发布目标视频文件。
在本公开一种优选实施例中,第二编辑界面包括预设的标识信息列表,标识信息列表包括至少一个交互对象的标识信息;
第二处理模块具体用于:
接收针对标识信息列表中任一标识信息的选择指令;当接收到生成交互标签的生成指令时,生成包含任一标识信息的交互标签。
在本公开一种优选实施例中,交互标签包含预设的第一文本框;
第二处理模块具体用于:
接收在第一文本框中输入的标识信息;当接收到生成交互标签的生成指令时,生成包含输入的标识信息的交互标签。
在本公开一种优选实施例中,还包括:
第四处理模块,用于当接收到播放者发起的针对目标视频文件的播放指令时,获取目标视频文件和播放者的第二标识信息;
第五处理模块,用于若第二标识信息与第一标识信息相同,则在播放目标视频文件时,在交互标签中展示第一标识信息,以及预设的第二交互功能的第二提示信息;
第六处理模块,用于当接收到播放者发起的针对第二提示信息的点击指令时,展示预设的第二文本框;
接收模块,用于接收在第二文本框中输入的交互信息;
第七处理模块,用于当接收到确认指令时,展示更新后的交互标签;更新后的交互标签包括交互信息。
在本公开一种优选实施例中,还包括:
第八处理模块,用于若第二标识信息与第一标识信息不相同,且与编辑者的第三标识信息相同,则在播放目标视频文件时,在交互标签中展示第一标识信息,以及预设的第三提示信息。
在本公开一种优选实施例中,还包括:
第九处理模块,用于若第二标识信息与第一标识信息、编辑者的第三标识信息均不相同,则在播放目标视频文件时,在交互标签中展示第一标识信息,以及用于查看第一标识信息对应的相关信息的数据接口。
在本公开一种优选实施例中,触发指令通过如下方式生成:
在第一编辑界面中对原始视频文件进行人脸识别成功;
或,
编辑者触发第一编辑界面中与第一交互功能对应的虚拟按钮。
本实施例的视频文件的处理装置可执行本公开第一个实施例、第二个实施例所示的视频文件的处理方法,其实现原理相类似,此处不再赘述。
在本公开实施例中,在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面,第二编辑界面包括预设的交互标签;然后在交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含第一标识信息的交互标签;当接收到编辑者发起的编辑完成指令时,生成包含交互标签的目标视频文件,并发布目标视频文件。这样,在编辑者编辑视频文件的过程中,编辑者可以在视频文件中通过交互标签的形式与交互对象进行互动,相对于传统的在评论区进行互动的方式,强化了用户与好友交互时的互动感,从而提升了好友间的社交渗透和互动反馈率。
进一步,由于目标视频文件中包含交互对象的标识信息,所以当交互对象在浏览该目标视频文件时,可以直接在交互标签中进行评论,既不影响浏览视频文件,又可以进行互动,提高了交互对象的交互体验。
而且,其它用户可以在浏览视频文件的时候直接查看交互对象的相关信息和评论信息,不需要通过搜索、翻阅等操作来查找交互对象相关信息,从而提高了其它用户的交互体验。
同时,编辑者也可以直接从更新后的交互标签中查看交互信息,不需要翻阅等操作,从而也提高了编辑者的交互体验。
下面参考图12,其示出了适于用来实现本公开实施例的电子设备1200的结构示意图。本公开实施例中的电子设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图12示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
电子设备包括:存储器以及处理器,其中,这里的处理器可以称为下文所述的处理装置1201,存储器可以包括下文中的只读存储器(ROM)1202、随机访问存储器(RAM)1203以及存储装置1208中的至少一项,具体如下所示:如图12所示,电子设备1200可以包括处理装置(例如中央处理器、图形处理器等)1201,其可以根据存储在只读存储器(ROM)1202中的程序或者从存储装置1208加载到随机访问存储器(RAM)1203中的程序而执行各种适当的动作和处理。在RAM 1203中,还存储有电子设备1200操作所需的各种程序和数据。处理装置1201、ROM 1202以及RAM 1203通过总线1204彼此相连。输入/输出(I/O)接口1205也连接至总线1204。
通常,以下装置可以连接至I/O接口1205:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置1206;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置1207;包括例如磁带、硬盘等的存储装置1208;以及通信装置1209。通信装置1209可以允许电子设备1200与其他设备进行无线或有线通信以交换数据。虽然图12示出了具有各种装置的电子设备1200,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置1209从网络上被下载和安装,或者从存储装置1208被安装,或者从ROM 1202被安装。在该计算机程序被处理装置1201执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是，本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中，计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输，包括但不限于：电线、光缆、RF(射频)等等，或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面,所述第二编辑界面包括预设的交互标签;在所述交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签;当接收到所述编辑者发起的编辑完成指令时,生成包含所述交互标签的目标视频文件,并发布所述目标视频文件。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的模块或单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,模块或单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中，机器可读介质可以是有形的介质，其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备，或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,【示例一】提供了一种视频文件的处理方法,包括:
在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面;所述第二编辑界面包括预设的交互标签;
在所述交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签;
当接收到所述编辑者发起的编辑完成指令时,生成包含所述交互标签的目标视频文件,并发布所述目标视频文件。
在本公开一种优选实施例中,所述第二编辑界面包括预设的标识信息列表,所述标识信息列表包括至少一个交互对象的标识信息;
所述在所述交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签,包括:
接收针对所述标识信息列表中任一标识信息的选择指令;
当接收到生成交互标签的生成指令时,生成包含所述任一标识信息的交互标签。
在本公开一种优选实施例中,所述交互标签包含预设的第一文本框;
所述在所述交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签,包括:
接收在所述第一文本框中输入的标识信息;
当接收到生成交互标签的生成指令时,生成包含所述输入的标识信息的交互标签。
在本公开一种优选实施例中,还包括:
当接收到播放者发起的针对所述目标视频文件的播放指令时,获取所述目标视频文件和所述播放者的第二标识信息;
若所述第二标识信息与所述第一标识信息相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及预设的第二交互功能的第二提示信息;
当接收到所述播放者发起的针对所述第二提示信息的点击指令时,展示预设的第二文本框;
接收在所述第二文本框中输入的交互信息;
当接收到确认指令时,展示更新后的交互标签;所述更新后的交互标签包括所述交互信息。
在本公开一种优选实施例中,还包括:
若所述第二标识信息与所述第一标识信息不相同,且与所述编辑者的第三标识信息相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及预设的第三提示信息。
在本公开一种优选实施例中,还包括:
若所述第二标识信息与所述第一标识信息、所述编辑者的第三标识信息均不相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及用于查看所述第一标识信息对应的相关信息的数据接口。
在本公开一种优选实施例中,所述触发指令通过如下方式生成:
在所述第一编辑界面中对所述原始视频文件进行人脸识别成功;
或,
所述编辑者触发所述第一编辑界面中与所述第一交互功能对应的虚拟按钮。
根据本公开的一个或多个实施例,【示例二】提供了示例一的装置,包括:
第一处理模块,用于在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面;所述第二编辑界面包括预设的交互标签;
第二处理模块,用于在所述交互标签中接收所述第一交互功能对应的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签;
第三处理模块,用于当接收到所述编辑者发起的编辑完成指令时,生成包含所述交互标签的目标视频文件,并发布所述目标视频文件。
在本公开一种优选实施例中,所述第二编辑界面包括预设的标识信息列表,所述标识信息列表包括至少一个交互对象的标识信息;
所述第二处理模块具体用于:
接收针对所述标识信息列表中任一标识信息的选择指令;当接收到生成交互标签的生成指令时,生成包含所述任一标识信息的交互标签。
在本公开一种优选实施例中,所述交互标签包含预设的第一文本框;
所述第二处理模块具体用于:
接收在所述第一文本框中输入的标识信息;当接收到生成交互标签的生成指令时,生成包含所述输入的标识信息的交互标签。
在本公开一种优选实施例中,还包括:
第四处理模块,用于当接收到播放者发起的针对所述目标视频文件的播放指令时,获取所述目标视频文件和所述播放者的第二标识信息;
第五处理模块,用于若所述第二标识信息与所述第一标识信息相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及预设的第二交互功能的第二提示信息;
第六处理模块,用于当接收到所述播放者发起的针对所述第二提示信息的点击指令时,展示预设的第二文本框;
接收模块,用于接收在所述第二文本框中输入的交互信息;
第七处理模块,用于当接收到确认指令时,展示更新后的交互标签;所述更新后的交互标签包括所述交互信息。
在本公开一种优选实施例中,还包括:
第八处理模块,用于若所述第二标识信息与所述第一标识信息不相同,且与所述编辑者的第三标识信息相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及预设的第三提示信息。
在本公开一种优选实施例中,还包括:
第九处理模块,用于若所述第二标识信息与所述第一标识信息、所述编辑者的第三标识信息均不相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及用于查看所述第一标识信息对应的相关信息的数据接口。
在本公开一种优选实施例中,所述触发指令通过如下方式生成:
在第一编辑界面中对原始视频文件进行人脸识别成功;
或,
编辑者触发第一编辑界面中与第一交互功能对应的虚拟按钮。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题，但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反，上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (10)

  1. 一种视频文件的处理方法,其特征在于,包括:
    在针对原始视频文件的预设的第一编辑界面中,当接收到针对预设第一交互功能的触发指令时,展示预设的第二编辑界面;所述第二编辑界面包括预设的交互标签;
    在所述交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签;
    当接收到所述编辑者发起的编辑完成指令时,生成包含所述交互标签的目标视频文件,并发布所述目标视频文件。
  2. 根据权利要求1所述的视频文件的处理方法,其特征在于,所述第二编辑界面包括预设的标识信息列表,所述标识信息列表包括至少一个交互对象的标识信息;
    所述在所述交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签,包括:
    接收针对所述标识信息列表中任一标识信息的选择指令;
    当接收到生成交互标签的生成指令时,生成包含所述任一标识信息的交互标签。
  3. 根据权利要求1所述的视频文件的处理方法,其特征在于,所述交互标签包含预设的第一文本框;
    所述在所述交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签,包括:
    接收在所述第一文本框中输入的标识信息;
    当接收到生成交互标签的生成指令时,生成包含所述输入的标识信息的交互标签。
  4. 根据权利要求1所述的视频文件的处理方法,其特征在于,还包括:
    当接收到播放者发起的针对所述目标视频文件的播放指令时，获取所述目标视频文件和所述播放者的第二标识信息；
    若所述第二标识信息与所述第一标识信息相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及预设的第二交互功能的第二提示信息;
    当接收到所述播放者发起的针对所述第二提示信息的点击指令时,展示预设的第二文本框;
    接收在所述第二文本框中输入的交互信息;
    当接收到确认指令时,展示更新后的交互标签;所述更新后的交互标签包括所述交互信息。
  5. 根据权利要求1所述的视频文件的处理方法,其特征在于,还包括:
    若所述第二标识信息与所述第一标识信息不相同,且与所述编辑者的第三标识信息相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及预设的第三提示信息。
  6. 根据权利要求1所述的视频文件的处理方法,其特征在于,还包括:
    若所述第二标识信息与所述第一标识信息、所述编辑者的第三标识信息均不相同,则在播放所述目标视频文件时,在所述交互标签中展示所述第一标识信息,以及用于查看所述第一标识信息对应的相关信息的数据接口。
  7. 根据权利要求1所述的视频文件的处理方法,其特征在于,所述触发指令通过如下方式生成:
    在所述第一编辑界面中对所述原始视频文件进行人脸识别成功;
    或,
    所述编辑者触发所述第一编辑界面中与所述第一交互功能对应的虚拟按钮。
  8. 一种视频文件的处理装置,其特征在于,包括:
    第一处理模块，用于在针对原始视频文件的预设的第一编辑界面中，当接收到针对预设第一交互功能的触发指令时，展示预设的第二编辑界面；所述第二编辑界面包括预设的交互标签；
    第二处理模块,用于在所述交互标签中接收编辑者确定的交互对象的第一标识信息,得到包含所述第一标识信息的交互标签;
    第三处理模块,用于当接收到所述编辑者发起的编辑完成指令时,生成包含所述交互标签的目标视频文件,并发布所述目标视频文件。
  9. 一种电子设备,其特征在于,其包括:
    处理器、存储器和总线;
    所述总线,用于连接所述处理器和所述存储器;
    所述存储器,用于存储操作指令;
    所述处理器,用于通过调用所述操作指令,执行上述权利要求1-7中任一项所述的视频文件的处理方法。
  10. 一种计算机可读存储介质,其特征在于,所述计算机存储介质用于存储计算机指令,当其在计算机上运行时,使得计算机可以执行上述权利要求1-7中任一项所述的视频文件的处理方法。
PCT/CN2021/115733 2020-09-09 2021-08-31 视频文件的处理方法、装置、电子设备及计算机存储介质 WO2022052838A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
BR112023001285A BR112023001285A2 (pt) 2020-09-09 2021-08-31 Método e aparelho de processamento de arquivos de vídeo, dispositivo eletrônico e meio de armazenamento por computador
KR1020227036625A KR20220156910A (ko) 2020-09-09 2021-08-31 비디오 파일의 처리 방법, 장치, 전자 기기 및 컴퓨터 저장 매체
JP2022564729A JP2023522759A (ja) 2020-09-09 2021-08-31 動画ファイルの処理方法、装置、電子機器及びコンピュータ記憶媒体
EP21865893.8A EP4093042A4 (en) 2020-09-09 2021-08-31 VIDEO FILE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND COMPUTER STORAGE MEDIUM
US17/887,138 US11889143B2 (en) 2020-09-09 2022-08-12 Video file processing method and apparatus, electronic device, and computer storage medium
US18/541,783 US20240114197A1 (en) 2020-09-09 2023-12-15 Video file processing method and apparatus, electronic device, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010943738.7A CN112040330B (zh) 2020-09-09 2020-09-09 视频文件的处理方法、装置、电子设备及计算机存储介质
CN202010943738.7 2020-09-09

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/887,138 Continuation US11889143B2 (en) 2020-09-09 2022-08-12 Video file processing method and apparatus, electronic device, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2022052838A1 true WO2022052838A1 (zh) 2022-03-17

Family

ID=73585150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115733 WO2022052838A1 (zh) 2020-09-09 2021-08-31 视频文件的处理方法、装置、电子设备及计算机存储介质

Country Status (7)

Country Link
US (2) US11889143B2 (zh)
EP (1) EP4093042A4 (zh)
JP (1) JP2023522759A (zh)
KR (1) KR20220156910A (zh)
CN (1) CN112040330B (zh)
BR (1) BR112023001285A2 (zh)
WO (1) WO2022052838A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112040330B (zh) 2020-09-09 2021-12-07 北京字跳网络技术有限公司 视频文件的处理方法、装置、电子设备及计算机存储介质
CN113419800B (zh) * 2021-06-11 2023-03-24 北京字跳网络技术有限公司 交互方法、装置、介质和电子设备
CN113655930B (zh) * 2021-08-30 2023-01-10 北京字跳网络技术有限公司 信息发布方法、信息的展示方法、装置、电子设备及介质
CN113741757B (zh) * 2021-09-16 2023-10-17 北京字跳网络技术有限公司 显示提醒信息的方法、装置、电子设备和存储介质
CN114430499B (zh) * 2022-01-27 2024-02-06 维沃移动通信有限公司 视频编辑方法、视频编辑装置、电子设备和可读存储介质
CN115941841A (zh) * 2022-12-06 2023-04-07 北京字跳网络技术有限公司 关联信息展示方法、装置、设备、存储介质和程序产品

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130254816A1 (en) * 2012-03-21 2013-09-26 Sony Corporation Temporal video tagging and distribution
CN103873945A (zh) * 2014-02-21 2014-06-18 周良文 与视频节目中对象进行社交的系统、方法
CN105847913A (zh) * 2016-05-20 2016-08-10 腾讯科技(深圳)有限公司 一种控制视频直播的方法、移动终端及系统
CN106446056A (zh) * 2016-09-05 2017-02-22 奇异牛科技(深圳)有限公司 一种基于移动端图片定义标签的系统及其方法
CN108289057A (zh) * 2017-12-22 2018-07-17 北京达佳互联信息技术有限公司 数据分享方法、系统及移动终端
US10063910B1 (en) * 2017-10-31 2018-08-28 Rovi Guides, Inc. Systems and methods for customizing a display of information associated with a media asset
CN110378247A (zh) * 2019-06-26 2019-10-25 腾讯科技(深圳)有限公司 虚拟对象识别方法和装置、存储介质及电子装置
CN110460578A (zh) * 2019-07-09 2019-11-15 北京达佳互联信息技术有限公司 建立关联关系的方法、装置及计算机可读存储介质
CN110868639A (zh) * 2019-11-28 2020-03-06 北京达佳互联信息技术有限公司 视频合成方法及装置
CN111325004A (zh) * 2020-02-21 2020-06-23 腾讯科技(深圳)有限公司 一种文件评论、查看方法
CN112040330A (zh) * 2020-09-09 2020-12-04 北京字跳网络技术有限公司 视频文件的处理方法、装置、电子设备及计算机存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101686830B1 (ko) 2013-05-30 2016-12-15 페이스북, 인크. 온라인 소셜 네트워크 상의 이미지를 위한 태그 제안
CN104581409A (zh) * 2015-01-22 2015-04-29 广东小天才科技有限公司 一种虚拟互动视频播放方法和装置
CN107484019A (zh) * 2017-08-03 2017-12-15 乐蜜有限公司 一种视频文件的发布方法及装置
CN110049266A (zh) * 2019-04-10 2019-07-23 北京字节跳动网络技术有限公司 视频数据发布方法、装置、电子设备和存储介质
CN111523053A (zh) * 2020-04-26 2020-08-11 腾讯科技(深圳)有限公司 信息流处理方法、装置、计算机设备和存储介质
CN111580724B (zh) * 2020-06-28 2021-12-10 腾讯科技(深圳)有限公司 一种信息互动方法、设备及存储介质

Also Published As

Publication number Publication date
EP4093042A4 (en) 2023-05-24
BR112023001285A2 (pt) 2023-03-21
US11889143B2 (en) 2024-01-30
CN112040330B (zh) 2021-12-07
CN112040330A (zh) 2020-12-04
KR20220156910A (ko) 2022-11-28
EP4093042A1 (en) 2022-11-23
US20220394319A1 (en) 2022-12-08
JP2023522759A (ja) 2023-05-31
US20240114197A1 (en) 2024-04-04

Similar Documents

Publication Publication Date Title
WO2022052838A1 (zh) 视频文件的处理方法、装置、电子设备及计算机存储介质
US11621022B2 (en) Video file generation method and device, terminal and storage medium
WO2022152064A1 (zh) 视频生成方法、装置、电子设备和存储介质
CN111970571B (zh) 视频制作方法、装置、设备及存储介质
US20240121468A1 (en) Display method, apparatus, device and storage medium
US20220368798A1 (en) Method and device for displaying video playback interface, terminal device, and storage medium
JP2023523067A (ja) ビデオ処理方法、装置、機器及び媒体
CN111343074B (zh) 一种视频处理方法、装置和设备以及存储介质
WO2022193867A1 (zh) 一种视频处理方法、装置、电子设备及存储介质
US11886484B2 (en) Music playing method and apparatus based on user interaction, and device and storage medium
JP2023539815A (ja) 議事録のインタラクション方法、装置、機器及び媒体
CN112000267A (zh) 信息显示方法、装置、设备及存储介质
WO2023155822A1 (zh) 会话的方法、装置、电子设备和存储介质
WO2023103889A1 (zh) 视频处理方法、装置、电子设备及存储介质
US20240064367A1 (en) Video processing method and apparatus, electronic device, and storage medium
WO2023134610A1 (zh) 一种视频展示与交互方法、装置、电子设备及存储介质
CN112241397A (zh) 多媒体文件的分享方法、装置、电子设备及可读存储介质
CN114363686B (zh) 多媒体内容的发布方法、装置、设备和介质
WO2024037491A1 (zh) 媒体内容处理方法、装置、设备及存储介质
WO2024037480A1 (zh) 交互方法、装置、电子设备和存储介质
WO2024078516A1 (zh) 媒体内容展示方法、装置、设备及存储介质
WO2024046360A1 (zh) 媒体内容处理方法、装置、设备、可读存储介质及产品
WO2023134558A1 (zh) 交互方法、装置、电子设备、存储介质和程序产品
EP4336329A1 (en) Multimedia processing method and apparatus, and device and medium
CN112307393A (zh) 信息发布方法、装置和电子设备

Legal Events

121 - Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21865893; Country of ref document: EP; Kind code of ref document: A1)
ENP - Entry into the national phase (Ref document number: 2021865893; Country of ref document: EP; Effective date: 20220815)
ENP - Entry into the national phase (Ref document number: 20227036625; Country of ref document: KR; Kind code of ref document: A)
ENP - Entry into the national phase (Ref document number: 2022564729; Country of ref document: JP; Kind code of ref document: A)
REG - Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112023001285; Country of ref document: BR)
ENP - Entry into the national phase (Ref document number: 112023001285; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20230124)
NENP - Non-entry into the national phase (Ref country code: DE)