US20230038810A1 - Method and apparatus for synthesizing video - Google Patents

Method and apparatus for synthesizing video

Info

Publication number
US20230038810A1
Authority
US
United States
Prior art keywords
video
friend
displaying
sticker
recommendation list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/969,174
Inventor
Qian Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to US17/969,174
Assigned to Beijing Dajia Internet Information Technology Co., Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, QIAN
Publication of US20230038810A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/34 Indicating arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47214 End-user interface for content reservation or setting reminders; for requesting event notification, e.g. of sport results or stock market
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/27 Server based end-user applications
    • H04N 21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2743 Video hosting of uploaded data from client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/482 End-user interface for program selection
    • H04N 21/4826 End-user interface for program selection using recommendation lists, e.g. of programs or channels sorted out according to their score
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/488 Data services, e.g. news ticker
    • H04N 21/4882 Data services for displaying messages, e.g. warnings, reminders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/8166 Monomedia components involving executable data, e.g. software

Definitions

  • the present disclosure relates to a technical field of short videos, and particularly, to a method and an apparatus for synthesizing a video.
  • the disclosure provides a method and an apparatus for synthesizing a video.
  • the technical solution of the present disclosure will be explained as follows.
  • Embodiments of the present disclosure provide a method for synthesizing a video, the method includes: displaying the video on a video editing interface; acquiring a friend recommendation list in response to a video sharing instruction; in which the friend recommendation list is used for indicating a plurality of sharing objects; displaying the friend recommendation list and generating a synthesized video based on the video and a target sharing object selected from the plurality of sharing objects, wherein a reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
  • Embodiments of the present disclosure provide an apparatus for synthesizing a video.
  • the apparatus includes a displaying module, a recommending module and a video synthesis module.
  • the displaying module is configured to display the video on a video editing interface.
  • the recommending module is configured to acquire a friend recommendation list and display the friend recommendation list in response to a video sharing instruction.
  • the friend recommendation list is used for indicating a plurality of sharing objects.
  • the video synthesis module is configured to generate a synthesized video based on the video to be synthesized and a target sharing object selected from the plurality of sharing objects. A reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
  • Embodiments of the present disclosure provide a video synthesis device.
  • the device includes a processor and a memory.
  • the memory is configured to store instructions executable by the processor; the processor is configured to execute the instructions to implement the method for synthesizing a video as described above.
  • Embodiments of the present disclosure provide a non-transitory computer-readable storage medium including instructions that, when executed by a processor of the apparatus for synthesizing a video, cause the video synthesis apparatus to perform the method for synthesizing a video as described above.
  • Embodiments of the present disclosure provide a computer program product which, when running on the video synthesis device, causes the video synthesis device to perform the method for synthesizing a video described above.
  • FIG. 1 A is a schematic diagram illustrating a cell phone interface according to an exemplary embodiment.
  • FIG. 1 B is a schematic diagram illustrating a cell phone interface according to an exemplary embodiment.
  • FIG. 1 C is a schematic diagram of a network architecture according to an example embodiment.
  • FIG. 2 A is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2 B is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2 C is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2 D is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2 E is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2 F is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2 G is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 3 A is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3 B is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3 C is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3 D is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3 E is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3 F is a schematic diagram illustrating a cell phone display interface according to an exemplary embodiment.
  • FIG. 3 G is a schematic diagram illustrating a cell phone display interface according to an exemplary embodiment.
  • FIG. 3 H is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3 I is a schematic diagram illustrating a cell phone display interface according to an exemplary embodiment.
  • FIG. 3 J is a schematic diagram illustrating a cell phone display interface according to an exemplary embodiment.
  • FIG. 3 K is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 4 is a block diagram illustrating an apparatus for synthesizing a video according to an exemplary embodiment.
  • FIG. 5 is a schematic diagram illustrating an apparatus for synthesizing a video according to an exemplary embodiment.
  • A/B may indicate A or B; “and/or” herein merely describes an association relationship between associated objects and means that there may be three relations; for example, A and/or B may refer to: A exists alone, A and B exist simultaneously, or B exists alone.
  • a plurality of refers to two or more than two.
  • as illustrated in FIG. 1 A , a schematic view of the video publishing interface provided by the present disclosure is shown, in which a text input box (specifically, a dashed box in FIG. 1 A ) is displayed, and two controls are displayed below the text input box: an “@ friend” control and a “# topic” control.
  • the user needs to click the “@ friend” control in the video publishing interface and select the friend to be tagged (@); then a reminding mark in a style of “@***” is displayed in the text input box to represent the user who is tagged.
  • as illustrated in FIG. 1 A , if the user selects a friend with the name “Kwai culture” to be tagged, the reminding mark in the style of “@ Kwai culture” is displayed in the text input box.
  • after the user clicks a “publish” control in a lower part of FIG. 1 A , the video publishing is completed.
  • a video playing interface for the video is provided.
  • the reminding mark in the style of “@ Kwai culture” is displayed in the text below the video picture, and the reminding mark represents that when the video editing user “Zhang San 014” publishes the video, the user “Kwai culture” is reminded to watch the video.
  • the disclosure provides a video synthesis method and a video synthesis apparatus, which at least address a problem in the related art of a poor user experience when reminding friends as a video is published.
  • the implementation environment may include a server 101 and a plurality of terminal devices (e.g., a terminal device 102 , a terminal device 103 , a terminal device 104 , and a terminal device 105 ) which may be connected to the server 101 through a wired network or a wireless network.
  • the terminal device in embodiments of the present disclosure may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) device, a Virtual Reality (VR) device, and the like, which may be installed with an instant messaging application and communicate using the instant messaging application, and a specific form of the electronic device is not limited in the embodiments of the present disclosure.
  • the server 101 may be a network device for storing video data published by a terminal device and distributing the stored video data to a terminal device playing a video. Specifically, each of the plurality of terminal devices may publish an edited video to the server 101 . In addition, when a plurality of terminal devices are to play the video, the plurality of terminal devices may access the server 101 to acquire and play the video stored in the server 101 .
  • the method for synthesizing a video provided by embodiments of the present disclosure is described below with reference to specific embodiments, and the method provided by this embodiment may be applied to one of the terminal devices in FIG. 1 C , so that when the terminal device publishes the video data, a target sharing object that needs to be reminded may be determined on a video editing interface, and when the video is published on a video publishing interface, the video data with a reminding mark displayed on a video picture is published. Specifically, the video data may be published to the server 101 in FIG. 1 C .
  • FIG. 2 A illustrates a method for synthesizing a video according to an embodiment. As illustrated in FIG. 2 A , the method includes the following blocks. The method may be implemented by a terminal device.
  • the terminal device displays a video on a video editing interface.
  • the terminal device acquires and displays a friend recommendation list in response to a video sharing instruction.
  • the friend recommendation list may include a plurality of sharing objects.
  • the terminal device generates a synthesized video based on the video and a target sharing object selected from the plurality of sharing objects.
  • a reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
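  • As an illustrative, non-limiting sketch of the flow of FIG. 2 A , the Kotlin snippet below strings the blocks together: show the edited video, fetch a friend recommendation list when the sharing instruction arrives, and attach an “@” reminding mark to the picture of the synthesized video. All type and service names (SharingObject, FriendService, VideoSynthesisFlow, and so on) are assumptions made for this sketch and do not appear in the disclosure.

```kotlin
// Illustrative sketch of the flow of FIG. 2A; every type and service name here is assumed.
data class SharingObject(val id: String, val nickname: String, val remarkName: String? = null)
data class RemindingMark(val text: String, val x: Float, val y: Float)   // position on the video picture
data class SynthesizedVideo(val sourcePath: String, val marks: List<RemindingMark>)

interface FriendService {
    fun friendRecommendationList(editingUserId: String): List<SharingObject>
}

class VideoSynthesisFlow(private val friends: FriendService) {

    // Block 2: acquire the friend recommendation list in response to a video sharing instruction ("@").
    fun onVideoSharingInstruction(editingUserId: String): List<SharingObject> =
        friends.friendRecommendationList(editingUserId)

    // Block 3: generate the synthesized video; a reminding mark "@<name>" is attached to the picture.
    fun synthesize(sourcePath: String, target: SharingObject, x: Float = 0.1f, y: Float = 0.9f): SynthesizedVideo {
        val name = target.remarkName ?: target.nickname
        return SynthesizedVideo(sourcePath, listOf(RemindingMark("@$name", x, y)))
    }
}

fun main() {
    val flow = VideoSynthesisFlow(object : FriendService {
        override fun friendRecommendationList(editingUserId: String): List<SharingObject> =
            listOf(SharingObject("1", "Sunny"), SharingObject("2", "Yang Yang"))
    })
    val target = flow.onVideoSharingInstruction("zhang_san_014").first()
    println(flow.synthesize("draft.mp4", target))
}
```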
  • the method may include the following steps S 201 -S 203 and S 205 :
  • a terminal device displays a video to be synthesized on a video editing interface.
  • the video editing interface for editing the video file is generally provided in a short video application.
  • the video editing interface displays picture content of the video to be synthesized, which needs to be edited, so that a user may correspondingly edit the picture content of the video to be synthesized.
  • the terminal device acquires and displays a friend recommendation list.
  • the video sharing instruction may be sent from a video editing user, for example, by a user interface of the terminal device.
  • the friend recommendation list is used for indicating a plurality of sharing objects.
  • the plurality of sharing objects may specifically include a plurality of friends of the video editing user.
  • the friend recommendation list may include identifications of the plurality of sharing objects.
  • the friend recommendation list includes any one or more of head portraits, nicknames, and remark names of the plurality of sharing objects described therein.
  • the friend recommendation list may include the head portraits and the nicknames of the plurality of sharing objects, or the friend recommendation list may include the head portraits and the remark names of the plurality of sharing objects, and the like.
  • the friend recommendation list may further include other information for distinguishing an identity of each of the plurality of sharing objects.
  • the video editing interface in the present disclosure includes an interface for editing content of the video data.
  • the user may edit the video picture of the video data. For example, cool and warm tones in the video picture may be adjusted, stickers may be added to the video picture, beauty effects may be added to a person in the video picture, magic expressions may be added to the person in the video picture, and the like.
  • the video editing user referred to in the present disclosure may include the user logged in through a current terminal device.
  • one of the plurality of sharing objects may have both a nickname and a remark name, while the remark name of the sharing object is usually set by the video editing user in order to distinguish the sharing object from other sharing objects, and the nickname is a name taken by the sharing object itself. Therefore, in order to enable the video editing user to distinguish each of the plurality of sharing objects, in the present disclosure, if at least one sharing object in the plurality of sharing objects indicated by the friend recommendation list includes the remark name, the identifications of the plurality of sharing objects included in the friend recommendation list specifically include: the remark name of the at least one sharing object, and the nicknames of the other sharing objects.
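  • The identification rule above (show the remark name when the video editing user has set one, otherwise show the nickname) can be summarized by a one-line helper; the data class and field names below are assumed for illustration only.

```kotlin
data class SharingObject(
    val id: String,
    val nickname: String,            // name chosen by the friend themselves
    val remarkName: String? = null,  // name set by the video editing user, may be absent
    val avatarUrl: String? = null    // head portrait
)

// Identification shown in the friend recommendation list: the remark name wins over the nickname.
fun displayIdentification(o: SharingObject): String = o.remarkName ?: o.nickname

fun main() {
    val a = SharingObject("1", nickname = "Sunny", remarkName = "Colleague Sun")
    val b = SharingObject("2", nickname = "Yang Yang")
    println(displayIdentification(a))  // Colleague Sun
    println(displayIdentification(b))  // Yang Yang
}
```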
  • the video sharing instruction in the present disclosure may include a preset symbol, for example, “@”.
  • the preset symbol may alternatively be another symbol, for example, “#”, and the like.
  • the video sharing instruction may include an instruction with the preset symbol displayed by the terminal device in response to a first operation of the user on the video editing interface.
  • the video sharing instruction may be an instruction with the @ symbol displayed by the terminal device in a text input box in response to the first operation that the user inputs “@” in the text input box on the video editing interface.
  • the video sharing instruction may also be an instruction with which the terminal device displays a friend sticker containing the preset symbol on the video editing interface, in response to a second operation of the user on a friend sticker control on the video editing interface.
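  • As a rough illustration of the first kind of video sharing instruction (typing the preset symbol in the text input box), the framework-free watcher below fires a callback whenever a new “@” is typed; the class and callback names are hypothetical, and a real client would wire this into its own text widget.

```kotlin
// Hypothetical watcher for the text input box on the video editing interface.
class SharingInstructionWatcher(
    private val presetSymbol: Char = '@',
    private val onSharingInstruction: () -> Unit   // e.g. show the friend recommendation list
) {
    private var lastText = ""

    // Call this whenever the text in the input box changes.
    fun onTextChanged(newText: String) {
        // Fire only when the preset symbol is newly typed, not on every later change.
        if (newText.count { it == presetSymbol } > lastText.count { it == presetSymbol }) {
            onSharingInstruction()
        }
        lastText = newText
    }
}

fun main() {
    val watcher = SharingInstructionWatcher { println("show friend recommendation list") }
    watcher.onTextChanged("Great day ")
    watcher.onTextChanged("Great day @")   // triggers the callback
}
```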
  • the terminal device determines the target sharing object in response to a selection instruction for the plurality of sharing objects.
  • the selection instruction may be sent from the video editing user and include a third operation performed by the video editing user on the friend recommendation list displayed on the video editing interface.
  • the video editing interface may include one or more preset friend recommendation positions, the one or more preset friend recommendation positions are used to display the identifications of one or more sharing objects in the plurality of sharing objects indicated by the friend recommendation list.
  • the selection instruction from the video editing user to select the sharing object may be a click operation of the video editing user for a friend recommendation position in the one or more friend recommendation positions. After the terminal device detects the click operation for the friend recommendation position in the one or more friend recommendation positions, the target sharing object is determined from the plurality of sharing objects indicated by the friend recommendation list.
  • the terminal device generates a synthesized video based on the video to be synthesized and the target sharing object.
  • the reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
  • the reminding mark is displayed on the video picture by generating the synthesized video in which the reminding mark is displayed on the video picture.
  • the video synthesis method provided by the present disclosure adds the reminding mark on the video editing interface, and further adds the reminding mark into the video picture, so that the way of reminding the sharing object is enriched, and the user experience is improved.
  • the terminal device may display the video picture of the synthesized video on the video editing interface, so that the video editing user may preview information such as position, style, size, and the like of the reminding mark.
  • the reminding mark may specifically be the symbol “@” and a target identification behind the symbol “@”.
  • the target identification includes an identification of the target sharing object.
  • a display location of the reminding mark on the video picture is selected in order to satisfy the user's preference.
  • the method may further include the following.
  • the terminal device determines a target position on the video picture according to a reminding mark position determining instruction.
  • the reminding mark position determining instruction may include a fifth operation performed by the user on the target position on the video picture displayed on the video editing interface.
  • the reminding mark for indicating the target sharing object may be displayed at the target position on the video picture of the synthesized video.
  • the target position of the reminding mark on the video picture is determined according to the reminding mark position determining instruction, so that the user can set the position of the reminding mark on the video picture of the synthesized video according to his/her own preference.
  • the user experience is improved.
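  • A minimal sketch of the position-determining step: the long-pressed point on the preview is converted into a normalized coordinate, clamped to the frame, and later used when the reminding mark is rendered into the synthesized video. The coordinate convention here is an assumption for illustration.

```kotlin
// Convert a long-press point on the preview (in pixels) into a normalized
// position on the video picture, clamped so the mark stays inside the frame.
data class MarkPosition(val x: Float, val y: Float)   // both axes in 0.0..1.0

fun targetPositionFromLongPress(
    pressXPx: Float, pressYPx: Float,
    previewWidthPx: Float, previewHeightPx: Float
): MarkPosition {
    val x = (pressXPx / previewWidthPx).coerceIn(0f, 1f)
    val y = (pressYPx / previewHeightPx).coerceIn(0f, 1f)
    return MarkPosition(x, y)
}

fun main() {
    println(targetPositionFromLongPress(540f, 1600f, 1080f, 1920f))  // roughly the lower middle of the picture
}
```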
  • the method further includes the following.
  • the terminal device publishes the synthesized video on a video publishing interface in response to a video publishing instruction.
  • the video publishing instruction may be sent from the video editing user and may include a fourth operation of clicking the video publishing interface by the user.
  • the fourth operation may be an operation of clicking a publish control.
  • the video publishing instruction may also be another instruction for triggering publishing of the video data in the terminal device. For example, after a function of publishing at a fixed time is set in the terminal device, the video publishing instruction may be an instruction that the terminal device determines that a preset time is up.
  • according to the video synthesis method, in the process of editing the displayed video to be synthesized on the video editing interface, after the video sharing instruction (for example, a symbol “@”) is received, the friend recommendation list of the video editing user is obtained in response to the video sharing instruction, and then the target sharing object is determined in response to the selection instruction. The reminding mark is then displayed on the video picture by generating the synthesized video with the reminding mark displayed on the video picture.
  • the video synthesis method provided by the present disclosure adds the reminding mark on the video editing interface, and further adds the reminding mark to the video picture, so that the way of reminding the sharing object is enriched, and the user experience is improved.
  • the friend sticker may be added to a sticker function of the video editing interface.
  • when the step S 202 is triggered in response to the video sharing instruction from the video editing user, the friend recommendation list of the video editing user is obtained and displayed.
  • the identification of the target sharing object selected in S 203 may be displayed in the friend sticker.
  • the friend sticker may be used as the reminding mark which is displayed on the video picture and used for indicating the target sharing object.
  • the step S 202 in the above embodiment may include the following steps S 202 a 1 -S 202 a 2 .
  • the terminal device displays a sticker panel on the video editing interface in response to a selection operation for the sticker control.
  • the sticker panel includes a plurality of sticker controls containing the friend sticker control.
  • FIG. 3 A (a) illustrates the video editing interface displayed on the mobile phone.
  • the video editing interface is used for editing a video.
  • a sticker may be added to the video
  • a text may be added to the video
  • a music may be added to the video
  • the video may be saved to a local storage space.
  • the video editing interface illustrated in FIG. 3 A (a) includes a sticker control, a text control, a music control, and a save control.
  • the mobile phone may display the sticker panel on the video editing interface.
  • the sticker panel includes the plurality of sticker controls including the friend sticker control.
  • the sticker panel includes a location sticker control, an @ friend sticker control, a time sticker control, and a plurality of expression sticker controls. These controls are respectively used for displaying corresponding stickers on the video picture of the video editing interface after being triggered.
  • the friend sticker control in embodiments of the present disclosure may include the @ friend sticker control in FIG. 3 A (b) described above.
  • the terminal device displays a friend sticker containing the video sharing instruction on the video editing interface in response to a sticker selection operation for the friend sticker control, and displays the friend recommendation list at a preset position.
  • the sticker selection operation for the friend sticker control may include an operation of clicking the friend sticker control.
  • a friend recommendation position may be displayed at the preset position, and the friend recommendation list is displayed at the friend recommendation position.
  • the mobile phone displays the friend sticker containing the video sharing instruction on the video editing interface in response to the operation of clicking the @ friend control by the user.
  • the video sharing instruction may include a symbol “@”.
  • the mobile phone displays the friend sticker 301 containing the video sharing instruction as shown in FIG. 3 A (c).
  • friend sticker 301 includes the video sharing instruction “@”.
  • the mobile phone also displays the friend recommendation position at the preset position on the video editing interface, and displays the friend recommendation list at the friend recommendation position.
  • FIG. 3 A (c) includes a plurality of friend recommendation positions 302 .
  • the friend recommendation list is displayed at the friend recommendation positions.
  • the identifications of the sharing objects in the friend recommendation list are displayed at the plurality of friend recommendation positions 302 , respectively.
  • the user may slide the plurality of friend recommendation positions 302 to left or right to find the sharing object to be selected.
  • as illustrated in FIG. 3 B (a), when the plurality of friend recommendation positions 302 in the mobile phone are slid to a rightmost side, one friend recommendation position “more” is provided at the rightmost side.
  • when the “more” position is triggered, a friend menu is displayed on the video editing interface of the mobile phone, as illustrated in FIG. 3 B (b).
  • in the friend menu, a prompt of “selecting a friend to be concerned” is displayed, and the user may select an object that needs to be reminded from the friend menu.
  • the user may slide the friend menu upwardly or downwardly to check all members in the friend menu.
  • the user may click one letter in the letter set from A to Z on a right side of the friend menu, and after the mobile phone detects the operation of clicking the letter, the mobile phone displays, in the friend menu, the members whose names begin with that letter.
  • the user may also input a nickname and/or a remark name of a friend in the search input box, and the mobile phone displays the friend matching with the content input by the user in response to the input operation of the user, so that the user may select the friend.
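  • The friend menu behaviour described above (letter index on the right, search by nickname or remark name) reduces to ordinary grouping and filtering over the friend list, as in the hedged sketch below; how the letter index is drawn is left to the client's UI framework.

```kotlin
data class Friend(val nickname: String, val remarkName: String? = null)

// Group friends by the first letter of the name that is actually displayed,
// so tapping a letter in the index can jump to that group.
fun groupByFirstLetter(friends: List<Friend>): Map<Char, List<Friend>> =
    friends.groupBy { (it.remarkName ?: it.nickname).first().uppercaseChar() }
        .toSortedMap()

// Search box behaviour: match the query against both nickname and remark name.
fun search(friends: List<Friend>, query: String): List<Friend> =
    friends.filter {
        it.nickname.contains(query, ignoreCase = true) ||
        (it.remarkName?.contains(query, ignoreCase = true) ?: false)
    }

fun main() {
    val friends = listOf(Friend("Sunny"), Friend("Sun Liu"), Friend("Yang Yang"), Friend("Kwai culture"))
    println(groupByFirstLetter(friends).keys)   // [K, S, Y]
    println(search(friends, "sun"))             // Sunny, Sun Liu
}
```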
  • the identification “Sunny” of the target sharing object may be added to the friend sticker.
  • displaying the video data with the reminding mark on the video picture may include displaying the video data with the friend sticker on the video picture.
  • the video synthesis method further includes: in response to an editing instruction for the friend sticker from the video editing user, editing the friend sticker based on the editing instruction.
  • the editing instruction is used to change a style of the friend sticker, or rotate the friend sticker, or zoom in or out the friend sticker or drag the friend sticker.
  • the style, color, size, and the like of the friend sticker on the video editing interface may be edited, so that a visual effect of the friend sticker may better conform to the preference of the user.
  • a plurality of editing controls may be disposed outside the friend sticker on the video editing interface, and the plurality of editing controls are respectively used to implement an effect of changing the style of the friend sticker, or rotating the friend sticker, or zooming in or out the friend sticker or dragging the friend sticker.
  • as illustrated in FIG. 3 C (a), on the video editing interface of the mobile phone, a first control 303 , a second control 304 , and a third control 305 are provided on the outside of the friend sticker.
  • the style of the identification “Sunny” in the friend sticker is changed.
  • changing the style of the identification “Sunny” specifically refers to changing the font of the identification “Sunny”.
  • the color, size, and shape of the friend sticker of the identification “Sunny” may also be changed.
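  • One way to picture the sticker editing controls is a small transform and style state that the editor updates and then applies when drawing the friend sticker; the state class and its fields below are assumptions for illustration, not an API of the described apparatus.

```kotlin
// Hypothetical transform/style state for a friend sticker on the editing interface.
data class StickerState(
    val text: String,            // e.g. "@ Sunny"
    val fontName: String = "default",
    val rotationDeg: Float = 0f,
    val scale: Float = 1f,
    val x: Float = 0.5f,         // normalized position on the video picture
    val y: Float = 0.5f
)

fun StickerState.changeStyle(font: String) = copy(fontName = font)
fun StickerState.rotate(byDeg: Float) = copy(rotationDeg = rotationDeg + byDeg)
fun StickerState.zoom(factor: Float) = copy(scale = (scale * factor).coerceIn(0.2f, 5f))
fun StickerState.dragTo(nx: Float, ny: Float) = copy(x = nx.coerceIn(0f, 1f), y = ny.coerceIn(0f, 1f))

fun main() {
    var sticker = StickerState("@ Sunny")
    sticker = sticker.changeStyle("handwriting").rotate(15f).zoom(1.5f).dragTo(0.2f, 0.8f)
    println(sticker)
}
```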
  • the video sharing instruction may be generated by using a process of inputting text in the text input box on the video editing interface, so as to trigger the step S 202 of obtaining and displaying the friend recommendation list of the video editing user in response to the video sharing instruction from the video editing user.
  • the step S 202 specifically includes the following steps S 202 b 1 -S 202 b 2 .
  • the text input box is displayed on the video editing interface.
  • the text input operation on the video editing interface may include a long press operation of the user on the video editing interface.
  • FIG. 3 D (a) shows the video editing interface displayed on the mobile phone.
  • the video editing interface is used for editing videos.
  • a sticker may be added to the video
  • a text may be added to the video
  • a music may be added to the video
  • the video may be saved to a local storage space.
  • the video editing interface illustrated in FIG. 3 D (a) includes the “sticker” control, the “text” control, the “music” control, and the “save” control.
  • the mobile phone may display the text input box 306 at the target position long pressed by the user.
  • the text input operation includes the selection operation for the text control.
  • the mobile phone may display the text input box 306 at the target position.
  • the reminding mark position determining instruction at step S 204 may also include the operation that the user long presses the target position on the video picture, so as to determine the target position of the reminding mark on the video picture.
  • the reminder mark may specifically include the content in the text input box 306 .
  • the friend recommendation list is displayed at the preset position on the video editing interface.
  • a friend recommendation position may be displayed at the preset position, and the friend recommendation list is displayed at the friend recommendation position.
  • the video sharing instruction may be a symbol “@”.
  • a keyboard for receiving a user input may also be displayed on the video editing interface, as illustrated in FIG. 3 D (b).
  • the user may input information such as characters and symbols into the text input box 306 through the keyboard.
  • a preset video editing item 307 is also displayed above the keyboard.
  • the preset video editing item 307 includes a plurality of circular controls for determining a color of information in the text input box 306 , and each circular control corresponds to one color.
  • the friend recommendation list is displayed on the video editing interface displayed by the mobile phone.
  • the friend recommendation list is used for indicating a plurality of sharing objects.
  • the friend recommendation list may include identifications of the plurality of sharing objects.
  • the identification includes a head portrait, a nickname, the remark name, and the like.
  • a friend recommendation position is displayed at a preset position on the video editing interface displayed by the mobile phone. And the friend recommendation position is used for displaying the friend recommendation list.
  • as illustrated in FIG. 3 D (c), after the user clicks the “@” control on the keyboard, the video editing interface displayed by the mobile phone is illustrated in FIG. 3 D (d).
  • the friend recommendation positions 302 are included. With regard to a function and a use mode of the friend recommendation positions 302 and the description of the “more” control in the friend recommendation positions 302 , reference may be made to the above description of the friend recommendation positions 302 , which are not elaborated here.
  • the preset position for displaying the preset video editing item may be used to display the friend recommendation list. Specifically, before the video sharing instruction input via the text input box 306 is received, the preset video editing item is displayed at the preset position on the video editing interface, and the preset video editing item, when selected, is used for correspondingly editing the video.
  • displaying the friend recommendation position on the video editing interface may include: in response to the video sharing instruction input by the video editing user in the text input box, displaying the friend recommendation list instead of the preset video editing item at the preset position on the video editing interface.
  • the preset video editing item 307 is displayed at the preset position on the video editing interface displayed by the mobile phone. Then, when the mobile phone detects the operation that the user clicks the video sharing instruction “@” on the keyboard, the interface illustrated in FIG. 3 D (d) is further displayed. The friend recommendation list 302 instead of the preset video editing item 307 is displayed at the preset position on the video editing interface.
  • the friend recommendation list includes the identification of a user with whom the video is shared by the video editing user within a preset time period.
  • N is a positive integer greater than or equal to 1.
  • first N-1 identifications in the friend recommendation list are displayed at first N-1 friend recommendation positions, and remaining identifications behind the first N-1 identifications in the friend recommendation list are displayed when a last friend recommendation position is triggered.
  • for example, when N=8, there are at most 8 friend recommendation positions on the video editing interface for displaying the identifications included in the friend recommendation list.
  • when the number of identifications in the friend recommendation list is less than or equal to 8, the identifications may be completely displayed at the friend recommendation positions.
  • when the number of identifications is greater than 8, the first 7 friend recommendation positions are used for displaying the first 7 identifications in the friend recommendation list.
  • the 8 th friend recommendation position may be triggered to display another window, and the remaining identifications other than the first 7 identifications are displayed in the window.
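  • The layout rule just described (show everything when the list fits into the N positions, otherwise show the first N-1 identifications plus a “more” entry) can be written down directly; the slot names below are invented for the sketch.

```kotlin
sealed interface RecommendationSlot
data class IdentificationSlot(val name: String) : RecommendationSlot
object MoreSlot : RecommendationSlot   // triggers the extra window / friend menu

// Fill at most [n] friend recommendation positions from the recommendation list.
fun fillPositions(identifications: List<String>, n: Int = 8): List<RecommendationSlot> =
    if (identifications.size <= n) {
        identifications.map { IdentificationSlot(it) }
    } else {
        identifications.take(n - 1).map { IdentificationSlot(it) } + MoreSlot
    }

fun main() {
    println(fillPositions(listOf("Sunny", "Yang Yang", "Kwai culture")).size)  // 3, all shown
    println(fillPositions(List(12) { "friend$it" }).size)                      // 8: 7 names plus "more"
}
```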
  • the user may search for the sharing object to be selected by sliding the friend recommendation positions 302 to left or right.
  • when the plurality of friend recommendation positions 302 in the mobile phone are slid to the rightmost side, as illustrated in FIG. 3 B (a), the friend recommendation position “more” is provided at the rightmost side.
  • the friend menu is displayed on the video editing interface of the mobile phone, as illustrated in FIG. 3 B (b).
  • in the friend menu, the prompt of “selecting a friend to be concerned” is displayed, and the user may select an object that needs to be reminded from the friend menu.
  • the user may slide the friend menu upwardly or downwardly to check all members in the friend menu.
  • the user may click one letter in the letter set from A to Z on a right side of the friend menu, and after the mobile phone detects the operation of clicking the letter, the mobile phone displays, in the friend menu, the members whose names begin with that letter.
  • the user may also input the nickname and/or the remark name of the friend in the search input box, and the mobile phone displays the friend matching with the content input by the user in response to the input operation of the user, so that the user may select the friend.
  • the identifications in the friend recommendation list are arranged according to the duration from the time of the shared video to the current time, from the shortest duration to the longest.
  • the time of the shared video may include the time when the video editing user shares a forwarded video with the user corresponding to the identification, or may also be the time when the video editing user tags (“@”) other users.
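  • The ordering rule amounts to sorting the recommendation list by how recently each object was shared with or tagged, most recent first; the record shape and its lastSharedAt field below are assumptions used only to show the idea.

```kotlin
import java.time.Instant

data class RecommendedFriend(val name: String, val lastSharedAt: Instant)

// Most recently shared-with friends first, i.e. shortest duration to the current time first.
fun orderByRecency(friends: List<RecommendedFriend>): List<RecommendedFriend> =
    friends.sortedByDescending { it.lastSharedAt }

fun main() {
    val now = Instant.now()
    val list = listOf(
        RecommendedFriend("Sunny", now.minusSeconds(3600)),
        RecommendedFriend("Yang Yang", now.minusSeconds(60))
    )
    println(orderByRecency(list).map { it.name })   // [Yang Yang, Sunny]
}
```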
  • step S 202 , in which the friend recommendation list of the video editing user is obtained and displayed in response to the video sharing instruction from the video editing user, includes the following.
  • the video editing user may input the friend screening condition after the video sharing instruction.
  • the identifications in the friend recommendation list are adjusted to enable the identifications in the friend recommendation list to satisfy the friend screening condition.
  • receiving the friend screening condition input after the video sharing instruction may include inputting the friend screening condition in the friend sticker in the above embodiment.
  • receiving the friend screening condition input after the video sharing instruction may further include inputting the friend screening condition in the text input box in the above embodiment.
  • as illustrated in FIG. 3 E (a), after the user sequentially inputs “S”, “u”, “n” in the text input box 306 , objects having nicknames “Sun Liu” and “Sunny” are displayed at the friend recommendation positions 302 .
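  • The screening behaviour in this example is essentially an incremental prefix match of the condition typed after “@” against the identifications; matching against both nickname and remark name in the sketch below is an assumption for illustration.

```kotlin
// Filter the friend recommendation list with the screening condition typed after "@".
data class Candidate(val nickname: String, val remarkName: String? = null)

fun applyScreeningCondition(candidates: List<Candidate>, condition: String): List<Candidate> =
    if (condition.isEmpty()) candidates
    else candidates.filter {
        it.nickname.startsWith(condition, ignoreCase = true) ||
        (it.remarkName?.startsWith(condition, ignoreCase = true) ?: false)
    }

fun main() {
    val all = listOf(Candidate("Sun Liu"), Candidate("Sunny"), Candidate("Yang Yang"))
    println(applyScreeningCondition(all, "Sun").map { it.nickname })   // [Sun Liu, Sunny]
}
```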
  • the reminding mark is used to link to a page corresponding to the target sharing object.
  • in FIG. 3 F , a schematic diagram of a playing interface of a published video is shown.
  • the video picture includes the reminding mark used for indicating the sharing object.
  • the reminding mark includes “@ Sunny” and “@ Yang Yang” in the text input box.
  • when the reminding mark is triggered, the terminal device displays the page of the sharing object corresponding to the reminding mark.
  • the page may be a home page of the sharing object or a profile page in the short video application.
  • the method provided by the present disclosure further includes: acquiring the selection instruction for the plurality of sharing objects from the video editing user.
  • the selection instruction includes the identification for indicating the sharing object and a preset input instruction input after the video sharing instruction, or the selection operation for the identification displayed at the friend recommendation position.
  • the identification used for indicating the sharing object and the preset input instruction input after the video sharing instruction specifically include: the identification of the sharing object input in the friend sticker provided in the above embodiment, and the preset input instruction input after the input identification of the sharing object.
  • the preset input instruction may be a space instruction.
  • illustratively, as illustrated in FIG. 3 G , after “Sunny” is entered in the friend sticker 301 , a space is entered. A text of “@ Sunny” is displayed in the friend sticker 301 , indicating that the user with the nickname “Sunny” has been selected.
  • the present disclosure may further include: displaying the identification of the target sharing object in the friend sticker in a display mode different from that of other input information. Displaying the identification of the target sharing object in the friend sticker in the display mode different from that of other input information may include the following.
  • the identification of the target sharing object is underlined. For example, “Sunny” in friend sticker 301 in FIG. 3 G is underlined.
  • the selection instruction may further include the selection operation for the identification displayed at the friend recommendation position.
  • as illustrated in FIG. 3 H (a), when the user clicks the identification “Sunny” displayed at the friend recommendation positions 302 , the mobile phone also underlines “Sunny” in the text input box 306 in response to the user's click operation, as illustrated in FIG. 3 H (b).
  • the preset video editing item instead of the friend recommendation list is displayed at the preset position on the video editing interface.
  • after the user inputs the video sharing instruction “@” in the text input box 306 , the user inputs “S”, “u”, “n” and two space instructions in sequence. Then, as illustrated in FIG. 3 I , the preset video editing item 307 is displayed at the preset position on the video editing interface.
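  • The selection-by-typing flow above (an “@”, then the characters of a name, then a space as the preset input instruction that confirms the selection) can be approximated with a small pattern over the text input box content; the regular expression below is an illustrative assumption, and a real client would additionally underline the confirmed identification.

```kotlin
// Hypothetical detection of a confirmed mention such as "@Sunny " typed in the text input box.
// A trailing space after "@<name>" is treated as the preset input instruction that confirms selection.
val MENTION = Regex("""@(\S+)\s""")

fun confirmedMentions(text: String): List<String> =
    MENTION.findAll(text).map { it.groupValues[1] }.toList()

fun main() {
    println(confirmedMentions("Great day @Sunny with @Yang"))   // [Sunny]  (second mention not yet confirmed)
    println(confirmedMentions("Great day @Sunny @Yang "))       // [Sunny, Yang]
}
```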
  • the method provided in the present disclosure further includes: displaying prompting information in response to the identification of the target sharing object displayed in the friend sticker or in response to the identification of the target sharing object displayed in the text input box.
  • a prompt of “the user being tagged (“@”) may forward this utterance” is displayed.
  • the font size of the characters in the friend sticker or the text input box decreases as the number of characters increases.
  • the font of the characters in FIG. 3 K (a) is larger than the font of the characters in FIG. 3 K (b).
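  • A plain way to realize this behaviour is to derive the font size from the character count and clamp it at a minimum readable size; the numbers in the sketch below are invented for illustration.

```kotlin
// Font size (in sp) shrinks as the number of characters grows; thresholds are illustrative only.
fun fontSizeFor(charCount: Int, maxSp: Float = 28f, minSp: Float = 14f, shrinkPerChar: Float = 0.5f): Float =
    (maxSp - charCount * shrinkPerChar).coerceAtLeast(minSp)

fun main() {
    println(fontSizeFor("@ Sunny".length))                               // larger font for short text
    println(fontSizeFor("@ Sunny @ Yang Yang @ Kwai culture".length))    // clamped toward the minimum
}
```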
  • the method further includes the following.
  • the terminal device sends a sharing object prompting instruction to a server.
  • the sharing object prompting instruction is used for instructing the server to send prompting information corresponding to the synthesized video to a target terminal device; the target terminal device includes the terminal device corresponding to the target sharing object.
  • the sharing object prompting instruction is sent to the server, such that the server sends the prompt information to the target terminal device. Therefore, the target terminal device outputs corresponding information after receiving the prompt information, so that the user of the target terminal device may timely know that the synthesized video published by the video editing user has the reminding mark mentioning the user.
  • the target terminal device may display the prompting information in the form of “*** has @ you in the utterance” on the display interface after receiving the prompting information from the server, so as to achieve the effect of prompting the user.
  • the field “***” may be the identification such as the nickname, the remark name, or the like of the video editing user.
  • the target terminal device may also execute a task of forwarding the synthesized video when receiving an operation that the user clicks the prompting information on the display interface.
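  • The prompting flow can be summarized by two small message shapes: the publishing terminal tells the server which users were mentioned in the synthesized video, and the server fans out a notification such as “*** has @ you in the utterance” to each target terminal. The field names and the fan-out helper below are hypothetical, not a protocol defined by the disclosure.

```kotlin
// Hypothetical messages for the sharing-object prompting flow.
data class SharingObjectPrompt(          // sent from the publishing terminal to the server
    val videoId: String,
    val publisherName: String,
    val targetUserIds: List<String>
)

data class PromptNotification(           // pushed by the server to each target terminal
    val videoId: String,
    val text: String
)

// Server-side fan-out: one notification per mentioned user.
fun buildNotifications(prompt: SharingObjectPrompt): Map<String, PromptNotification> =
    prompt.targetUserIds.associateWith {
        PromptNotification(prompt.videoId, "${prompt.publisherName} has @ you in the utterance")
    }

fun main() {
    val prompt = SharingObjectPrompt("v123", "Zhang San 014", listOf("sunny", "yangyang"))
    buildNotifications(prompt).values.forEach { println(it.text) }
}
```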
  • the target terminal device displays the video editing interface of the synthesized video.
  • the target terminal device may correspondingly edit the synthesized video according to an operation instruction from the user, and may also modify the reminding mark on the video picture of the synthesized video, or add another reminding mark, or the like.
  • the target terminal device may also publish the synthesized video after the editing.
  • the target sharing object may be considered as a publisher of the synthesized video after the editing.
  • the synthesized video after the editing is published as a work of the target sharing object, and the synthesized video after the editing is displayed in a work column of the target sharing object.
  • with regard to the steps of modifying the reminding mark on the video picture of the synthesized video, adding other reminding marks, and publishing the synthesized video after the editing, reference may be made to the contents of steps S 201 to S 206 .
  • the friend recommendation list of the video editing user is obtained in response to the video sharing instruction, and then the target sharing object is determined in response to the selection instruction.
  • the reminding mark is displayed on the video picture by generating the synthesized video with the reminding mark displayed on the video picture.
  • the video synthesis method provided by the present disclosure adds the reminding mark on the video editing interface, and further adds the reminding mark into the video picture, so that the way of reminding the sharing object is enriched, and the user experience is improved.
  • the flow chart or any process or method described herein in other manners may represent a module, segment, or portion of code that comprises one or more executable instructions to implement the specified logic function(s) or that comprises one or more executable instructions of the steps of the process.
  • although the flow chart shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more boxes may be scrambled relative to the order shown.
  • FIG. 4 is a schematic diagram illustrating an apparatus for synthesizing a video according to an exemplary embodiment.
  • the apparatus may be a terminal device.
  • the apparatus 40 includes a displaying module 401 , a recommending module 402 , a determining module 403 and a video synthesis module 404 .
  • the displaying module 401 is configured to display a video to be synthesized on a video editing interface.
  • the recommending module 402 is configured to acquire and display a friend recommendation list in response to a video sharing instruction.
  • the friend recommendation list is used for indicating a plurality of sharing objects.
  • the determining module 403 is configured to determine a target sharing object in response to a selection instruction for the plurality of sharing objects.
  • the video synthesis module 404 is configured to generate a synthesized video based on the video to be synthesized and the target sharing object. A reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
  • the video editing interface includes a sticker control.
  • the recommending module 402 is configured to display a sticker panel in response to a selection operation for the sticker control on the video editing interface.
  • the sticker panel includes a plurality of sticker controls containing a friend sticker control.
  • the recommending module 402 is configured to display a friend sticker including the video sharing instruction on the video editing interface and display the friend recommendation list at a preset position on the video editing interface in response to a sticker selection operation for the friend sticker control.
  • the apparatus further includes a sticker editing module 405 .
  • the sticker editing module 405 is configured to edit the friend sticker in response to an editing instruction for the friend sticker.
  • the editing instruction is used to change a style of the friend sticker, or rotate the friend sticker, or zoom in or out the friend sticker or drag the friend sticker.
  • the recommending module 402 is configured to display a text input box on the video editing interface, in response to a text input operation on the video editing interface.
  • the recommending module 402 is configured to display the friend recommendation list at a preset position on the video editing interface in response to the video sharing instruction input in the text input box.
  • the displaying module 401 is further configured to display a preset video editing item at the preset position on the video editing interface before the video sharing instruction input by the video editing user in the text input box is received.
  • the preset video editing item is used for editing the video to be synthesized when selected.
  • the recommending module 402 is configured to display the friend recommendation list instead of the preset video editing item in response to the video sharing instruction input in the text input box.
  • the text input operation on the video editing interface includes a long press operation on the video editing interface; or, the video editing interface includes a text control, and the text input operation on the video editing interface includes a selection operation for the text control.
  • the friend recommendation list includes an identification of a user with whom a video is shared by the video editing user within a preset time period; the recommending module 402 is configured to display all identifications in the friend recommendation list at the preset positions when a number of the identifications in the friend recommendation list is less than or equal to a number N of the preset positions, in which N is a positive integer greater than or equal to 1; the recommending module 402 is further configured to display first N-1 identifications in the friend recommendation list at first N-1 preset positions, and to display the remaining identifications behind the first N-1 identifications when a last preset position is triggered, when the number of the identifications in the friend recommendation list is larger than the number of the preset positions.
  • the identifications in the friend recommendation list are arranged according to the duration from the time when a video was shared to the current time, from the shortest to the longest.
  • the recommending module 402 is configured to receive a friend screening condition; the recommending module 402 is further configured to adjust the identifications in the friend recommendation list to enable the identifications in the friend recommendation list to satisfy the friend screening condition.
  • the reminding mark is used for linking to a page corresponding to the target sharing object.
  • the apparatus further includes an acquiring module.
  • the acquiring module 406 is configured to, before the target sharing object is determined in response to the selection instruction for the plurality of sharing objects from the video editing user, acquire the selection instruction for the plurality of sharing objects from the video editing user; the selection instruction for the plurality of sharing objects includes a preset input instruction input after an identification for indicating a sharing object, or a selection operation for an identification displayed at the friend recommendation position.
  • the apparatus 40 further includes a publishing module 407 .
  • the publishing module 407 is configured to, after the synthesized video is generated based on the video to be synthesized and the target sharing object, publish the synthesized video on a video publishing interface in response to a video publishing instruction.
  • the apparatus 40 further includes a prompting module 408 .
  • the prompting module 408 is configured to send a sharing object prompting instruction to a server in response to the video publishing instruction; the sharing object prompting instruction is used for instructing the server to send prompting information corresponding to the synthesized video to a target terminal device; the target terminal device includes a terminal device corresponding to the target sharing object.
  • FIG. 5 is a possible schematic diagram illustrating the video synthesis apparatus according to the above embodiment.
  • the video synthesis device 50 includes a processor 501 and a memory 502 .
  • the video synthesis device 50 illustrated in FIG. 5 may implement all of the functions of the video synthesis apparatus 40 described above.
  • the functions of the respective modules in the video synthesis apparatus 40 described above may be implemented in the processor 501 of the video synthesis device 50 .
  • the memory module of the video synthesis apparatus 40 corresponds to the memory 502 of the video synthesis device 50 .
  • the processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 501 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like.
  • the different processing units may be independent devices or may be integrated in one or more processors.
  • the memory 502 may include one or more computer readable storage mediums, which may be non-transitory.
  • the memory 502 may also include a high-speed random access memory, as well as a non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices.
  • the non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for executing by processor 501 to implement the video synthesis method provided by the method embodiments of the present disclosure.
  • the video synthesis device 50 may also optionally include a peripheral interface 503 and at least one peripheral device.
  • the processor 501 , the memory 502 , and the peripheral interface 503 may be connected by a bus or a signal line.
  • Various peripheral devices may be connected to the peripheral interface 503 via the bus, the signal line, or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 504 , a touch screen 505 , a camera 506 , an audio circuit 507 , a positioning component 508 , and a power supply 509 .
  • the peripheral interface 503 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 501 and the memory 502 .
  • the processor 501 , the memory 502 , and the peripheral interface 503 are integrated on a same chip or circuit board; in some other embodiments, any one or two of the processor 501 , the memory 502 , and the peripheral interface 503 may be implemented on individual chips or circuit boards, which is not limited in this embodiment.
  • the radio frequency circuit 504 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 504 communicates with communication networks and other communication devices via electromagnetic signals.
  • the radio frequency circuit 504 converts an electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into the electrical signal.
  • the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 504 may communicate with other video synthesis devices via at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to a metropolitan area network, various generation mobile communication networks (2G, 3G, 4G, and 5G), a wireless local area network, and/or a Wi-Fi (Wireless Fidelity) network.
  • the radio frequency circuit 504 may further include NFC (Near Field Communication) related circuits, which are not limited in the present disclosure.
  • the display screen 505 is used to display a UI (User Interface).
  • the UI may include a graphic, a text, an icon, a video, and any combination thereof.
  • when the display screen 505 is a touch display screen, the display screen 505 also has the ability to capture touch signals on or over a surface of the display screen 505.
  • the touch signal may be input to the processor 501 as a control signal for processing.
  • the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.
  • one display screen 505 may be provided on a front panel of the video synthesis device 50; the display screen 505 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
  • the camera assembly 506 is used to capture images or videos.
  • the camera assembly 506 includes a front camera and a rear camera.
  • the front camera is disposed on the front panel of the video synthesis device
  • the rear camera is disposed on a rear surface of the video synthesis device.
  • the audio circuit 507 may include a microphone and a speaker.
  • the microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing or to the radio frequency circuit 504 for realizing voice communication.
  • a plurality of microphones may be provided at different portions of the video synthesis device 50 , respectively.
  • the microphone may also be an array microphone or an omni-directional acquisition microphone.
  • the speaker is used to convert the electrical signals from the processor 501 or the radio frequency circuit 504 into the sound waves.
  • the speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting the electric signals into sound waves audible to a human being, or converting the electric signals into sound waves inaudible to the human being to measure a distance.
  • the audio circuit 507 may also include a headphone jack.
  • the positioning component 508 is used to locate the current geographic location of the video synthesis device 50 to implement navigation or LBS (Location Based Service).
  • the positioning component 508 may be a positioning component based on the Global Positioning System (GPS) in the United States, the Beidou System in China, the GLONASS System in Russia, or the Galileo System in the European Union.
  • a power supply 509 is used to power various components in the video synthesis device 50 .
  • the power supply 509 may be an alternating current power source, a direct current power source, a disposable battery or a rechargeable battery.
  • the rechargeable battery may support wired or wireless charging.
  • the rechargeable battery may also be used to support a quick charging technology.
  • the video synthesis device 50 also includes one or more sensors 510 .
  • the one or more sensors 510 include, but are not limited to: an acceleration sensor, a gyroscope sensor, a pressure sensor, a fingerprint sensor, an optical sensor, and a proximity sensor.
  • the acceleration sensor may detect a magnitude of acceleration in three coordinate axes of a coordinate system established by the video synthesis device 50 .
  • the gyroscope sensor may detect a body orientation and a rotation angle of the video synthesis device 50, and may cooperate with the acceleration sensor to capture a 3D action of the user on the video synthesis device 50.
  • the pressure sensor may be disposed on a side frame of the video synthesis device 50 and/or on a lower layer of the touch screen 505. When the pressure sensor is provided on the side frame of the video synthesis device 50, the user's holding signal with respect to the video synthesis device 50 may be detected.
  • the fingerprint sensor is used for collecting a fingerprint of the user.
  • the optical sensor is used for collecting an intensity of ambient light.
  • the proximity sensor, also called a distance sensor, is usually provided on the front panel of the video synthesis device 50. The proximity sensor is used to determine a distance between the user and the front of the video synthesis device 50.
  • the present disclosure also provides a computer-readable storage medium, where instructions are stored, and when the instructions in the storage medium are executed by a processor of the video synthesis device, the video synthesis device is enabled to execute the video synthesis method provided in Embodiment 1 or Embodiment 2 of the present disclosure.
  • Embodiments of the present disclosure further provide a computer program product including instructions, which when run on the video synthesis apparatus, cause the video synthesis apparatus to perform the video synthesis method provided in Embodiment 1 or Embodiment 2 of the present disclosure.
  • the logic and/or steps described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function, may be specifically achieved in any computer readable medium to be used by an instruction execution system, device or equipment (such as a computer-based system, a system comprising processors, or another system capable of obtaining the instructions from the instruction execution system, device or equipment and executing the instructions), or to be used in combination with the instruction execution system, device or equipment.
  • the computer readable medium may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment.
  • examples of the computer readable medium include but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device, and a portable compact disk read-only memory (CD-ROM).
  • the computer readable medium may even be paper or another appropriate medium on which the programs can be printed, because the paper or other medium may be optically scanned and then edited, interpreted, or processed with other appropriate methods when necessary to obtain the programs electronically, and the programs may then be stored in computer memories.
  • each part of the present disclosure may be realized by the hardware, software, firmware or their combination.
  • a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instruction execution system.
  • the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA), a field programmable gate array (FPGA), etc.
  • each function cell of the embodiments of the present disclosure may be integrated in a processing module, or each of the cells may exist physically alone, or two or more cells may be integrated in one processing module.
  • the integrated module may be realized in a form of hardware or in a form of a software function module. When the integrated module is realized in the form of a software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
  • the storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.
  • the apparatus disclosed in several embodiments provided by the present disclosure can be realized in any other manner.
  • the apparatus embodiments described above are merely exemplary; for example, the units are divided only according to logic functions.
  • the units can be divided in other manners, for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not executed.
  • the mutual coupling, direct coupling, or communication connection described or discussed may be implemented via some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or of other forms.
  • the units illustrated as separate components may or may not be physically separated, and components described as units may or may not be physical units, i.e., they may be located at one place or distributed onto multiple network units. Some or all of the units may be selected according to actual needs to realize the objective of embodiments of the present disclosure.

Abstract

The present disclosure provides a method and an apparatus for synthesizing a video, and a storage medium. The method is implemented as follows. A video to be synthesized is displayed on a video editing interface. In response to a video sharing instruction, a friend recommendation list is acquired and displayed. The friend recommendation list is used for indicating a plurality of sharing objects. A synthesized video is generated based on the video and a target sharing object selected from the plurality of sharing objects. A reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • The present application is a Continuation application of U.S. application Ser. No. 17/030,959, which is based on and claims priority under 35 U.S.C. § 119 to Chinese Application No. 201911194514.4, filed with the China National Intellectual Property Administration on Nov. 28, 2019, the entire content of which is incorporated herein by reference.
  • FIELD
  • The present disclosure relates to a technical field of short videos, and particularly, to a method and an apparatus for synthesizing a video.
  • BACKGROUND
  • Currently, short video applications are used by an increasing number of people. When a video editing user publishes a video through a short video application, some friends are usually selected when the video is published to prompt the friends to watch the video.
  • SUMMARY
  • The disclosure provides a method and an apparatus for synthesizing a video. The technical solution of the present disclosure will be explained as follows.
  • Embodiments of the present disclosure provide a method for synthesizing a video, the method includes: displaying the video on a video editing interface; acquiring a friend recommendation list in response to a video sharing instruction; in which the friend recommendation list is used for indicating a plurality of sharing objects; displaying the friend recommendation list and generating a synthesized video based on the video and a target sharing object selected from the plurality of sharing objects, wherein a reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
  • Embodiments of the present disclosure provide an apparatus for synthesizing a video. The apparatus includes a displaying module, a recommending module and a video synthesis module.
  • The displaying module is configured to display the video on a video editing interface. The recommending module is configured to acquire a friend recommendation list and display the friend recommendation list in response to a video sharing instruction. The friend recommendation list is used for indicating a plurality of sharing objects. The video synthesis module is configured to generate a synthesized video based on the video to be synthesized and a target sharing object selected from the plurality of sharing objects. A reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
  • Embodiments of the present disclosure provide a video synthesis device. The device includes a processor and a memory. The memory is configured to store instructions executable by the processor; the processor is configured to execute the instructions to implement the method for synthesizing a video as described above.
  • Embodiments of the present disclosure provide a non-transitory computer-readable storage medium including instructions that, when executed by a processor of the apparatus for synthesizing a video, cause the video synthesis apparatus to perform the method for synthesizing a video as described above.
  • Embodiments of the present disclosure provide a computer program product which, when running on the video synthesis device, causes the video synthesis device to perform the method for synthesizing a video described above.
  • It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure and are not to be construed as limiting the present disclosure.
  • FIG. 1A is a schematic diagram illustrating a cell phone interface according to an exemplary embodiment.
  • FIG. 1B is a schematic diagram illustrating a cell phone interface according to an exemplary embodiment.
  • FIG. 1C is a schematic diagram of a network architecture according to an example embodiment.
  • FIG. 2A is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2B is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2C is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2D is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2E is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2F is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 2G is a flow diagram illustrating a method for synthesizing a video according to an exemplary embodiment.
  • FIG. 3A is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3B is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3C is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3D is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3E is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3F is a schematic diagram illustrating a cell phone display interface according to an exemplary embodiment.
  • FIG. 3G is a schematic diagram illustrating a cell phone display interface according to an exemplary embodiment.
  • FIG. 3H is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 3I is a schematic diagram illustrating a cell phone display interface according to an exemplary embodiment.
  • FIG. 3J is a schematic diagram illustrating a cell phone display interface according to an exemplary embodiment.
  • FIG. 3K is a schematic diagram illustrating a set of cell phone display interfaces according to an exemplary embodiment.
  • FIG. 4 is a block diagram illustrating an apparatus for synthesizing a video according to an exemplary embodiment.
  • FIG. 5 is a schematic diagram illustrating an apparatus for synthesizing a video according to an exemplary embodiment.
  • DETAILED DESCRIPTION
  • In order to make the technical solution of the present disclosure better understood, the technical solution in embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
  • It should be noted that the terms “first”, “second” and the like in the specification and claim of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a certain sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that embodiments of the present disclosure described herein are capable of operation in other sequences than those illustrated or described herein. Implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of the apparatus and method consistent with certain aspects of the present disclosure, as detailed in the appended claims.
  • In addition, in the description of embodiments of the present disclosure, “/” indicates an inclusive meaning unless otherwise specified, for example, A/B may indicate A or B; “and/or” herein merely describes an association relationship between associated objects, and means that there may be three relations, for example, A and/or B may refer to: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of embodiments of the present disclosure, “a plurality of” refers to two or more than two.
  • In the existing short video applications, when the user needs to remind friends to watch a video to be published, the user needs to open the video publishing interface and then add the friends whom the user wants to tag (@) on the video publishing interface. For example, as illustrated in FIG. 1A, a schematic view of the video publishing interface provided by the present disclosure is shown, in which a text input box (specifically, the dashed box in FIG. 1A) is displayed, and two controls are displayed below the text input box: an “@ friend” control and a “# topic” control. The user needs to click the “@ friend” control on the video publishing interface and select the friend to be tagged (@); then a reminding mark in a style of “@***” is displayed in the text input box for representing the user who is tagged. As illustrated in FIG. 1A, if the user selects a friend with the name “Kwai culture” to be tagged, the reminding mark in the style of “@ Kwai culture” is displayed in the text input box. Then, after the user clicks a “publish” control in the lower part of FIG. 1A, the video publishing is completed.
  • After the video publishing is completed, as illustrated in FIG. 1B, a video playing interface for the video is provided. The reminding mark in the style of “@ Kwai culture” is displayed in the text below the video picture, and the reminding mark represents that when the video editing user “Zhang San 014” publishes the video, the user “Kwai culture” is reminded to watch the video.
  • In the prior art, when a friend is to be reminded, the friend can only be added on the video publishing interface, and no change can be made to the style of the reminding mark representing the tagged (@) friend. The user experience is therefore poor.
  • The disclosure provides a video synthesis method and a video synthesis apparatus, which are used for at least solving a problem of a poor user experience of reminding friends when a video is published in the related art.
  • First, an application scenario of the technical solution provided by the present disclosure is introduced.
  • Referring to FIG. 1C, a schematic diagram of an implementation environment involved in a method for synthesizing a video provided by the embodiment of the present disclosure is shown. As illustrated in FIG. 1C, the implementation environment may include a server 101 and a plurality of terminal devices (e.g., a terminal device 102, a terminal device 103, a terminal device 104, and a terminal device 105) which may be connected to the server 101 through a wired network or a wireless network.
  • Exemplarily, the terminal device in embodiments of the present disclosure may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) device, a Virtual Reality (VR) device, and the like, which may be installed with an instant messaging application and communicate using the instant messaging application, and a specific form of the electronic device is not limited in the embodiments of the present disclosure.
  • The server 101 may be a network device for storing video data published by a terminal device and distributing the stored video data to a terminal device playing a video. Specifically, each of the plurality of terminal devices may publish an edited video to the server 101. In addition, when a plurality of terminal devices are to play the video, the plurality of terminal devices may access the server 101 to acquire and play the video stored in the server 101.
  • The method for synthesizing a video provided by embodiments of the present disclosure is described below with reference to specific embodiments, and the method provided by this embodiment may be applied to one of the terminal devices in FIG. 1C, so that when the terminal device publishes the video data, a target sharing object that needs to be reminded may be determined on a video editing interface, and when the video is published on a video publishing interface, the video data with a reminding mark displayed on a video picture is published. Specifically, the video data may be published to the server 101 in FIG. 1C.
  • FIG. 2A illustrates a method for synthesizing a video according to an embodiment. As illustrated in FIG. 2A, the method includes the following blocks. The method may be implemented by a terminal device.
  • At S201, the terminal device displays a video on a video editing interface.
  • At S202, the terminal device acquires and displays a friend recommendation list in response to a video sharing instruction.
  • In some embodiments, the friend recommendation list may include a plurality of sharing objects.
  • At S205, the terminal device generates a synthesized video based on the video and a target sharing object selected from the plurality of sharing objects.
  • A reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
  • In some embodiments, as illustrated in FIG. 2B, the method may include the following steps S201-S203 and S205:
  • At S201, a terminal device displays a video to be synthesized on a video editing interface.
  • For example, in order to edit a video file, the video editing interface for editing the video file is generally provided in a short video application. The video editing interface displays picture content of the video to be synthesized, which needs to be edited, so that a user may correspondingly edit the picture content of the video to be synthesized.
  • At S202, in response to a video sharing instruction, the terminal device acquires and displays a friend recommendation list.
  • In some embodiments, the video sharing instruction may be sent from a video editing user, for example, by a user interface of the terminal device. The friend recommendation list is used for indicating a plurality of sharing objects.
  • The plurality of sharing objects may specifically include a plurality of friends of the video editing user. The friend recommendation list may include identifications of the plurality of sharing objects. For example, the friend recommendation list includes any one or more of head portraits, nicknames, and remark names of the plurality of sharing objects described therein. For example, the friend recommendation list may include the head portraits and the nicknames of the plurality of sharing objects, or the friend recommendation list may include the head portraits and the remark names of the plurality of sharing objects, and the like. Certainly, the friend recommendation list may further include other information for distinguishing an identity of each of the plurality of sharing objects.
  • The video editing interface in the present disclosure includes an interface for editing content of the video data. Specifically, on the video editing interface, the user may edit the video picture of the video data. For example, cool and warm tones in the video picture may be adjusted, stickers may be added to the video picture, beauty effects may be added to a person in the video picture, magic expressions may be added to the person in the video picture, and the like.
  • It should be noted that, the video editing user referred to in the present disclosure may include the user logged in through a current terminal device.
  • In some cases, one of the plurality of sharing objects may have both a nickname and a remark name, while the remark name of the sharing object is usually set by the video editing user in order to distinguish the sharing object from other sharing objects, and the nickname is a name taken by the sharing object itself. Therefore, in order to enable the video editing user to distinguish each of the plurality of sharing objects, in the present disclosure, if at least one sharing object in the plurality of sharing objects indicated by the friend recommendation list includes the remark name, the identifications of the plurality of sharing objects included in the friend recommendation list specifically include: the remark name of the at least one sharing object, and the nicknames of the other sharing objects.
  • In addition, the video sharing instruction in the present disclosure may include a preset symbol, for example, “@”. Of course, the video sharing instruction may be another symbol, for example, “#”, and the like.
  • In the present disclosure, the video sharing instruction may include an instruction with the preset symbol displayed by the terminal device in response to a first operation of the user on the video editing interface. For example, the video sharing instruction may be an instruction with the @ symbol displayed by the terminal device in a text input box in response to the first operation in which the user inputs “@” in the text input box on the video editing interface. For another example, the video sharing instruction may also be an instruction by which the terminal device displays a friend sticker containing the video sharing instruction on the video editing interface in response to a second operation of the user on a friend sticker control on the video editing interface.
  • At S203, in response to a selection instruction for the plurality of sharing objects, the terminal device determines the target sharing object.
  • The selection instruction may be sent from the video editing user and include a third operation performed by the video editing user on the friend recommendation list displayed on the video editing interface.
  • Specifically, the video editing interface may include one or more preset friend recommendation positions, the one or more preset friend recommendation positions are used to display the identifications of one or more sharing objects in the plurality of sharing objects indicated by the friend recommendation list. The selection instruction from the video editing user to select the sharing object may be a click operation of the video editing user for a friend recommendation position in the one or more friend recommendation positions. After the terminal device detects the click operation for the friend recommendation position in the one or more friend recommendation positions, the target sharing object is determined from the plurality of sharing objects indicated by the friend recommendation list.
  • At S205, the terminal device generates a synthesized video based on the video to be synthesized and the target sharing object.
  • The reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
  • In the present disclosure, the reminding mark is displayed on the video picture by generating the synthesized video in which the reminding mark is displayed on the video picture. Compared with the method for adding the reminding mark on the video publishing interface in the prior art, the video synthesis method provided by the present disclosure adds the reminding mark on the video editing interface, and further adds the reminding mark into the video picture, so that the way of reminding the sharing object is enriched, and the user experience is improved.
  • Specifically, after the synthesized video is generated in step S205, the terminal device may display the video picture of the synthesized video on the video editing interface, so that the video editing user may preview information such as the position, style, and size of the reminding mark. The reminding mark may specifically be: the symbol @ and a target identification behind the symbol @. The target identification includes an identification of the target sharing object.
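  • As a minimal, non-limiting sketch of the reminding mark format described above, the Kotlin snippet below composes “@” followed by the target identification, preferring the remark name over the nickname as explained earlier; the class and function names, as well as the remark name used in the example, are assumptions for illustration and not the patented implementation.

    // Illustrative only: build the reminding mark text ("@" plus the target identification).
    data class SharingObject(val nickname: String, val remarkName: String? = null)

    fun reminderMarkText(target: SharingObject): String {
        val identification = target.remarkName ?: target.nickname  // remark name preferred when set
        return "@ $identification"
    }

    fun main() {
        println(reminderMarkText(SharingObject(nickname = "Sunny")))                    // @ Sunny
        println(reminderMarkText(SharingObject("Li Si", remarkName = "Best friend")))   // hypothetical remark name
    }
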
  • In one implementation, a display location of the reminding mark on the video picture may be selected in order to satisfy the user's preference. As illustrated in FIG. 2C, before step S205 is performed, the method may further include the following.
  • At S204, the terminal device determines a target position on the video picture according to a reminding mark position determining instruction.
  • Specifically, the reminding mark position determining instruction may include a fifth operation performed by the user on the target position on the video picture displayed on the video editing interface.
  • Further, in the synthesized video generated in step S205, the reminding mark for indicating the target sharing object may be displayed at the target position on the video picture of the synthesized video.
  • In the present disclosure, before the video data is published, the target position of the reminding mark on the video picture is determined according to the reminding mark position determining instruction, so that the user can set the position of the reminding mark on the video picture of the synthesized video according to his/her own preference. The user experience is improved.
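  • A possible way to realize this, sketched here purely for illustration and under assumed names, is to keep the position chosen on the editing interface as a fraction of the video frame, so that the reminding mark lands at the same relative spot on every frame of the synthesized video regardless of resolution.

    // Assumed representation of the target position chosen by the reminding mark position
    // determining instruction (e.g., a long press on the video picture).
    data class NormalizedPosition(val x: Double, val y: Double)  // both values in [0.0, 1.0]

    // Convert the normalized position into pixel coordinates for a given frame size.
    fun toPixelPosition(pos: NormalizedPosition, frameWidth: Int, frameHeight: Int): Pair<Int, Int> =
        Pair((pos.x * frameWidth).toInt(), (pos.y * frameHeight).toInt())
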
  • Further, in one implementation, as illustrated in FIG. 2C, after the terminal device generates the synthesized video based on the video to be synthesized and the target sharing object at step S205, the method further includes the following.
  • At S206, the terminal device publishes the synthesized video on a video publishing interface in response to a video publishing instruction.
  • The video publishing instruction may be sent from the video editing user and may include a fourth operation of clicking the video publishing interface by the user. The fourth operation may be an operation of clicking a publish control. In addition, the video publishing instruction may also be another instruction for triggering publishing of the video data in the terminal device. For example, after a function of publishing at a fixed time is set in the terminal device, the video publishing instruction may be an instruction that the terminal device determines that a preset time is up.
  • In the prior art, when the video is published, if a certain sharing object (for example, a friend of the video editing user) is to be reminded to watch the video, it is only possible to add a text for reminding the sharing object on the video publishing interface (for example, a text of “@ friend” is input on the video publishing interface), so that when the video is played, the information for reminding the sharing object may only be displayed at a preset text display position. In the video synthesis method provided by the present disclosure, in the process of editing the displayed video to be synthesized on the video editing interface, after the video sharing instruction (for example, the symbol “@”) is received, the friend recommendation list of the video editing user is obtained in response to the video sharing instruction, and then the target sharing object is determined in response to the selection instruction. The reminding mark is displayed on the video picture by generating the synthesized video with the reminding mark displayed on the video picture. Compared with the method for adding the reminding mark on the video publishing interface in the prior art, the video synthesis method provided by the present disclosure adds the reminding mark on the video editing interface, and further adds the reminding mark to the video picture, so that the way of reminding the sharing object is enriched, and the user experience is improved.
  • In one possible approach, the friend sticker may be added to a sticker function of the video editing interface. When the terminal device receives the operation of selecting the friend sticker by the user, step S202 is triggered: in response to the video sharing instruction from the video editing user, the friend recommendation list of the video editing user is obtained and displayed. Moreover, the identification of the target sharing object selected in S203 may be displayed in the friend sticker. When the video is published, the friend sticker may be used as the reminding mark which is displayed on the video picture and used for indicating the target sharing object. Based on the above consideration, as illustrated in FIG. 2D, the step S202 in the above embodiment may include the following steps S202 a 1-S202 a 2.
  • At S202 a 1, the terminal device displays a sticker panel on the video editing interface in response to a selection operation for the sticker control. The sticker panel includes a plurality of sticker controls containing the friend sticker control.
  • Illustratively, when the terminal device is a mobile phone, an implementation effect of each step in the present disclosure will be described below by taking the mobile phone as an example. FIG. 3A (a) illustrates the video editing interface displayed on the mobile phone. The video editing interface is used for editing a video. For example, a sticker may be added to the video, a text may be added to the video, music may be added to the video, and the video may be saved to a local storage space. For example, the video editing interface illustrated in FIG. 3A (a) includes a sticker control, a text control, a music control, and a save control. When the different controls are clicked, the functions of adding the sticker to the video, adding the text to the video, adding the music to the video, and storing the video in the local storage space are correspondingly realized. In FIG. 3A (a), after the mobile phone detects an operation of clicking the sticker control by the user, as illustrated in diagram (b) of FIG. 3A, the mobile phone may display the sticker panel on the video editing interface. The sticker panel includes the plurality of sticker controls including the friend sticker control. For example, in diagram (b) of FIG. 3A, the sticker panel includes a location sticker control, an @ friend sticker control, a time sticker control, and a plurality of expression sticker controls. These controls are respectively used for displaying corresponding stickers on the video picture of the video editing interface after being triggered.
  • For example, the friend sticker control in embodiments of the present disclosure may include the @ friend sticker control in FIG. 3A (b) described above.
  • At S202 a 2, the terminal device displays a friend sticker containing the video sharing instruction on the video editing interface in response to a sticker selection operation for the friend sticker control, and displays the friend recommendation list at a preset position.
  • Specifically, the sticker selection operation for the friend sticker control may include an operation of clicking the friend sticker control. In an embodiment, a friend recommendation position may be displayed at the preset position, and the friend recommendation list is displayed at the friend recommendation position.
  • For example, as illustrated in FIG. 3A (b), after the user clicks the @ friend control therein, the mobile phone displays the friend sticker containing the video sharing instruction on the video editing interface in response to the operation of clicking the @ friend control by the user.
  • The video sharing instruction may include a symbol @.
  • For example, after the mobile phone detects the operation of clicking the @ friend control by the user as shown in FIG. 3A (b), the mobile phone displays the friend sticker 301 containing the video sharing instruction as shown in FIG. 3A (c). It can be seen that friend sticker 301 includes the video sharing instruction “@”. And the mobile phone also displays the friend recommendation position at the preset position on the video editing interface, and displays the friend recommendation list at the friend recommendation position. Specifically, FIG. 3A (c) includes a plurality of friend recommendation positions 302. And the friend recommendation list is displayed at the friend recommendation positions. Specifically, as illustrated in FIG. 3A (c), the identifications of the sharing objects in the friend recommendation list are displayed at the plurality of friend recommendation positions 302, respectively.
  • In FIG. 3A (c), the user may slide the plurality of friend recommendation positions 302 to the left or right to find the sharing object to be selected. Specifically, as illustrated in FIG. 3B (a), when the plurality of friend recommendation positions 302 on the mobile phone are slid to the rightmost side, one friend recommendation position “more” is provided at the rightmost side. When the user clicks the friend recommendation position “more”, a friend menu is displayed on the video editing interface of the mobile phone, as illustrated in FIG. 3B (b). In the friend menu, a prompt of “selecting a friend to be concerned” is displayed, and the user may select an object that needs to be reminded in the friend menu. In addition, the user may slide the friend menu upwardly or downwardly to check all members in the friend menu. In addition, the user may click one letter in the letter set including letters from A to Z on the right side of the friend menu, and after the mobile phone detects the operation of clicking the letter by the user, the mobile phone displays, in the friend menu, the members whose names start with that letter. In addition, after the user selects a search input box marked with the word “search” in the friend menu, as illustrated in FIG. 3B (c), the user may also input a nickname and/or a remark name of a friend in the search input box, and the mobile phone displays the friend matching the content input by the user in response to the input operation of the user, so that the user may select the friend.
  • For example, as illustrated in FIG. 3C (a), after the target sharing object is determined in response to the selection instruction for selecting the sharing object, the identification “Sunny” of the target sharing object may be added to the friend sticker. Furthermore, in a subsequent step, displaying the video data with the reminding mark on the video picture may include displaying the video data with the friend sticker displayed on the video picture.
  • In one implementation, the video synthesis method further includes: in response to an editing instruction for the friend sticker from the video editing user, editing the friend sticker based on the editing instruction. The editing instruction is used to change a style of the friend sticker, or rotate the friend sticker, or zoom in or out the friend sticker or drag the friend sticker.
  • Specifically, in the present disclosure, the style, color, size, and the like of the friend sticker on the video editing interface may be edited, so that a visual effect of the friend sticker may better conform to the preference of the user.
  • Specifically, in the present disclosure, a plurality of editing controls may be disposed outside the friend sticker on the video editing interface, and the plurality of editing controls are respectively used to implement an effect of changing the style of the friend sticker, or rotating the friend sticker, or zooming in or out the friend sticker or dragging the friend sticker.
  • For example, as illustrated in FIG. 3C(a), on the video editing interface of the mobile phone, there are provided a first control 303, a second control 304, and a third control 305, respectively on the outside of the friend sticker. When the mobile phone detects an operation of clicking the first control 303 by the user, as illustrated in FIG. 3C (b), the style of the identification “Sunny” in the friend sticker is changed. Here, changing the style of the identification “Sunny” specifically refers to changing the font of the identification “Sunny”. In some implementations, the color, size, and shape of the friend sticker of the identification “Sunny” may also be changed.
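  • The editing controls described above can be pictured, as a purely illustrative sketch with assumed names, as operations over a small sticker state; each control simply produces an updated copy of that state, which is then rendered on the video picture.

    // Illustrative sticker state for the editing controls described above
    // (change style, rotate, zoom in or out, drag); names are assumptions.
    data class FriendSticker(
        val text: String,                // e.g. "@ Sunny"
        val styleIndex: Int = 0,         // index into a set of preset fonts/colors/shapes
        val rotationDegrees: Float = 0f,
        val scale: Float = 1f,
        val x: Float = 0.5f,             // normalized position on the video picture
        val y: Float = 0.5f
    )

    fun FriendSticker.nextStyle(styleCount: Int) = copy(styleIndex = (styleIndex + 1) % styleCount)
    fun FriendSticker.rotateBy(degrees: Float) = copy(rotationDegrees = rotationDegrees + degrees)
    fun FriendSticker.zoomBy(factor: Float) = copy(scale = scale * factor)
    fun FriendSticker.dragTo(newX: Float, newY: Float) = copy(x = newX, y = newY)
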
  • In another feasible implementation, based on the video synthesis method of the embodiment, the video sharing instruction may be generated by using a process of inputting text in the text input box on the video editing interface, so as to trigger the step S202 of obtaining and displaying the friend recommendation list of the video editing user in response to the video sharing instruction from the video editing user. Based on the above consideration, as illustrated in FIG. 2D, the step S202 specifically includes the following steps S202 b 1-S202 b 2.
  • At S202 b 1, in response to a text input operation on the video editing interface, the text input box is displayed on the video editing interface.
  • Specifically, the text input operation on the video editing interface may include a long press operation of the user on the video editing interface.
  • For example, FIG. 3D (a) shows the video editing interface displayed on the mobile phone. The video editing interface is used for editing videos. For example, a sticker may be added to the video, a text may be added to the video, a music may be added to the video, and the video may be saved to a local storage space. For example, the video editing interface illustrated in FIG. 3D (a) includes the “sticker” control, the “text” control, the “music” control, and the “save” control. When the different controls are clicked, the functions of adding the sticker to the video, adding the text to the video, adding the music to the video and storing the video in the local storage space are correspondingly realized. In FIG. 3D (a), after the mobile phone detects the operation of long pressing the target position on the video picture by the user, as illustrated in FIG. 3D (b), the mobile phone may display the text input box 306 at the target position long pressed by the user.
  • Or, when the text control is included in the video editing interface, the text input operation includes the selection operation for the text control.
  • For example, in FIG. 3D (a), when the user clicks the “text” control, as illustrated in FIG. 3D (b), the mobile phone may display the text input box 306 at the target position.
  • In addition, it should be noted that, in the above embodiment, the reminding mark position determining instruction at step S204 may also include the operation that the user long presses the target position on the video picture, so as to determine the target position of the reminding mark on the video picture. Of course, in some scenarios, when the text input box 306 includes the identification of the target sharing object, the reminder mark may specifically include the content in the text input box 306.
  • At S202 b 2, in response to the video sharing instruction input in the text input box, the friend recommendation list is displayed at the preset position on the video editing interface. In an embodiment, a friend recommendation position may be displayed at the preset position, and the friend recommendation list is displayed at the friend recommendation position.
  • Specifically, the video sharing instruction may be a symbol “@”.
  • For example, after the terminal device displays the text input box on the video editing interface, a keyboard for receiving a user input may also be displayed on the video editing interface, as illustrated in FIG. 3D (b). At this time, the user may input information such as characters and symbols into the text input box 306 through the keyboard. In addition, a preset video editing item 307 is also displayed above the keyboard. Specifically, the preset video editing item 307 includes a plurality of circular controls for determining a color of the information in the text input box 306. Each circular control corresponds to a color (it should be noted that, in FIG. 3D (b), different shades in the plurality of circular controls are used to represent different colors, and in the following figures, the colors are also distinguished in this way, which will not be elaborated below), and when the user clicks one of the circular controls, the information in the text input box 306 changes to the corresponding color. Then, in FIG. 3D (b), the user may click the “123” control in the keyboard to switch the keyboard.
  • After the mobile phone receives the operation of clicking the “123” control in FIG. 3D (b) by the user, the shape of the keyboard on the video editing interface displayed by the mobile phone is illustrated in FIG. 3D (c).
  • In FIG. 3D (c), after the user clicks the “@” control in the keyboard on the video editing interface, the friend recommendation list is displayed on the video editing interface displayed by the mobile phone. The friend recommendation list is used for indicating a plurality of sharing objects. Specifically, the friend recommendation list may include identifications of the plurality of sharing objects. Specifically, the identification includes a head portrait, a nickname, the remark name, and the like.
  • In one implementation, a friend recommendation position is displayed at a preset position on the video editing interface displayed by the mobile phone, and the friend recommendation position is used for displaying the friend recommendation list. For example, in FIG. 3D (c), after the user clicks the “@” control, the video editing interface displayed by the mobile phone is illustrated in FIG. 3D (d), which includes the friend recommendation positions 302. With regard to the function and the use mode of the friend recommendation positions 302 and the description of the “more” control among the friend recommendation positions 302, reference may be made to the above description of the friend recommendation positions 302, which is not elaborated here.
  • In one implementation, in order to fully utilize display resources of the terminal device, when the friend recommendation list is displayed, the preset position for displaying the preset video editing item may be used to display the friend recommendation list. Specifically, before the video sharing instruction input via the text input box 306 is received, the preset video editing item is displayed at the preset position on the video editing interface, and the preset video editing item, when selected, is used for correspondingly editing the video. Further, at step S202 b 2, in response to the video sharing instruction input by the video editing user in the text input box, displaying the friend recommendation position on the video editing interface may include: in response to the video sharing instruction input by the video editing user in the text input box, displaying the friend recommendation list instead of the preset video editing item at the preset position on the video editing interface.
  • For example, in FIG. 3D (c), the preset video editing item 307 is displayed at the preset position on the video editing interface displayed by the mobile phone. Then, when the mobile phone detects the operation that the user clicks the video sharing instruction “@” on the keyboard, the interface illustrated in FIG. 3D (d) is further displayed, in which the friend recommendation positions 302 are displayed at the preset position on the video editing interface instead of the preset video editing item 307.
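  • A minimal sketch of this swap, with assumed type and function names and no claim to the actual implementation, is to decide what occupies the preset position from the text currently entered in the input box: while the text ends with the video sharing symbol “@”, the friend recommendation list takes the place of the preset video editing item.

    // Illustrative only: decide what is shown at the preset position above the keyboard.
    sealed interface PresetPositionContent
    object PresetVideoEditingItem : PresetPositionContent
    data class FriendRecommendationList(val identifications: List<String>) : PresetPositionContent

    fun contentForInput(text: String, recommendations: List<String>): PresetPositionContent =
        if (text.endsWith("@")) FriendRecommendationList(recommendations)  // "@" just entered
        else PresetVideoEditingItem                                        // keep the editing item
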
  • In some embodiments, in the present disclosure, the friend recommendation list includes the identification of a user with whom the video is shared by the video editing user within a preset time period.
  • In some embodiments, in response to that a number of the identifications in the friend recommendation list is less than or equal to a number N of the friend recommendation positions, all the identifications in the friend recommendation list are displayed at the friend recommendation positions, in which N is a positive integer greater than or equal to 1.
  • In some embodiments, in response to that the number of the identifications in the friend recommendation list is larger than the number of the friend recommendation positions, first N−1 identifications in the friend recommendation list are displayed at first N−1 friend recommendation positions, and remaining identifications behind the first N−1 identifications in the friend recommendation list are displayed when a last friend recommendation position is triggered.
  • For example, assuming that N is 8, on the video editing interface, there are at most 8 friend recommendation positions for displaying the identifications included in the friend recommendation list. When the number of the identifications in the friend recommendation list is less than or equal to 8, the identifications may be completely displayed at the friend recommendation positions. When the number of the identifications in the friend recommendation list exceeds 8, the first 7 friend recommendation positions are used for displaying the first 7 identifications in the friend recommendation list, and the 8th friend recommendation position may be triggered to display another window in which the remaining identifications other than the first 7 identifications are displayed.
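  • Purely as an illustrative sketch of the layout rule above (and not the patented implementation), the Kotlin function below shows the two branches: when the identifications fit within the N friend recommendation positions they are all shown, and otherwise the first N−1 are shown followed by a “more” entry that opens the full friend menu; the function name and the placeholder friend names are assumptions.

    // Sketch of the friend recommendation position layout rule; n is the number of positions.
    fun positionsToDisplay(identifications: List<String>, n: Int): List<String> =
        if (identifications.size <= n) {
            identifications                       // all identifications fit at the N positions
        } else {
            identifications.take(n - 1) + "more"  // last position opens the friend menu
        }

    fun main() {
        val friends = (1..9).map { "Friend$it" }         // placeholder names for illustration
        println(positionsToDisplay(friends, 8))          // Friend1..Friend7 followed by "more"
        println(positionsToDisplay(friends.take(5), 8))  // all five identifications are shown
    }
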
  • For example, as described above with reference to FIG. 3A (c) and FIG. 3B, the user may slide the friend recommendation positions 302 to the left or right to find the sharing object to be selected, and the friend recommendation position “more” at the rightmost side may be clicked to open the friend menu, in which the user may scroll through all members, locate members by their first letter, or search by inputting a nickname and/or a remark name, so as to select the friend.
  • In one implementation, the identifications in the friend recommendation list are arranged according to the duration from the time at which a video was shared with the corresponding user to the current time, from the shortest to the longest.
  • For example, in FIG. 3A (c), among the 7 identifications respectively displayed at the friend recommendation positions 302, “Xue Bao” is the user with whom the video editing user shared a video most recently, “Zhang San” is the second, “Li Si” is the third, then “Wang Wu”, and so on.
  • It should be noted that, in the present disclosure, the time of the shared video may include the time when the video editing user shares a forwarded video with the user corresponding to the identification, or may also be the time when the video editing user tags (“@”) other users.
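  • As a small, assumption-labeled sketch of this ordering (field and function names are illustrative only), each candidate friend can carry the timestamp of the most recent share or “@” mention, and the list is sorted by the elapsed time since that timestamp, shortest first.

    // Illustrative only: order the friend recommendation list by how recently a video was
    // shared with (or "@"-mentioned to) each user, most recent first.
    data class RecommendedFriend(val identification: String, val lastSharedAtMillis: Long)

    fun sortByRecency(friends: List<RecommendedFriend>, nowMillis: Long): List<String> =
        friends.sortedBy { nowMillis - it.lastSharedAtMillis }  // shortest elapsed duration first
               .map { it.identification }
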
  • In one implementation, as illustrated in FIG. 2F, at step S202, in response to the video sharing instruction from the video editing user, obtaining and displaying the friend recommendation list of the video editing user includes the following.
  • At S202c1, a friend screening condition is received.
  • In an embodiment, the video editing user may input the friend screening condition after the video sharing instruction.
  • At S202c2, in response to the friend screening condition, the identifications in the friend recommendation list are adjusted to enable the identifications in the friend recommendation list to satisfy the friend screening condition.
  • Specifically, receiving the friend screening condition input after the video sharing instruction may include inputting the friend screening condition in the friend sticker in the above embodiment.
  • In addition, receiving the friend screening condition input after the video sharing instruction may further include inputting the friend screening condition in the text input box in the above embodiment.
  • Illustratively, as illustrated in FIG. 3E (a), after the user sequentially inputs “S”, “u”, “n” in the text input box 306, objects having the nicknames “Sun Liu” and “Sunny” are displayed at the friend recommendation positions 302.
  • When the characters input by the user do not match any reminding object, only the “more” control is displayed at the friend recommendation positions 302, as illustrated in FIG. 3E (b).
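  • A minimal sketch of the screening step follows (the matching rule is an assumption made for illustration; the disclosure only requires that the remaining identifications satisfy the condition): the characters typed after the video sharing instruction are matched against each candidate's nickname and remark name, and only the matching identifications are kept in the friend recommendation list.

    def screen_friends(candidates, condition):
        """candidates: list of (nickname, remark_name); condition: characters typed by the user."""
        cond = condition.lower()
        return [nickname for nickname, remark in candidates
                if cond in nickname.lower() or cond in (remark or "").lower()]

    if __name__ == "__main__":
        candidates = [("Sun Liu", None), ("Sunny", None),
                      ("Zhang San", None), ("Li Si", "Lily")]
        print(screen_friends(candidates, "Sun"))  # ['Sun Liu', 'Sunny']
        print(screen_friends(candidates, "xyz"))  # [] -> only the "more" control is displayed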
  • In one implementation, in the present disclosure, the reminding mark is used to link to a page corresponding to the target sharing object. Illustratively, FIG. 3F is a schematic diagram illustrating a playing interface of a published video. The video picture includes the reminding mark used for indicating the sharing object. Specifically, the reminding mark includes “@ Sunny” and “@ Yang Yang” input in the text input box.
  • When the user watching the video clicks the reminding mark on the video picture, the terminal device displays the page of the sharing object corresponding to the reminding mark. Specifically, the page may be a home page of the sharing object or a profile page in the short video application.
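  • How a tapped reminding mark resolves to a page can be sketched as below (the user identifiers and URL scheme are hypothetical; the disclosure only requires that the mark links to the sharing object's home or profile page).

    REMINDING_MARKS = {
        "@ Sunny": "user_1001",      # assumed internal user ids
        "@ Yang Yang": "user_1002",
    }

    def resolve_reminding_mark(mark, base_url="https://example.com/profile/"):
        """Return the profile page URL for a tapped reminding mark, or None if unknown."""
        user_id = REMINDING_MARKS.get(mark)
        return base_url + user_id if user_id else None

    if __name__ == "__main__":
        print(resolve_reminding_mark("@ Sunny"))    # https://example.com/profile/user_1001
        print(resolve_reminding_mark("@ Nobody"))   # None -> nothing to open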
  • In one implementation, at step S202, before determining the target sharing object in response to the selection instruction from the video editing user for the plurality of sharing objects, the method provided by the present disclosure further includes: acquiring the selection instruction for the plurality of sharing objects from the video editing user.
  • The selection instruction includes the identification for indicating the sharing object and a preset input instruction input after the video sharing instruction, or the selection operation for the identification displayed at the friend recommendation position.
  • The identification used for indicating the sharing object and the preset input instruction input after the video sharing instruction specifically include: the identification of the sharing object input in the friend sticker provided in the above embodiment, and the preset input instruction input after the identification of the sharing object.
  • Specifically, the preset input instruction may be a space instruction.
  • Illustratively, as illustrated in FIG. 3G, after “Sunny” is entered in the friend sticker 301, a space is entered. A text of “@ Sunny” is displayed in the friend sticker 301, indicating that the user with the nickname of Sunny has been selected.
  • In the present disclosure, in order to distinguish the selected object, after the target sharing object is determined, the present disclosure may further include: displaying the identification of the target sharing object in the friend sticker in a display mode different from that of other input information. For example, the identification of the target sharing object may be underlined, as is “Sunny” in the friend sticker 301 in FIG. 3G.
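  • The space-confirmed selection and the distinct display mode can be sketched as follows (the tokenization and rendering are assumptions made for illustration): the text typed after the “@” instruction is treated as a candidate nickname, and a trailing space (the preset input instruction) confirms the selection, after which the identification is rendered in a distinct style such as underlining.

    def parse_selection(text_after_at, known_nicknames):
        """Return (candidate, confirmed) for the text typed after the '@' instruction."""
        if " " not in text_after_at:
            return text_after_at, False          # still typing, nothing confirmed yet
        candidate = text_after_at.split(" ", 1)[0]
        # A space after a known nickname acts as the preset input instruction.
        return candidate, candidate in known_nicknames

    def render(candidate, confirmed):
        """Mark a confirmed identification (underlining stands in as surrounding underscores)."""
        return "_" + candidate + "_" if confirmed else candidate

    if __name__ == "__main__":
        nicknames = {"Sunny", "Yang Yang"}
        print(render(*parse_selection("Sun", nicknames)))     # Sun
        print(render(*parse_selection("Sunny ", nicknames)))  # _Sunny_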
  • In another implementation, in the present disclosure, the selection instruction may further include the selection operation for the identification displayed at the friend recommendation position.
  • For example, in FIG. 3H (a), when the user clicks the identification “Sunny” displayed at the friend recommendation positions 302, the mobile phone also underlines “Sunny” in the text input box 306 in response to the user's click operation, as illustrated in FIG. 3H (b).
  • In another implementation, in the present disclosure, when the information input after the video sharing instruction is not an identification of a sharing object and the preset input instruction is input, the preset video editing item, instead of the friend recommendation list, is displayed at the preset position on the video editing interface.
  • For example, in FIG. 3I, after the user inputs the video sharing instruction “@” in the text input box 306, the user inputs “S”, “u”, “n” and two space instructions in sequence. Then the preset video editing item 307 is displayed at the preset position on the video editing interface.
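  • The fallback above amounts to a small decision rule; a sketch with assumed names follows: after the “@” instruction, if the typed text matches no sharing object and the preset input instruction (a space) has been entered, the preset video editing item is shown at the preset position instead of the friend recommendation list.

    def widget_at_preset_position(text_after_at, matches_any_friend):
        """Decide what to display at the preset position on the video editing interface."""
        space_entered = text_after_at.endswith(" ")
        if space_entered and not matches_any_friend:
            return "preset_video_editing_item"
        return "friend_recommendation_list"

    if __name__ == "__main__":
        print(widget_at_preset_position("Sun", True))     # friend_recommendation_list
        print(widget_at_preset_position("Sun  ", False))  # preset_video_editing_item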
  • In one implementation, when it is determined that the video editing user publishes video data with the reminding mark displayed on the video picture for the first time, the method provided in the present disclosure further includes: displaying prompting information in response to the identification of the target sharing object being displayed in the friend sticker or in the text input box.
  • Illustratively, in FIG. 3J, above the text input box 306, a prompt of “the user being tagged (“@”) may forward this utterance” is displayed.
  • In one implementation, in the present disclosure, the larger the number of characters in the friend sticker or the text input box, the smaller the font size of the characters.
  • Illustratively, as in FIG. 3K, the font of the characters in FIG. 3K (a) is larger than the font of the characters in FIG. 3K (b).
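  • A minimal sketch of this behavior follows (the sizes, step, and threshold are assumptions, not values from the disclosure): the font shrinks in steps as the character count grows, down to a minimum size.

    def font_size_for(text, max_size=40, min_size=16, step=2, chars_per_step=5):
        """Shrink the font by `step` points for every `chars_per_step` characters, with a floor."""
        shrink = (len(text) // chars_per_step) * step
        return max(min_size, max_size - shrink)

    if __name__ == "__main__":
        print(font_size_for("Sunny"))                      # 38
        print(font_size_for("@ Sunny @ Yang Yang hello"))  # 30, smaller font for longer text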
  • In one implementation, as illustrated in FIG. 2G, after the terminal device acquires the video publishing instruction from the video editing user, the method further includes the following steps.
  • At S207, in response to the video publishing instruction, the terminal device sends a sharing object prompting instruction to a server.
  • The sharing object prompting instruction is used for instructing the server to send prompting information corresponding to the synthesized video to a target terminal device; the target terminal device includes the terminal device corresponding to the target sharing object.
  • According to the present disclosure, when the synthesized video is published, the sharing object prompting instruction is sent to the server, such that the server sends the prompting information to the target terminal device. The target terminal device then outputs corresponding information after receiving the prompting information, so that the user of the target terminal device may know in a timely manner that the synthesized video published by the video editing user has a reminding mark mentioning the user.
  • For example, the target terminal device may display the prompting information in the form of “*** has @ you in the utterance” on the display interface after receiving the prompting information from the server, so as to achieve the effect of prompting the user. The field “***” may be the identification such as the nickname, the remark name, or the like of the video editing user.
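  • A possible shape of this exchange is sketched below (the field names and message format are assumptions; only the prompting behavior comes from the disclosure): the publishing client asks the server to notify the tagged users, and the target terminal renders the pushed prompt in the “*** has @ you in the utterance” form.

    def build_prompt_instruction(video_id, publisher_nickname, target_user_ids):
        """Client-side sharing object prompting instruction sent to the server on publishing."""
        return {
            "type": "sharing_object_prompt",
            "video_id": video_id,
            "publisher": publisher_nickname,
            "targets": target_user_ids,
        }

    def render_prompt(prompt):
        """Target-terminal rendering of the pushed prompting information."""
        return prompt["publisher"] + " has @ you in the utterance"

    if __name__ == "__main__":
        instruction = build_prompt_instruction("video_42", "Xue Bao", ["user_1001"])
        print(instruction)
        print(render_prompt(instruction))  # Xue Bao has @ you in the utterance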
  • In addition, after the target terminal device displays the prompting information on the display interface, the target terminal device may also execute a task of forwarding the synthesized video when receiving an operation that the user clicks the prompting information on the display interface.
  • Specifically, after receiving the operation of clicking the prompting information on the display interface by the user, in response to the operation of clicking the prompting information on the display interface by the user, the target terminal device displays the video editing interface of the synthesized video. On the video editing interface, the target terminal device may correspondingly edit the synthesized video according to an operation instruction from the user, and may also modify the reminding mark on the video picture of the synthesized video, or add another reminding mark, or the like. After the editing of the synthesized video is completed, the target terminal device may also publish the synthesized video after the editing.
  • When the target terminal device publishes the synthesized video after the editing, the target sharing object may be considered as a publisher of the synthesized video after the editing. For example, when the target terminal device publishes the synthesized video after the editing, the synthesized video after the editing is published as a work of the target sharing object, and the synthesized video after the editing is displayed in a work column of the target sharing object.
  • Specifically, with regard to the steps of modifying the reminding mark on the video picture of the synthesized video, adding other reminding marks, and publishing the synthesized video after the editing, reference may be made to the contents of steps S201 to S206.
  • In the prior art, when a video is published, if a certain sharing object (for example, a friend of the video editing user) is to be reminded to watch the video, it is only possible to add text for reminding the sharing object on the video publishing interface (for example, the text “@ friend” is input on the video publishing interface), so that when the video is played, the information for reminding the sharing object may only be displayed at the preset text display position. In the video synthesis method provided by the present disclosure, in the process of editing the displayed video to be synthesized on the video editing interface, after the video sharing instruction (for example, the symbol “@”) is received, the friend recommendation list of the video editing user is obtained in response to the video sharing instruction, and then the target sharing object is determined in response to the selection instruction. The reminding mark is displayed on the video picture by generating the synthesized video with the reminding mark displayed on the video picture. Compared with the method of adding the reminding mark on the video publishing interface in the prior art, the video synthesis method provided by the present disclosure adds the reminding mark on the video editing interface and further adds the reminding mark into the video picture, so that the ways of reminding the sharing object are enriched and the user experience is improved.
  • It will be understood that the flow chart, or any process or method described herein in other manners, may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s) or steps of the process. Although the flow chart shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more boxes may be scrambled relative to the order shown.
  • Embodiment 2
  • FIG. 4 is a schematic diagram illustrating an apparatus for synthesizing a video according to an exemplary embodiment. Specifically, the apparatus may be a terminal device. Referring to FIG. 4, the apparatus 40 includes a displaying module 401, a recommending module 402, a determining module 403 and a video synthesis module 404.
  • The displaying module 401 is configured to display a video to be synthesized on a video editing interface.
  • The recommending module 402 is configured to acquire and display a friend recommendation list in response to a video sharing instruction. The friend recommendation list is used for indicating a plurality of sharing objects.
  • The determining module 403 is configured to determine a target sharing object in response to a selection instruction for the plurality of sharing objects.
  • The video synthesis module 404 is configured to generate a synthesized video based on the video to be synthesized and the target sharing object. A reminding mark for indicating the target sharing object is displayed on a video picture of the synthesized video.
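  • The cooperation of the four modules can be sketched as follows (a toy Python outline with assumed method names, not the patent's implementation): the displaying module presents the video, the recommending module fetches and shows the friend recommendation list, the determining module resolves the selection to a target sharing object, and the video synthesis module produces the synthesized video carrying the reminding mark.

    class VideoSynthesisApparatus:
        def __init__(self, friend_service):
            self.friend_service = friend_service     # assumed source of recent share targets

        def display_video(self, video):              # displaying module 401
            print("video editing interface shows:", video)

        def recommend_friends(self, user_id):        # recommending module 402
            return self.friend_service.recent_share_targets(user_id)

        def determine_target(self, recommendations, index):   # determining module 403
            return recommendations[index]

        def synthesize(self, video, target):         # video synthesis module 404
            return {"video": video, "reminding_mark": "@ " + target}

    class FakeFriendService:
        def recent_share_targets(self, user_id):
            return ["Xue Bao", "Zhang San", "Li Si"]

    if __name__ == "__main__":
        apparatus = VideoSynthesisApparatus(FakeFriendService())
        apparatus.display_video("clip.mp4")
        friends = apparatus.recommend_friends("user_1")
        target = apparatus.determine_target(friends, 0)
        print(apparatus.synthesize("clip.mp4", target))  # reminding mark "@ Xue Bao"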
  • In some embodiments, the video editing interface includes a sticker control.
  • The recommending module 402 is configured to display a sticker panel in response to a selection operation for the sticker control on the video editing interface. The sticker panel includes a plurality of sticker controls containing a friend sticker control.
  • The recommending module 402 is configured to display a friend sticker including the video sharing instruction on the video editing interface and display the friend recommendation list at a preset position on the video editing interface in response to a sticker selection operation for the friend sticker control.
  • In some embodiments, the apparatus further includes a sticker editing module 405.
  • The sticker editing module 405 is configured to edit the friend sticker in response to an editing instruction for the friend sticker. The editing instruction is used to change a style of the friend sticker, or rotate the friend sticker, or zoom in or out the friend sticker or drag the friend sticker.
  • In some embodiments, the recommending module 402 is configured to display a text input box on the video editing interface, in response to a text input operation on the video editing interface.
  • The recommending module 402 is configured to display the friend recommendation list at a preset position on the video editing interface in response to the video sharing instruction input in the text input box.
  • In some embodiments, the displaying module 401 is further configured to display a preset video editing item at the preset position on the video editing interface before the video sharing instruction input by the video editing user in the text input box is received. The preset video editing item is used for editing the video to be synthesized when selected.
  • The recommending module 402 is configured to display the friend recommendation list instead of the preset video editing item in response to the video sharing instruction input in the text input box.
  • In some embodiments, the text input operation on the video editing interface includes a long press operation on the video editing interface; or, the video editing interface includes a text control, and the text input operation on the video editing interface includes a selection operation for the text control.
  • In some embodiments, the friend recommendation list includes an identification of a user with whom a video is shared by the video editing user within a preset time period. The recommending module 402 is configured to display all identifications in the friend recommendation list at the preset positions in response to that a number of the identifications in the friend recommendation list is less than or equal to a number N of the preset positions, in which N is a positive integer greater than or equal to 1. The recommending module 402 is further configured to display the first N−1 identifications in the friend recommendation list at the first N−1 preset positions and display the remaining identifications behind the first N−1 identifications when a last preset position is triggered, in response to that the number of the identifications in the friend recommendation list is larger than the number of the preset positions.
  • In some embodiments, the identifications in the friend recommendation list are arranged according to a duration from a time when a shared video is shared to a current time, from the shortest to the longest.
  • In some embodiments, the recommending module 402 is configured to receive a friend screening condition; the recommending module 402 is further configured to adjust the identifications in the friend recommendation list to enable the identifications in the friend recommendation list to satisfy the friend screening condition.
  • In some embodiments, the reminding mark is used for linking to a page corresponding to the target sharing object.
  • In some embodiments, the apparatus further includes an acquiring module.
  • The acquiring module 406 is configured to, before the target sharing object is determined in response to the selection instruction for the plurality of sharing objects from the video editing user, acquire the selection instruction for the plurality of sharing objects from the video editing user; the selection instruction for the plurality of sharing objects includes a preset input instruction of inputting an identification for indicating a sharing object, or, the selection instruction for the plurality of sharing objects includes a selection operation for an identification displayed at the friend recommendation position.
  • In some embodiments, the apparatus 40 further includes a publishing module 407.
  • The publishing module 407 is configured to, after the synthesized video is generated based on the video to be synthesized and the target sharing object, publish the synthesized video on a video publishing interface in response to a video publishing instruction.
  • In some embodiments, the apparatus 40 further includes a prompting module 408.
  • The prompting module 408 is configured to send a sharing object prompting instruction to a server in response to the video publishing instruction; in which the sharing object prompting instruction is used for instructing the server to send prompting information corresponding to the synthesized video to a target terminal device; the target terminal device includes a terminal device corresponding to the target sharing object.
  • With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, which will not be elaborated here.
  • When the video synthesis apparatus is the terminal device, FIG. 5 is a possible schematic diagram illustrating the video synthesis apparatus according to the above embodiment. As illustrated in FIG. 5 , the video synthesis device 50 includes a processor 501 and a memory 502.
  • It is understood that the video synthesis device 50 illustrated in FIG. 5 may implement all of the functions of the video synthesis apparatus 40 described above. The functions of the respective modules in the video synthesis apparatus 40 described above may be implemented in the processor 501 of the video synthesis device 50. The memory module of the video synthesis apparatus 40 corresponds to the memory 502 of the video synthesis device 50.
  • The processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 501 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like. The different processing units may be independent devices or may be integrated in one or more processors.
  • The memory 502 may include one or more computer readable storage mediums, which may be non-transitory. The memory 502 may also include a high-speed random access memory, as well as a non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer readable storage medium in the memory 502 is used to store at least one instruction to be executed by the processor 501 to implement the video synthesis method provided by the method embodiments of the present disclosure.
  • In some embodiments, the video synthesis device 50 may also optionally include a peripheral interface 503 and at least one peripheral device. The processor 501, the memory 502, and the peripheral interface 503 may be connected by a bus or a signal line. Various peripheral devices may be connected to the peripheral interface 503 via the bus, the signal line, or a circuit board. Specifically, the peripheral device includes at least one of a radio frequency circuit 504, a touch screen 505, a camera 506, an audio circuit 507, a positioning component 508, and a power supply 509.
  • The peripheral interface 503 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, the memory 502, and the peripheral interface 503 are integrated on a same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on individual chips or circuit boards, which is not limited in this embodiment.
  • The radio frequency circuit 504 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 504 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 504 may communicate with other video synthesis devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to, a metropolitan area network, various generations of mobile communication networks (2G, 3G, 4G, and 5G), a wireless local area network, and/or a Wi-Fi (Wireless Fidelity) network. In some embodiments, the radio frequency circuit 504 may further include NFC (Near Field Communication) related circuits, which are not limited in the present disclosure.
  • The display screen 505 is used to display a UI (User Interface). The UI may include a graphic, a text, an icon, a video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to capture touch signals on or over the surface of the display screen 505. The touch signal may be input to the processor 501 as a control signal for processing. At this point, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 505 may be one display screen provided on the front panel of the video synthesis device 50; the display screen 505 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
  • The camera assembly 506 is used to capture images or videos. Optionally, the camera assembly 506 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the video synthesis device, and the rear camera is disposed on a rear surface of the video synthesis device. The audio circuit 507 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting the electrical signals to the processor 501 for processing, or inputting the electrical signals to the radio frequency circuit 504 for realizing voice communication. For the purpose of stereo sound capture or noise reduction, a plurality of microphones may be provided at different portions of the video synthesis device 50. The microphone may also be an array microphone or an omni-directional acquisition microphone. The speaker is used to convert the electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can be used for purposes such as converting the electrical signals into sound waves audible to a human being, or converting the electrical signals into sound waves inaudible to a human being so as to measure a distance. In some embodiments, the audio circuit 507 may also include a headphone jack.
  • The positioning component 508 is used to locate the current geographic location of the video synthesis device 50 to implement navigation or LBS (Location Based Service). The positioning component 508 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou System of China, the GLONASS System of Russia, or the Galileo System of the European Union.
  • A power supply 509 is used to power various components in the video synthesis device 50. The power supply 509 may be an alternating current power source, a direct current power source, a disposable battery or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support a quick charging technology.
  • In some embodiments, the video synthesis device 50 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: an acceleration sensor, a gyroscope sensor, a pressure sensor, a fingerprint sensor, an optical sensor, and a proximity sensor.
  • The acceleration sensor may detect a magnitude of acceleration in three coordinate axes of a coordinate system established by the video synthesis device 50. The gyroscope sensor may detect the body orientation and rotation angle of the video synthesis device 50, and may cooperate with the acceleration sensor to capture a 3D action of the user on the video synthesis device 50. The pressure sensors may be disposed on a side frame of the video synthesis device 50 and/or on a lower layer of the touch screen 505. When the pressure sensor is provided on the side frame of the video synthesis device 50, the user's holding signal with respect to the video synthesis device 50 may be detected. The fingerprint sensor is used for collecting a fingerprint of the user. The optical sensor is used for collecting an intensity of ambient light. A proximity sensor, also called a distance sensor, is usually provided on the front panel of the video synthesis device 50. The proximity sensor is used to determine a distance between the user and the front of the video synthesis device 50.
  • The present disclosure also provides a computer-readable storage medium, where instructions are stored, and when the instructions in the storage medium are executed by a processor of the video synthesis device, the video synthesis device is enabled to execute the video synthesis method provided in Embodiment 1 or Embodiment 2 of the present disclosure.
  • Embodiments of the present disclosure further provide a computer program product including instructions, which when run on the video synthesis apparatus, cause the video synthesis apparatus to perform the video synthesis method provided in Embodiment 1 or Embodiment 2 of the present disclosure.
  • The logic and/or steps described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function, may be embodied in any computer readable medium to be used by an instruction execution system, device or equipment (such as a system based on computers, a system comprising processors, or other systems capable of obtaining the instruction from the instruction execution system, device or equipment and executing the instruction), or to be used in combination with the instruction execution system, device or equipment. In this specification, “the computer readable medium” may be any device adapted for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment. More specific examples of the computer readable medium include but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device, and a portable compact disk read-only memory (CD-ROM). In addition, the computer readable medium may even be a paper or other appropriate medium on which the programs can be printed, because the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to obtain the programs electronically, and the programs may then be stored in the computer memories.
  • It should be understood that each part of the present disclosure may be realized by the hardware, software, firmware or their combination. In the above embodiments, a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instruction execution system. For example, if it is realized by the hardware, likewise in another embodiment, the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA), a field programmable gate array (FPGA), etc.
  • Those skilled in the art shall understand that all or parts of the steps in the above exemplifying method of the present disclosure may be achieved by commanding the related hardware with programs. The programs may be stored in a computer readable storage medium, and the programs comprise one or a combination of the steps in the method embodiments of the present disclosure when run on a computer.
  • In addition, each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may be separate physical existence, or two or more cells are integrated in a processing module. The integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
  • The storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.
  • It should be understood that the apparatus disclosed in the several embodiments provided by the present disclosure can be realized in other manners. For example, the apparatus embodiments described above are merely exemplary; for instance, the units are divided merely according to logic functions, and in practical implementation, the units can be divided in other manners, for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection described or discussed may be via some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or of other forms.
  • The units illustrated as separate components can be or not be separated physically, and components described as units can be or not be physical units, i.e., can be located at one place, or can be distributed onto multiple network units. It is possible to select some or all of the units according to actual needs, for realizing the objective of embodiments of the present disclosure.
  • Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure following, in general, the principles of the present disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the present disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the present disclosure being indicated by the following claims.
  • It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and illustrated in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

What is claimed is:
1. A method for synthesizing a video, comprising:
displaying a video on a video editing interface for editing content of video data;
acquiring a friend recommendation list in response to a video sharing instruction from a user, wherein the friend recommendation list is used for indicating a plurality of sharing objects;
displaying a preset video editing item at a preset position on the video editing interface, wherein the preset video editing item is used for editing the video when selected;
displaying the friend recommendation list at the preset position on the video editing interface by replacing the preset video editing item, in response to the video sharing instruction; and
generating a synthesized video based on the video and a target sharing object selected from the plurality of sharing objects, wherein the synthesized video comprises a reminding mark for indicating the target sharing object displayed on a video picture of the synthesized video.
2. The method of claim 1, wherein said displaying the friend recommendation list comprises:
displaying a sticker panel in response to a selection operation for a sticker control on the video editing interface, wherein the sticker panel comprises a friend sticker control; and
displaying a friend sticker on the video editing interface and displaying the friend recommendation list at the preset position on the video editing interface, in response to a sticker selection operation for the friend sticker control, wherein the friend sticker contains the video sharing instruction.
3. The method of claim 2, further comprising:
editing the friend sticker in response to an editing instruction for the friend sticker; wherein the editing instruction is used to change a style of the friend sticker, or rotate the friend sticker, or zoom in or out the friend sticker or drag the friend sticker.
4. The method of claim 1, wherein said displaying the friend recommendation list comprises:
displaying a text input box on the video editing interface in response to a text input operation on the video editing interface;
displaying the friend recommendation list at the preset position on the video editing interface in response to the video sharing instruction input in the text input box.
5. The method of claim 2, wherein said displaying the friend recommendation list comprises:
displaying all identifications in the friend recommendation list at the preset positions in response to that a number of the identifications is less than or equal to a number N of the preset positions, in which N is a positive integer greater than or equal to 1, wherein the identification indicates a user with whom a video is shared within a preset time period; and
displaying first N−1 identifications in the friend recommendation list at first N−1 preset positions and displaying remaining identifications behind the first N−1 identifications when a last preset position is triggered, in response to that the number of the identifications is larger than the number of the preset positions.
6. The method of claim 5, wherein the identifications are arranged according to a duration from a time when a shared video is shared to a current time, from the shortest to the longest.
7. The method of claim 1, wherein said displaying the friend recommendation list comprises:
receiving a friend screening condition; and
adjusting identifications in the friend recommendation list to enable the identifications to satisfy the friend screening condition, wherein the identification indicates a user with whom a video is shared within a preset time period.
8. The method of claim 1, wherein the reminding mark is used for linking to a page corresponding to the target sharing object.
9. The method of claim 1, further comprising:
acquiring the selection instruction for the plurality of sharing objects;
in which the selection instruction comprises a preset input instruction of inputting an identification for indicating a sharing object, or, the selection instruction comprises a selection operation for an identification contained in the friend recommendation list.
10. The method of claim 1, further comprising:
publishing the synthesized video on a video publishing interface in response to a video publishing instruction.
11. The method of claim 10, further comprising:
sending a sharing object prompting instruction to a server in response to the video publishing instruction; in which the sharing object prompting instruction is used for instructing the server to send prompting information corresponding to the synthesized video to a target terminal device; the target terminal device comprises a terminal device corresponding to the target sharing object.
12. The method of claim 1, further comprising:
displaying prompting information that the target sharing object is able to forward the synthesized video.
13. An apparatus for synthesizing a video, comprising:
a processor,
a memory, configured to store instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the video synthesis method comprising:
displaying a video on a video editing interface for editing content of video data;
acquiring a friend recommendation list in response to a video sharing instruction from a user, wherein the friend recommendation list is used for indicating a plurality of sharing objects;
displaying a preset video editing item at a preset position on the video editing interface, wherein the preset video editing item is used for editing the video when selected;
displaying the friend recommendation list at the preset position on the video editing interface by replacing the preset video editing item, in response to the video sharing instruction; and
generating a synthesized video based on the video and a target sharing object selected from the plurality of sharing objects, wherein the synthesized video comprises a reminding mark for indicating the target sharing object displayed on a video picture of the synthesized video.
14. The apparatus of claim 13, wherein said displaying the friend recommendation list comprises:
displaying a sticker panel in response to a selection operation for a sticker control on the video editing interface, wherein the sticker panel comprises a friend sticker control; and
displaying a friend sticker on the video editing interface and displaying the friend recommendation list at the preset position on the video editing interface, in response to a sticker selection operation for the friend sticker control, wherein the friend sticker contains the video sharing instruction.
15. The apparatus of claim 14, wherein the method further comprises:
editing the friend sticker in response to an editing instruction for the friend sticker; wherein the editing instruction is used to change a style of the friend sticker, or rotate the friend sticker, or zoom in or out the friend sticker or drag the friend sticker.
16. The apparatus of claim 13, wherein said displaying the friend recommendation list comprises:
displaying a text input box on the video editing interface in response to a text input operation on the video editing interface; and
displaying the friend recommendation list at the preset position on the video editing interface in response to the video sharing instruction input in the text input box.
17. The apparatus of claim 14, wherein said displaying the friend recommendation list comprises:
displaying all identifications in the friend recommendation list at the preset positions in response to that a number of the identifications is less than or equal to a number N of the preset positions, in which N is a positive integer greater than or equal to 1, wherein the identification indicates a user with whom a video is shared within a preset time period; and
displaying first N−1 identifications in the friend recommendation list at first N−1 preset positions and displaying remaining identifications behind the first N−1 identifications when a last preset position is triggered, in response to that the number of the identifications is larger than the number of the preset positions.
18. The apparatus of claim 13, wherein said displaying the friend recommendation list comprises:
receiving a friend screening condition; and
adjusting identifications in the friend recommendation list to enable the identifications to satisfy the friend screening condition, wherein the identification indicates a user with whom a video is shared within a preset time period.
19. The apparatus of claim 13, wherein the processor is further configured to perform:
displaying prompting information that the target sharing object is able to forward the synthesized video.
20. A non-transitory computer-readable storage medium, comprising instructions that, when executed by a processor of a video synthesis apparatus, cause the apparatus to perform the video synthesis method comprising:
displaying a video on a video editing interface for editing content of video data;
acquiring a friend recommendation list in response to a video sharing instruction from a user, wherein the friend recommendation list is used for indicating a plurality of sharing objects;
displaying a preset video editing item at a preset position on the video editing interface, wherein the preset video editing item is used for editing the video when selected;
displaying the friend recommendation list at the preset position on the video editing interface by replacing the preset video editing item, in response to the video sharing instruction; and
generating a synthesized video based on the video and a target sharing object selected from the plurality of sharing objects, wherein the synthesized video comprises a reminding mark for indicating the target sharing object displayed on a video picture of the synthesized video.
US17/969,174 2019-11-28 2022-10-19 Method and apparatus for synthesizing video Abandoned US20230038810A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/969,174 US20230038810A1 (en) 2019-11-28 2022-10-19 Method and apparatus for synthesizing video

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201911194514.4A CN110868639B (en) 2019-11-28 2019-11-28 Video synthesis method and device
CN201911194514.4 2019-11-28
US17/030,959 US11509973B2 (en) 2019-11-28 2020-09-24 Method and apparatus for synthesizing video
US17/969,174 US20230038810A1 (en) 2019-11-28 2022-10-19 Method and apparatus for synthesizing video

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/030,959 Continuation US11509973B2 (en) 2019-11-28 2020-09-24 Method and apparatus for synthesizing video

Publications (1)

Publication Number Publication Date
US20230038810A1 true US20230038810A1 (en) 2023-02-09

Family

ID=69657244

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/030,959 Active 2040-10-18 US11509973B2 (en) 2019-11-28 2020-09-24 Method and apparatus for synthesizing video
US17/969,174 Abandoned US20230038810A1 (en) 2019-11-28 2022-10-19 Method and apparatus for synthesizing video

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/030,959 Active 2040-10-18 US11509973B2 (en) 2019-11-28 2020-09-24 Method and apparatus for synthesizing video

Country Status (2)

Country Link
US (2) US11509973B2 (en)
CN (1) CN110868639B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11297224B2 (en) * 2019-09-30 2022-04-05 Snap Inc. Automated eyewear device sharing system
CN111339557A (en) * 2020-02-20 2020-06-26 北京字节跳动网络技术有限公司 Online document display method, device, equipment and medium
CN111447239B (en) * 2020-04-13 2023-07-04 抖音视界有限公司 Video stream playing control method, device and storage medium
CN112035687B (en) * 2020-08-28 2022-06-14 北京字节跳动网络技术有限公司 Method and device for issuing multimedia content, electronic equipment and storage medium
CN112040330B (en) * 2020-09-09 2021-12-07 北京字跳网络技术有限公司 Video file processing method and device, electronic equipment and computer storage medium
CN112153288B (en) * 2020-09-25 2023-10-13 北京字跳网络技术有限公司 Method, apparatus, device and medium for distributing video or image
CN113038236A (en) * 2021-03-17 2021-06-25 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN113365133B (en) * 2021-06-02 2022-10-18 北京字跳网络技术有限公司 Video sharing method, device, equipment and medium
CN113420172B (en) * 2021-07-16 2024-08-02 北京达佳互联信息技术有限公司 Picture sharing method and device, computer equipment and medium
CN113727024B (en) * 2021-08-30 2023-07-25 北京达佳互联信息技术有限公司 Method, device, electronic equipment and storage medium for generating multimedia information
CN113741757B (en) * 2021-09-16 2023-10-17 北京字跳网络技术有限公司 Method and device for displaying reminding information, electronic equipment and storage medium
CN114095776B (en) * 2021-10-18 2022-10-21 荣耀终端有限公司 Screen recording method and electronic equipment
CN113946254B (en) * 2021-11-01 2023-10-20 北京字跳网络技术有限公司 Content display method, device, equipment and medium
CN116149516A (en) * 2021-11-17 2023-05-23 北京字节跳动网络技术有限公司 Data processing method, device, electronic equipment and storage medium
CN116170549A (en) * 2021-11-25 2023-05-26 北京字跳网络技术有限公司 Video processing method and device
USD1031747S1 (en) * 2022-05-09 2024-06-18 Capital One Services, Llc Display screen with an animated graphical user interface
CN114979054B (en) * 2022-05-13 2024-06-18 维沃移动通信有限公司 Video generation method, device, electronic equipment and readable storage medium
CN115314754A (en) * 2022-06-17 2022-11-08 网易(杭州)网络有限公司 Display control method and device of interactive control and electronic equipment
CN115941841A (en) * 2022-12-06 2023-04-07 北京字跳网络技术有限公司 Associated information display method, device, equipment, storage medium and program product
CN117079169B (en) * 2023-10-18 2023-12-22 一站发展(北京)云计算科技有限公司 Map scene adaptation method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190205015A1 (en) * 2017-12-29 2019-07-04 Facebook, Inc. Systems and methods for generating and sharing content

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106373170A (en) * 2016-08-31 2017-02-01 北京云图微动科技有限公司 Video making method and video making device
CN106446056A (en) * 2016-09-05 2017-02-22 奇异牛科技(深圳)有限公司 System and method for defining tags based on pictures of mobile terminal
CN108289057B (en) * 2017-12-22 2020-01-21 北京达佳互联信息技术有限公司 Video editing method and device and intelligent mobile terminal
CN108667633A (en) * 2018-05-10 2018-10-16 北京达佳互联信息技术有限公司 A kind of short video sharing method and apparatus
CN108900791B (en) * 2018-07-19 2019-10-18 北京微播视界科技有限公司 A kind of video distribution method, apparatus, equipment and storage medium
CN109547841B (en) * 2018-12-20 2020-02-07 北京微播视界科技有限公司 Short video data processing method and device and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190205015A1 (en) * 2017-12-29 2019-07-04 Facebook, Inc. Systems and methods for generating and sharing content

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mehvish (19 December 2018) "A guide to Instagram Story Stickers with Their Meaning". https://www.guidingtech.com/instagram-story-stickers-meaning (Year: 2018) *

Also Published As

Publication number Publication date
CN110868639B (en) 2021-03-12
US20210168473A1 (en) 2021-06-03
US11509973B2 (en) 2022-11-22
CN110868639A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
US11509973B2 (en) Method and apparatus for synthesizing video
US11363327B2 (en) Method for displaying virtual item, terminal and storage medium
CN111447074B (en) Reminding method, device, equipment and medium in group session
CN111476911A (en) Virtual image implementation method and device, storage medium and terminal equipment
CN110061900B (en) Message display method, device, terminal and computer readable storage medium
CN112947823A (en) Session processing method, device, equipment and storage medium
CN109451343A (en) Video sharing method, apparatus, terminal and storage medium
CN108270794B (en) Content distribution method, device and readable medium
US20170123758A1 (en) Voice control device, voice control method and program
CN112764608B (en) Message processing method, device, equipment and storage medium
CN110209316B (en) Category label display method, device, terminal and storage medium
CN110109608B (en) Text display method, text display device, text display terminal and storage medium
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN110334352A (en) Guidance information display methods, device, terminal and storage medium
CN110798327B (en) Message processing method, device and storage medium
CN110147503B (en) Information issuing method and device, computer equipment and storage medium
WO2022028241A1 (en) Preview cover generation method and electronic device
CN111539795A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110929159B (en) Resource release method, device, equipment and medium
CN113190307A (en) Control adding method, device, equipment and storage medium
CN114302160A (en) Information display method, information display device, computer equipment and medium
CN111953852B (en) Call record generation method, device, terminal and storage medium
CN112100437A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113727024B (en) Method, device, electronic equipment and storage medium for generating multimedia information
CN114466100B (en) Method, device and system for adapting accessory theme

Legal Events

Date Code Title Description
AS Assignment

Owner name: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, QIAN;REEL/FRAME:061470/0692

Effective date: 20200901

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION