WO2023061414A1 - File generation method and apparatus, and electronic device - Google Patents

File generation method and apparatus, and electronic device

Info

Publication number
WO2023061414A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
video
input
file
thumbnail
Prior art date
Application number
PCT/CN2022/124926
Other languages
English (en)
Chinese (zh)
Inventor
方泽沺
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2023061414A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/16File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G06F16/168Details of user interfaces specifically adapted to file systems, e.g. browsing and visualisation, 2d or 3d GUIs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/176Support for shared access to files; File sharing support

Definitions

  • the present application belongs to the technical field of video processing, and in particular relates to a video-based file generation method, device and electronic equipment.
  • the purpose of the embodiments of the present application is to provide a file generation method, device, and electronic device, which can solve the problem in the related art that intercepting part of a video, or an image from a video, involves cumbersome operations.
  • the embodiment of the present application provides a method for generating a file, wherein the method includes:
  • receiving a user's first input on a target thumbnail among at least two video image thumbnails, the at least two video image thumbnails being thumbnails of at least two video image frames in a first video; and, in response to the first input, outputting a target file, where the target file includes a target video frame, and the target video frame is a video image frame corresponding to the target thumbnail.
  • the embodiment of the present application provides a file generating device, wherein the device includes:
  • the first receiving module is configured to receive the user's first input on a target thumbnail among at least two video image thumbnails, where the at least two video image thumbnails are thumbnails of at least two video image frames in the first video;
  • the first output module is configured to output a target file in response to the first input, the target file includes a target video frame, and the target video frame is a video image frame corresponding to the target thumbnail.
  • an embodiment of the present application provides an electronic device, the electronic device includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor, and when the program or instruction is executed by the processor, the steps of the method described in the first aspect are implemented.
  • an embodiment of the present application provides a readable storage medium, on which a program or an instruction is stored, and when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • the embodiment of the present application provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run programs or instructions, so as to implement the method described in the first aspect.
  • an embodiment of the present application provides a computer program product, the program product is stored in a non-volatile storage medium, and the program product is executed by at least one processor to implement the steps of the method described in the first aspect.
  • the embodiment of the present application provides a device for generating a file, and the device is configured to execute the method as described in the first aspect.
  • the first input from the user on a target thumbnail among at least two video image thumbnails is received, where the at least two video image thumbnails are thumbnails of at least two video image frames in the first video;
  • in response to the first input, a target file is output, where the target file includes a target video frame, and the target video frame is a video image frame corresponding to the target thumbnail.
  • the user can determine and output the target file including the video image frame corresponding to the target thumbnail by performing the first input on the target thumbnail among the at least two video image thumbnails corresponding to the first video. This makes it convenient to intercept video clips or images from the first video, thus solving the problem in the related art that intercepting part of a video, or an image from a video, involves cumbersome operations.
  • FIG. 1 is a flow chart of the steps of the file generation method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the chat interface of the social software in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a first interface for selecting a first video in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the interface after the first video is directly shared in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an interface for sharing the first video currently shot in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a second interface for selecting a first video in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a third interface for selecting a first video in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the interface display of the selected first video in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an interface display for determining a target contact in an embodiment of the present application.
  • FIG. 10 is a first display schematic diagram of the video editing interface in an embodiment of the present application.
  • FIG. 11 is a second display schematic diagram of the video editing interface in an embodiment of the present application.
  • FIG. 12 is a third display schematic diagram of the video editing interface in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the first interface of the object to be shared in an embodiment of the present application.
  • FIG. 14 is a fourth display schematic diagram of the video editing interface in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the second interface of the object to be shared in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the animation production interface in an embodiment of the present application.
  • FIG. 17 is a schematic diagram of the animation browsing interface in an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a static picture wall in an embodiment of the present application.
  • FIG. 19 is a schematic diagram of the combination effect of the target video and the target image in an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a file generation device provided by an embodiment of the present application.
  • FIG. 21 is a structural block diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 1 shows a flowchart of steps of a method for generating a file provided by an embodiment of the present application, wherein the method may include steps 100-200.
  • the method is applied to a main device, which is an electronic device. The electronic device may be a mobile electronic device such as a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), or a non-mobile electronic device such as a personal computer (PC), television (TV), teller machine, or self-service machine, as long as the electronic device can share and transmit files.
  • Step 100 Receive a user's first input on a target thumbnail among at least two video image thumbnails, the at least two video image thumbnails being thumbnails of at least two video image frames in a first video.
  • the first video is a video to be shared, which can be a video saved in a photo album in advance, or a video recorded in real time, for example, a video obtained by starting a camera of an electronic device on a social chat interface for on-site shooting;
  • the first input is an operation of selecting a target thumbnail from the thumbnails corresponding to at least two video image frames included in the first video. Specifically, it may be the user's click input on the target thumbnail, a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to actual usage requirements, which is not limited in this embodiment of the present application.
  • the specific gesture in the embodiment of the present application can be any one of a single-click gesture, a sliding gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture;
  • the click input in this embodiment may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.
  • Step 200 In response to the first input, output a target file, the target file includes a target video frame, and the target video frame is a video image frame corresponding to the target thumbnail.
  • that is, the first input is the selection operation on the target thumbnail.
  • the file generation method receives the user's first input on a target thumbnail among at least two video image thumbnails, where the at least two video image thumbnails are thumbnails of at least two video image frames in the first video; in response to the first input, a target file is output, where the target file includes a target video frame, and the target video frame is a video image frame corresponding to the target thumbnail.
  • the user can determine and output the target file including the video image frame corresponding to the target thumbnail by performing the first input on the target thumbnail among the at least two video image thumbnails corresponding to the first video. This makes it convenient to intercept video clips or images from the first video, thus solving the problem in the related art that intercepting part of a video, or an image from a video, involves cumbersome operations.
  • the method provided in the embodiment of the present application intercepts the target file from the first video that needs to be shared with a target contact, where the target contact may be a contact in the current chat interface of the social software, or a contact confirmed by opening the social software after the target file is generated.
  • FIG. 2 shows a schematic diagram of a chat interface of social software.
  • FIG. 6 shows a schematic diagram of an interface in which the user selects the first video to share through the album interface.
  • the file generation method provided in the embodiment of the present application further includes step 101 before the above step 100:
  • Step 101 Display at least two video image thumbnails of a first video, where the number of the at least two video image thumbnails is smaller than the number of video image frames of the first video.
  • at least two video image thumbnails of the first video are displayed first, so that the user can select the desired target thumbnail, thereby determining the corresponding video image frame and then generating the corresponding target file.
  • step 101 includes steps 1011 to 1012.
  • Step 1011 Determine the average interval duration according to the duration of the first video and the number of display frames.
  • the display frame number is the number of video image thumbnails that need to be displayed for the user to select the target file from.
  • the display frame number can be a fixed value, or can be dynamically adjusted according to the length of the first video.
  • the interval between video image frames corresponding to the video image thumbnails to be displayed can be determined, that is, the above average interval duration.
  • Step 1012 Determine the first video image frames to be displayed from the video image frames included in the first video in sequence according to the average interval duration, and display the thumbnails of each of the first video image frames.
  • that is, at every interval of the above-mentioned average interval duration, the video image frame corresponding to the corresponding video time point is selected as a video image frame to be displayed to the user, and a thumbnail of that video image frame is then generated and displayed.
  • the above specific implementation manner can cover and display the thumbnails of the video image frames included in the first video in a wide range, so that it is convenient for the user to select the desired video image frame, and then accurately intercept the desired target file.
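As one illustration of steps 1011 and 1012, the evenly spaced sampling can be sketched as follows (a minimal sketch; the function name `select_preview_frames` and its signature are assumptions, not part of the patent):

```python
def select_preview_frames(total_frames: int, display_count: int) -> list[int]:
    """Pick `display_count` evenly spaced frame indices from a video.

    Step 1011: derive the average interval from the video length and
    the number of thumbnails to display.
    Step 1012: sample one frame at the start of each interval.
    """
    if display_count <= 0 or total_frames <= 0:
        return []
    # Never display more thumbnails than there are frames.
    display_count = min(display_count, total_frames)
    interval = total_frames / display_count   # average interval (step 1011)
    return [int(i * interval) for i in range(display_count)]

# For a 300-frame video with 6 thumbnails, frames 0, 50, ..., 250 are shown.
print(select_preview_frames(300, 6))  # [0, 50, 100, 150, 200, 250]
```

A real implementation would work in timestamps rather than frame indices, but the proportional spacing is the same.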
  • the time interval between at least two video image frames corresponding to the at least two video image thumbnails may also be different, that is, the time interval between two adjacent video image frames may differ. The at least two video image frames may be obtained based on key information: for example, by identifying whether a video image frame contains a specific object (such as a human face), the thumbnail of the video image frame containing the specific object can be displayed.
  • the first input includes a first sub-input and a second sub-input; the above step 200 includes steps 201 - 202 .
  • both the first sub-input and the second sub-input are operations for selecting a target thumbnail from the thumbnails, which may specifically be the user's click input on the target thumbnail, a voice command input by the user, or a specific gesture input by the user, and may be determined according to actual usage requirements, which is not limited in this embodiment of the present application.
  • Step 201 when receiving the first sub-input from the user on the first target thumbnail in the at least two video image thumbnails and the second sub-input on the second target thumbnail, output A target video, where the target video includes a video segment between the video frame corresponding to the first target thumbnail and the video frame corresponding to the second target thumbnail.
  • the first sub-input acts on the first target thumbnail and the second sub-input acts on the second target thumbnail; that is, the first selection operation selects the first target thumbnail and the second selection operation selects the second target thumbnail, indicating that the content between the video image frame corresponding to the first target thumbnail (the first video frame) and the video image frame corresponding to the second target thumbnail (the second video frame) needs to be selected. The video segment between the first video frame and the second video frame is therefore determined as the target video, and the target video is output.
  • Step 202 When receiving the first sub-input and the second sub-input of the user on a third target thumbnail among the at least two video image thumbnails, output a target image, where the target image is the video image frame corresponding to the third target thumbnail.
  • clicking the first target thumbnail marks the selection start, and clicking the second target thumbnail marks the selection end. If both inputs fall on the same thumbnail, the video image frame corresponding to that thumbnail is selected as the target file; otherwise, the video content between the video frame corresponding to the first target thumbnail and the video frame corresponding to the second target thumbnail is selected as the target file.
  • the last selection operation can be canceled by gesture operations such as sliding up or down, or by clicking the corresponding thumbnail again, so that the user can correct the selection.
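The two-tap resolution in steps 201 and 202 can be sketched as follows (a hypothetical illustration; `resolve_selection` and the list-of-frames representation are assumptions):

```python
def resolve_selection(first_idx: int, second_idx: int, frames: list) -> list:
    """Resolve two thumbnail taps into a target file.

    If both taps land on the same thumbnail, the single corresponding
    frame becomes the target image (step 202); otherwise the segment
    between the two frames, inclusive, becomes the target video (step 201).
    """
    if first_idx == second_idx:
        return [frames[first_idx]]          # target image
    lo, hi = sorted((first_idx, second_idx))
    return frames[lo:hi + 1]                # target video clip

frames = ["f0", "f1", "f2", "f3", "f4"]
print(resolve_selection(1, 3, frames))  # ['f1', 'f2', 'f3'] -> video clip
print(resolve_selection(2, 2, frames))  # ['f2'] -> single image
```

Sorting the two indices also covers the case where the user taps the later thumbnail first.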
  • the method provided in the embodiment of the present application further includes step 102 after the above step 100 .
  • Step 102 Generate first history editing information in response to the first input.
  • the first historical editing information records the target thumbnail corresponding to the target file, so it can be used to indicate that the corresponding target file is generated according to the first video; this makes it convenient for the user to quickly intercept the corresponding target file from the first video according to the historical editing information.
  • the above-mentioned first historical editing information may also include text information input by the user in the editing area, that is, the above-mentioned text information may be used as a note name of the target file.
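A historical editing record of this kind might be modeled as follows (a hypothetical sketch; the `EditRecord` name and fields are assumptions). Note that it stores only operation metadata needed to regenerate the target file, never the extracted frames themselves:

```python
from dataclasses import dataclass, field
import time

@dataclass
class EditRecord:
    """One piece of historical editing information for a source video."""
    video_id: str
    start_frame: int
    end_frame: int          # equal to start_frame for a single image
    note: str = ""          # optional remark name typed by the user
    created_at: float = field(default_factory=time.time)

    @property
    def is_image(self) -> bool:
        # A zero-length selection denotes a single captured frame.
        return self.start_frame == self.end_frame

rec = EditRecord("vid_001", 120, 120, note="sunset still")
print(rec.is_image)  # True
```

Replaying such a record against the source video reproduces the target file without storing a second copy of its content.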
  • the method provided in the embodiment of the present application further includes step 301 to step 302 after the above step 200 .
  • Step 301 Receive a fifth input from a user.
  • the fifth input is an operation for sharing the output target file, which may be the user's click input on the target file, a voice command input by the user, or a specific gesture input by the user, and may be determined according to actual usage requirements, which is not limited in this embodiment of the present application.
  • Step 302 Send the target file to a second target contact in response to the fifth input.
  • the above-mentioned second target contact may be a contact in the current chat interface of the social software; for example, if the window for outputting the above-mentioned target file is triggered through the chat interface with Xiao Ming, then Xiao Ming is the second target contact;
  • the above-mentioned second target contact may also be the contact that is determined by opening the social software after opening the first video and entering the window for outputting the above-mentioned target file.
  • after the target file is output, the target file can be quickly sent to the second target contact by performing the fifth input on the target file, so as to achieve the effect of quickly intercepting the target file in the first video for sharing.
  • the file generation method provided in the embodiment of the present application further includes steps 401 to 403 .
  • Step 401 Display a historical editing record window of the first video, where the historical editing record window includes at least one piece of historical editing information, and the historical editing information is used to indicate that a corresponding target file is generated according to the first video.
  • the historical editing information is a record of the user previously selecting video image thumbnails from the first video to generate a corresponding target file.
  • the historical editing information does not additionally store the extracted content; it only records the operation information, which is convenient for later sharing.
  • Step 402 Receive a third input from the user on the first target editing information in the at least one piece of historical editing information.
  • the third input is an operation of selecting the required first target editing information from the at least one piece of historical editing information and sharing the first target file corresponding to the first target editing information. It may be the user's click input on the first target editing information, a voice command input by the user, or a specific gesture input by the user, and may be determined according to actual usage requirements, which is not limited in this embodiment of the present application.
  • Step 403 In response to the third input, send the first target file corresponding to the first target editing information to the first target contact.
  • in this step, when the above-mentioned third input is received, it means that the user wishes to share the first target file generated using the first target editing information; thus, the corresponding first target file is generated based on the first target editing information, and the first target file is sent to the first target contact.
  • the above-mentioned first target contact may be a contact in the current chat interface of the social software; for example, if the above-mentioned historical editing record window of the first video is entered through the chat interface with Xiao Ming, then Xiao Ming is the first target contact.
  • the above-mentioned first target contact may also be a contact determined by opening social software after opening the first video and entering the above-mentioned historical editing record window to select the first target editing information.
  • by displaying a historical editing record window including at least one piece of historical editing information used to indicate the generation of the corresponding target file according to the first video, it is convenient for the user to quickly intercept the corresponding target file from the first video according to the historical editing information.
  • the method provided in the embodiment of the present application further includes step 404 to step 405 .
  • Step 404 receiving a fourth input from the user on the second target editing information in the at least one piece of historical editing information.
  • the fourth input is an operation confirming that the target file corresponding to the second target editing information needs to be previewed, or that the second target editing information needs to be re-edited or deleted. The fourth input may be the user's click input on the second target editing information, a voice command input by the user, or a specific gesture input by the user, and may be determined according to actual usage requirements, which is not limited in this embodiment of the present application.
  • the fourth input may be a click operation on the preview control, edit control or delete control corresponding to the second target edit information in the historical edit record window.
  • Step 405 in response to the fourth input, execute a first editing process, the first editing process includes at least one of the following: displaying the file information of the second target file corresponding to the second target editing information; deleting the The second target edit information; update the second target edit information.
  • that is, the first editing process corresponding to the fourth input is executed, which is at least one of: displaying the file information of the second target file corresponding to the second target editing information, deleting the second target editing information, and updating the second target editing information.
  • the above-mentioned file information includes the remark name of the second target file, the thumbnail of the corresponding video image in the first video, attribute information indicating whether it is a video or a picture, and the like.
  • the video editing interface 34 includes a historical editing record window 341 of the first video, a preview interface 342, a remark naming column 343, and a video single-frame image interface 344. A plurality of pieces of historical editing information 3411 are displayed in the historical editing record window 341; the user can directly share a piece of historical editing information by checking its option, and can also preview, re-edit, or delete the historical editing information. The preview interface 342 can carry out preview playback of the target file.
  • by inputting text into the remark naming column 343 in FIG. 10 and using that text as the remark corresponding to this editing operation, the user can avoid accumulating too much historical editing information that is hard to distinguish;
  • video single-frame image interface 344 displays several thumbnails of the first video at intervals, and the interval between the thumbnails of two frames of video is proportional to the duration of the first video.
  • the selected target file can be set as the object to be shared by clicking the first preset control in the video editing interface or by performing the first preset gesture; the corresponding historical editing information is then generated to record the above selection operation information, and it is automatically checked by default.
  • the first preset control is a control for confirming that the selected target file content is used as the object to be shared, and the first preset gesture is a gesture for confirming that the selected target file is used as the object to be shared.
  • the second target file corresponding to each piece of second target editing information can be previewed, or the second target editing information can be re-edited or deleted.
  • At least one of displaying the file information of the second target file, deleting the second target editing information, and updating the second target editing information can be performed, so as to better meet the actual needs of the user.
  • the file generation method provided in the embodiment of the present application further includes steps 501 to 502.
  • Step 501 Receive a user's second input on a fourth target thumbnail among the at least two video image thumbnails.
  • the time interval between the video image frames whose thumbnails are displayed is proportional to the video duration; that is, the interval between two video image frames grows as the video duration grows, so the time span between two adjacent video image frames can easily become too large for the user to quickly locate the desired video position.
  • that is, the second input is an operation determining that the thumbnails between the fourth target thumbnail and the video image thumbnail adjacent to the fourth target thumbnail need to be expanded.
  • the above-mentioned second input can be the user's click input on the fourth target thumbnail, a voice command input by the user, or a specific gesture input by the user, and may be determined according to actual usage requirements, which is not limited in this embodiment of the present application.
  • the above-mentioned second input may specifically be a long-press operation on any thumbnail among the thumbnails of the displayed video images, that is, a long-press operation on the fourth target thumbnail.
  • Step 502 in response to the second input, display video image thumbnails between the fourth target thumbnail and the fifth target thumbnail.
  • the fifth target thumbnail is a video image thumbnail adjacent to the fourth target thumbnail before receiving the second input.
  • when the user's second input on the fourth target thumbnail is received, it means the user needs to display the other video image frames between the video image frame corresponding to the fourth target thumbnail and the video image frame corresponding to the fifth target thumbnail currently adjacent to it; thus, video image thumbnails between the fourth target thumbnail and the fifth target thumbnail are displayed. The fifth target thumbnail may be the video image thumbnail before or after the fourth target thumbnail. The expanded video image thumbnails may be the thumbnails of all video frames between the video image frame corresponding to the fourth target thumbnail and the video image frame corresponding to the fifth target thumbnail, or thumbnails of only some of those video frames.
  • step 503 is also included:
  • Step 503. In the case of receiving the seventh input of the fourth target thumbnail, hide the video image thumbnail between the fourth target thumbnail and the fifth target thumbnail.
  • the seventh input is an operation to cancel displaying the video image thumbnails between the fourth target thumbnail and the fifth target thumbnail. The seventh input can be the user's click input on the fourth target thumbnail, a voice command input by the user, or a specific gesture input by the user, and may be determined according to actual usage requirements, which is not limited in this embodiment of the present application.
  • the above-mentioned seventh input may specifically be another long-press operation on any thumbnail among the displayed video image thumbnails, that is, another long-press operation on the fourth target thumbnail, or a long-press operation on the fifth target thumbnail.
  • when the image or video segment desired by the user lies between the thumbnails of two displayed video image frames, performing the second input on the fourth target thumbnail triggers the display of the other video image thumbnails between the fourth target thumbnail and its adjacent video image thumbnail, which is convenient for users to select the desired image or video clip.
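The expand/hide behavior of steps 502 and 503 can be sketched as a toggle over the displayed frame indices (a hypothetical illustration; `toggle_expand` and the index-list representation are assumptions):

```python
def toggle_expand(displayed: list[int], anchor: int, neighbor: int,
                  expanded: bool) -> list[int]:
    """Show or hide the frame indices lying between two adjacent
    displayed thumbnails (anchor = fourth target thumbnail,
    neighbor = fifth target thumbnail)."""
    lo, hi = sorted((anchor, neighbor))
    between = list(range(lo + 1, hi))
    if expanded:
        # Second long-press (step 503): hide the intermediate thumbnails.
        return [i for i in displayed if i not in between]
    # First long-press (step 502): reveal the frames between the two.
    return sorted(set(displayed) | set(between))

shown = [0, 50, 100]
shown = toggle_expand(shown, 50, 100, expanded=False)   # reveal 51..99
shown = toggle_expand(shown, 50, 100, expanded=True)
print(shown)  # [0, 50, 100] -> hidden again
```

A real implementation might reveal only a sampled subset of the intermediate frames, as the description allows, rather than every frame.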
  • the method provided in the embodiment of the present application further includes step 601 to step 602 after the above step 200 .
  • Step 601. Receive a sixth input from the user on the target file.
  • the sixth input is an operation of personalized editing of the target file, which can be the user's click input on the target file, a voice command input by the user, or a specific gesture input by the user, and may be determined according to actual usage requirements, which is not limited in this embodiment of the present application.
  • Step 602. In response to the sixth input, execute a second editing process, where the second editing process includes at least one of the following:
  • in the case that the target file includes a third target file and a fourth target file, updating the location information of the third target file and the fourth target file;
  • in the case that the target file includes a fifth target file and a sixth target file, performing file synthesis on the fifth target file and the sixth target file.
  • The personalized editing process corresponding to the sixth input on the target file may be, for example: in a case that the target file includes the third target file and the fourth target file, updating the location information of the third target file and the fourth target file to realize sorting; adding animations, special effects, subtitles, filters, borders, backgrounds, sound effects, and the like to the target file; or, in a case that the target file includes the fifth target file and the sixth target file, combining the fifth target file and the sixth target file.
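A toy sketch of the two branches of the second editing process might look as follows. The function names are hypothetical, and files are modeled as single frames or lists of frames purely for illustration:

```python
def reorder(files: list, new_order: list) -> list:
    """Update location information: return the files arranged per new_order."""
    return [files[i] for i in new_order]

def synthesize(files: list) -> list:
    """File synthesis: concatenate clips into one sequence; a single image
    is treated as a one-frame clip."""
    out = []
    for f in files:
        out.extend(f if isinstance(f, list) else [f])
    return out
```

For instance, `reorder(['a', 'b', 'c'], [2, 0, 1])` moves the third file to the front, and `synthesize([['x', 'y'], 'z'])` merges a two-frame clip and an image into one sequence.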
  • the display effect of the target files is optimized, thereby further enhancing the user's sharing experience.
  • an animation production function is added to the sharing and sending interface shown in FIG. 13 to obtain the sharing and sending interface shown in FIG. 15.
  • After the user selects multiple pictures or video clips, the selected pictures and video clips can be sorted by swiping up or down; in addition, the desired video clips or pictures can be selected by long pressing, and clicking the "Animation" control triggers the above step 602 and enters the animation interface shown in FIG. 16.
  • the user can combine the selected target video and target image, and can also add static background, dynamic background, photo frame, animation special effects, background music, etc.
  • After editing is completed, the user can click the "OK" control to save the image and obtain a composite image, that is, the target object to be shared, and jump to the animation browsing interface shown in FIG. 17.
  • FIG. 17 shows the animation browsing interface, in which the user can browse the generated synthetic image 36; if not satisfied, the user can click the "Cancel" control to return to the image editing interface, and if satisfied, click the "Send" control to trigger the above fifth input and directly share the image with the target contact.
  • The synthesis process may combine multiple selected target images into a static picture wall by adding background pictures and photo frames, adjusting picture sizes, rotating pictures, and the like, as shown in FIG. 18; the order of appearance may also be further set, turning the result into a dynamic picture album by adding picture entrance effects, an ending animation, background music, and the like.
  • The synthesis process may also combine multiple selected target videos by adjusting the sequence of the videos, adding transition animations between videos, adding background music, and the like, recombining the extracted videos into one video for sharing.
  • The synthesis process may also combine multiple selected target videos and target images: animation effects may be added to the target images so that they serve as transition animations between multiple target videos or as an ending animation of the video, or a target image may be displayed at certain positions of the target video during a specific period of time to generate a synthetic image. The specific effect can be seen in FIG. 19.
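The transition-based synthesis described above can be sketched by treating each video as a list of frames and a target image's animation effect as a short transition sequence. All names here are illustrative assumptions, not the embodiment's actual implementation:

```python
def compose_with_transitions(clips, transition):
    """Join video clips into one sequence, inserting the image-derived
    transition frames between consecutive clips."""
    out = []
    for i, clip in enumerate(clips):
        if i:
            out.extend(transition)  # image animation between two videos
        out.extend(clip)
    return out
```

Appending the transition once more at the end of the loop would instead model the "ending animation" variant described above.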
  • In the file generation method provided in the embodiment of the present application, the execution subject may be an electronic device, or a file generation module in the electronic device for executing the file generation method. In the embodiment of the present application, an electronic device executing the file generation method is taken as an example to illustrate the file generation method provided in the embodiment of the present application.
  • FIG. 20 shows a schematic structural diagram of a file generating device provided by the embodiment of the present application.
  • the file generating device 200 provided by the embodiment of the present application includes:
  • the first receiving module 201 is configured to receive a user's first input on a target thumbnail among at least two video image thumbnails, where the at least two video image thumbnails are thumbnails of at least two video image frames in the first video;
  • the first output module 202 is configured to output a target file in response to the first input, the target file includes a target video frame, and the target video frame is a video image frame corresponding to the target thumbnail.
  • Through the first input on the target thumbnail among the at least two video image thumbnails corresponding to the first video, the user can determine and output the target file including the video image frame corresponding to the above-mentioned target thumbnail, so as to conveniently intercept video clips or images from the first video, thus solving the problem in the related art that intercepting part of a video or an image from a video involves cumbersome operations.
  • the first input includes a first sub-input and a second sub-input
  • the first output module 202 includes:
  • the first input unit is configured to, in a case of receiving the user's first sub-input on the first target thumbnail among the at least two video image thumbnails and the second sub-input on the second target thumbnail, output a target video, where the target video includes the video clip between the video frame corresponding to the first target thumbnail and the video frame corresponding to the second target thumbnail;
  • the second input unit is configured to output a target image in a case of receiving the user's first sub-input and second sub-input on the third target thumbnail among the at least two video image thumbnails, where the target image is the video image frame corresponding to the third target thumbnail.
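The behavior of the two output units can be sketched as follows, with a video modeled as a list of frames: selecting one thumbnail yields an image, and selecting two yields the inclusive clip between them. The function name and signature are assumptions for illustration:

```python
from typing import Optional

def extract_target(frames: list, first_idx: int, second_idx: Optional[int] = None):
    """Return a single frame (one selection) or the inclusive clip between
    two selected frame indices (two selections)."""
    if second_idx is None or second_idx == first_idx:
        return frames[first_idx]  # target image
    lo, hi = sorted((first_idx, second_idx))
    return frames[lo:hi + 1]  # target video clip
```

Sorting the two indices means the result does not depend on the order in which the user selects the two thumbnails.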
  • the device further includes:
  • the first display module is configured to display at least two video image thumbnails of the first video before the user's first input on the target thumbnail among the at least two video image thumbnails is received, where the number of the at least two video image thumbnails is smaller than the number of video image frames of the first video.
  • the thumbnails corresponding to at least two video image frames of the first video are displayed first, so that the user can select a desired target thumbnail, thereby determining the corresponding video image frame, and then generating a corresponding target file.
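One plausible way to display fewer thumbnails than there are frames is even-spaced sampling of frame indices. This is a sketch under assumptions; the embodiment does not specify the sampling strategy:

```python
def sample_thumbnail_indices(frame_count: int, thumb_count: int) -> list:
    """Pick thumb_count roughly evenly spaced frame indices from frame_count frames."""
    if thumb_count >= frame_count:
        return list(range(frame_count))  # fewer frames than requested thumbnails
    if thumb_count <= 1:
        return [0]
    step = (frame_count - 1) / (thumb_count - 1)
    return [round(i * step) for i in range(thumb_count)]
```

The first and last frames are always included, so the sampled thumbnails span the whole video.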
  • the device further includes:
  • a second receiving module configured to receive a user's second input on a fourth target thumbnail among the at least two video image thumbnails
  • a second display module configured to display video image thumbnails between the fourth target thumbnail and the fifth target thumbnail in response to the second input
  • the fifth target thumbnail is a video image thumbnail adjacent to the fourth target thumbnail before receiving the second input.
  • By performing the second input on the fourth target thumbnail, the display of thumbnails of the other video images between the fourth target thumbnail and its adjacent video image thumbnail can be triggered, which is convenient for the user to select the desired image or video clip.
  • the device further includes:
  • the third display module is configured to display a historical editing record window of the first video, where the historical editing record window includes at least one piece of historical editing information, and the historical editing information is used to indicate that a corresponding target file is generated according to the first video;
  • a third receiving module configured to receive a third input from a user on the first target editing information in the at least one piece of historical editing information
  • the first sending module is configured to send the first target file corresponding to the first target editing information to the first target contact in response to the third input.
  • By displaying a historical editing record window including at least one piece of historical editing information used to indicate generation of the corresponding target file according to the first video, it is convenient for the user to quickly intercept the corresponding target file from the first video according to the historical editing information.
  • the device further includes:
  • a fourth receiving module configured to receive a user's fourth input on the second target editing information in the at least one piece of historical editing information
  • the first editing module is configured to execute a first editing process in response to the fourth input, where the first editing process includes at least one of the following: displaying file information of the second target file corresponding to the second target editing information; deleting the second target editing information; updating the second target editing information.
  • At least one of displaying the file information of the second target file, deleting the second target editing information, and updating the second target editing information can be performed, so as to better meet the actual needs of the user.
  • the device further includes:
  • the second output module is configured to, after receiving a user's first input on target thumbnails in at least two video image thumbnails, generate first historical editing information in response to the first input.
  • the manner in which the user intercepts the corresponding target file from the first video can be stored, so that the user can quickly extract the corresponding target file from the first video according to the historical editing information.
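Historical editing information could be stored as a small record that is sufficient to re-extract the same target file later. The field names are assumptions for illustration, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class EditRecord:
    video_id: str
    start_frame: int
    end_frame: int  # equal to start_frame when the target file is an image

def replay(record: EditRecord, frames: list):
    """Re-extract the target file described by a stored editing record."""
    if record.start_frame == record.end_frame:
        return frames[record.start_frame]  # target file was an image
    return frames[record.start_frame:record.end_frame + 1]  # target file was a clip
```

Storing only the frame range, rather than the extracted file itself, keeps each record small while still letting the corresponding target file be regenerated from the first video on demand.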
  • the device further includes:
  • the fifth receiving module is used to receive the fifth input of the user after outputting the target file
  • a second sending module configured to send the target file to a second target contact in response to the fifth input.
  • After the target file is output, the target file can be quickly sent to the second target contact by performing the fifth input on the target file, so as to achieve the effect of quickly intercepting the target file from the first video for sharing.
  • the device further includes:
  • a sixth receiving module configured to receive a sixth input from the user on the target file after outputting the target file
  • the second editing module is configured to execute a second editing process in response to the sixth input, and the second editing process includes at least one of the following:
  • in a case that the target file includes a third target file and a fourth target file, updating the location information of the third target file and the fourth target file;
  • in a case that the target file includes a fifth target file and a sixth target file, performing file synthesis on the fifth target file and the sixth target file.
  • the display effect of the target files is optimized, thereby further enhancing the user's sharing experience.
  • the file generation apparatus 200 in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), or the like.
  • the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (television, TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • the file generation device 200 in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in this embodiment of the present application.
  • the file generation device 200 provided in the embodiment of the present application can implement various processes implemented by the file generation device in the method embodiments in FIGS. 1 to 19 . To avoid repetition, details are not repeated here.
  • The embodiment of the present application further provides an electronic device 210, as shown in FIG. 21, including a processor 2110, a memory 2109, and a program or instruction stored in the memory 2109 and executable on the processor 2110. When the program or instruction is executed by the processor, each process of the above embodiment of the file generation method is realized and the same technical effect is achieved; to avoid repetition, details are not repeated here.
  • the electronic device 210 in the embodiment of the present application includes the above-mentioned mobile electronic device and non-mobile electronic device.
  • FIG. 22 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 220 includes but is not limited to: a radio frequency unit 2201, a network module 2202, an audio output unit 2203, an input unit 2204, a sensor 2205, a display unit 2206, a user input unit 2207, an interface unit 2208, a memory 2209, a processor 2210, and other components.
  • the electronic device 220 may also include a power supply (such as a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 2210 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system.
  • The structure of the electronic device shown in FIG. 22 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown in the figure, combine some components, or have a different component arrangement, which is not repeated here.
  • the user input unit 2207 is configured to receive the user's first input on a target thumbnail among at least two video image thumbnails, where the at least two video image thumbnails are thumbnails of at least two video image frames in the first video;
  • the processor 2210 is configured to output a target file in response to the first input, where the target file includes a target video frame, and the target video frame is a video image frame corresponding to the target thumbnail.
  • the electronic device receives the user's first input on a target thumbnail among at least two video image thumbnails, where the at least two video image thumbnails are thumbnails of at least two video image frames in the first video; in response to the first input, a target file is output, where the target file includes a target video frame, and the target video frame is the video image frame corresponding to the target thumbnail.
  • In this way, the user can determine and output the target file including the video image frame corresponding to the above-mentioned target thumbnail through the first input on the target thumbnail among the at least two video image thumbnails corresponding to the first video, so that video clips or images can be conveniently intercepted from the first video, thus solving the problem in the related art that intercepting part of a video or an image from a video involves cumbersome operations.
  • Optionally, the first input includes a first sub-input and a second sub-input; the processor 2210 is specifically configured to: in a case of receiving the user's first sub-input on the first target thumbnail among the at least two video image thumbnails and the second sub-input on the second target thumbnail, output the target video, where the target video includes the video segment between the video image frame corresponding to the first target thumbnail and the video image frame corresponding to the second target thumbnail; and in a case of receiving the user's first sub-input and second sub-input on the third target thumbnail among the at least two video image thumbnails, output a target image, where the target image is the video image frame corresponding to the third target thumbnail.
  • the display unit 2206 is configured to display at least two video image thumbnails of the first video before receiving the user's first input on the target thumbnails in the at least two video image thumbnails, the at least The number of thumbnails of the two video images is smaller than the number of video image frames of the first video.
  • the thumbnails corresponding to at least two video image frames of the first video are displayed first, so that the user can select a desired target thumbnail, thereby determining the corresponding video image frame, and then generating a corresponding target file.
  • the user input unit 2207 is further configured to receive a second input from the user on the fourth target thumbnail among the at least two video image thumbnails;
  • the display unit 2206 is further configured to display video image thumbnails between the fourth target thumbnail and the fifth target thumbnail in response to the second input; wherein, the fifth target thumbnail is for receiving the A video image thumbnail adjacent to the fourth target thumbnail before the second input.
  • By performing the second input on the fourth target thumbnail, the display of thumbnails of the other video images between the fourth target thumbnail and its adjacent video image thumbnail can be triggered, which is convenient for the user to select the desired image or video clip.
  • the display unit 2206 is further configured to display a historical editing record window of the first video, the historical editing record window includes at least one piece of historical editing information, and the historical editing information is used to indicate that according to the first video Generate the corresponding target file;
  • the user input unit 2207 is further configured to receive a third input from the user on the first target editing information in the at least one piece of historical editing information;
  • the processor 2210 is further configured to, in response to the third input, send the first target file corresponding to the first target editing information to the first target contact.
  • By displaying a historical editing record window including at least one piece of historical editing information used to indicate generation of the corresponding target file according to the first video, it is convenient for the user to quickly intercept the corresponding target file from the first video according to the historical editing information.
  • the user input unit 2207 is further configured to receive a fourth input from the user on the second target editing information in the at least one piece of historical editing information;
  • the processor 2210 is further configured to execute a first editing process in response to the fourth input, where the first editing process includes at least one of the following: displaying file information of the second target file corresponding to the second target editing information; deleting the second target editing information; updating the second target editing information.
  • At least one of displaying the file information of the second target file, deleting the second target editing information, and updating the second target editing information can be performed, so as to better meet the actual needs of the user.
  • the processor 2210 is further configured to, after receiving a user's first input on target thumbnails in at least two video image thumbnails, generate first history editing information in response to the first input.
  • the manner in which the user intercepts the corresponding target file from the first video can be stored, so that the user can quickly extract the corresponding target file from the first video according to the historical editing information.
  • the user input unit 2207 is also configured to receive a fifth input from the user after outputting the target file;
  • the processor 2210 is further configured to send the target file to a second target contact in response to the fifth input.
  • After the target file is output, the target file can be quickly sent to the second target contact by performing the fifth input on the target file, so as to achieve the effect of quickly intercepting the target file from the first video for sharing.
  • the user input unit 2207 is further configured to receive a sixth input from the user on the target file after outputting the target file;
  • the processor 2210 is further configured to execute a second editing process in response to the sixth input, where the second editing process includes at least one of the following:
  • in a case that the target file includes a third target file and a fourth target file, updating the location information of the third target file and the fourth target file;
  • in a case that the target file includes a fifth target file and a sixth target file, performing file synthesis on the fifth target file and the sixth target file.
  • the display effect of the target files is optimized, thereby further enhancing the user's sharing experience.
  • the input unit 2204 may include a graphics processor (Graphics Processing Unit, GPU) 22041 and a microphone 22042, where the graphics processor 22041 is used to process image data of still pictures or videos obtained by an image capture device (such as a camera).
  • the display unit 2206 may include a display panel 22061, and the display panel 22061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 2207 includes a touch panel 22071 and other input devices 22072. The touch panel 22071 is also called a touch screen.
  • the touch panel 22071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 22072 may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the memory 2209 can be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • Processor 2210 may integrate an application processor and a modem processor, wherein the application processor mainly processes operating systems, user interfaces, and application programs, and the modem processor mainly processes wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 2210 .
  • the embodiment of the present application also provides a readable storage medium, on which a program or instruction is stored; when the program or instruction is executed by a processor, each process of the above embodiment of the file generation method is realized and the same technical effect is achieved, and to avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes computer readable storage medium, such as computer read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • the embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement each process of the above embodiment of the file generation method and achieve the same technical effect; to avoid repetition, details are not repeated here.
  • the chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
  • the term "comprising", "including", or any other variation thereof is intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent in the process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
  • the scope of the methods and devices in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a file generation method and apparatus, an electronic device, and a readable storage medium, belonging to the technical field of file sharing and transmission. The method comprises: receiving a user's first input on a target thumbnail among at least two video image thumbnails, the at least two video image thumbnails being thumbnails of at least two video image frames in a first video (100); and in response to the first input, outputting a target file, the target file comprising a target video frame, the target video frame being a video image frame corresponding to the target thumbnail (200).
PCT/CN2022/124926 2021-10-15 2022-10-12 Procédé et appareil de génération de fichiers, et dispositif électronique WO2023061414A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111212486.1 2021-10-15
CN202111212486.1A CN113918522A (zh) 2021-10-15 2021-10-15 一种文件生成方法、装置及电子设备

Publications (1)

Publication Number Publication Date
WO2023061414A1 true WO2023061414A1 (fr) 2023-04-20

Family

ID=79241356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/124926 WO2023061414A1 (fr) 2021-10-15 2022-10-12 Procédé et appareil de génération de fichiers, et dispositif électronique

Country Status (2)

Country Link
CN (1) CN113918522A (fr)
WO (1) WO2023061414A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113918522A (zh) * 2021-10-15 2022-01-11 维沃移动通信有限公司 一种文件生成方法、装置及电子设备
CN114679546A (zh) * 2022-03-31 2022-06-28 维沃移动通信有限公司 一种显示方法及其装置、电子设备和可读存储介质
CN114928761B (zh) * 2022-05-07 2024-04-12 维沃移动通信有限公司 视频分享方法、装置及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007110566A (ja) * 2005-10-14 2007-04-26 Sharp Corp 動画編集装置、および動画編集方法
US20150194186A1 (en) * 2014-01-08 2015-07-09 Lg Electronics Inc. Mobile terminal and controlling method thereof
US20160048313A1 (en) * 2014-08-18 2016-02-18 KnowMe Systems, Inc. Scripted digital media message generation
CN109905780A (zh) * 2019-03-30 2019-06-18 山东云缦智能科技有限公司 一种视频片段分享方法和智能机顶盒
CN110933509A (zh) * 2019-12-09 2020-03-27 北京字节跳动网络技术有限公司 一种信息发布的方法、装置、电子设备及存储介质
CN113242464A (zh) * 2021-01-28 2021-08-10 维沃移动通信有限公司 视频编辑方法、装置
CN113918522A (zh) * 2021-10-15 2022-01-11 维沃移动通信有限公司 一种文件生成方法、装置及电子设备


Also Published As

Publication number Publication date
CN113918522A (zh) 2022-01-11

Similar Documents

Publication Publication Date Title
US20220342519A1 (en) Content Presentation and Interaction Across Multiple Displays
WO2023061414A1 (fr) Procédé et appareil de génération de fichiers, et dispositif électronique
CN112153288B (zh) 用于发布视频或图像的方法、装置、设备和介质
TWI592021B (zh) 生成視頻的方法、裝置及終端
KR102013331B1 (ko) 듀얼 카메라를 구비하는 휴대 단말기의 이미지 합성 장치 및 방법
CN108334371B (zh) 编辑对象的方法和装置
US20120249575A1 (en) Display device for displaying related digital images
WO2008085751A1 (fr) Création d'illustration numérique basée sur des métadonnées d'un fichier de contenus
CN111343074B (zh) 一种视频处理方法、装置和设备以及存储介质
TW201545042A (zh) 暫態使用者介面元素
CN112672061B (zh) 视频拍摄方法、装置、电子设备及介质
WO2019242274A1 (fr) Procédé et dispositif de traitement de contenu
WO2023030306A1 (fr) Procédé et appareil d'édition vidéo, et dispositif électronique
WO2023072083A1 (fr) Procédé de traitement de fichier et dispositif électronique
WO2023040896A1 (fr) Procédé et appareil de partage de contenu et dispositif électronique
CN102799384A (zh) 进行外景截图的方法、客户端及系统
CN112954046A (zh) 信息发送方法、信息发送装置和电子设备
CN113986574A (zh) 评论内容的生成方法、装置、电子设备和存储介质
CN113988021A (zh) 内容互动方法、装置、电子设备及存储介质
WO2023155858A1 (fr) Procédé et appareil d'édition de documents
WO2023179539A1 (fr) Procédé et appareil de montage vidéo, et dispositif électronique
WO2023155874A1 (fr) Procédé et appareil de gestion d'icône d'application, et dispositif électronique
WO2023016476A1 (fr) Procédé et dispositif de capture d'écran
CN113810538B (zh) 视频编辑方法和视频编辑装置
CN115037874A (zh) 拍照方法、装置和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22880350

Country of ref document: EP

Kind code of ref document: A1