CN108881742B - Video generation method and terminal equipment

Info

Publication number
CN108881742B
Authority
CN
China
Prior art keywords
images
input
image
screen
terminal device
Prior art date
Legal status
Active
Application number
CN201810690884.6A
Other languages
Chinese (zh)
Other versions
CN108881742A (en)
Inventor
杨其豪
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201810690884.6A priority Critical patent/CN108881742B/en
Publication of CN108881742A publication Critical patent/CN108881742A/en
Application granted granted Critical
Publication of CN108881742B publication Critical patent/CN108881742B/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 — Details of television systems
    • H04N 5/222 — Studio circuitry; studio devices; studio equipment
    • H04N 5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04M — TELEPHONIC COMMUNICATION
    • H04M 1/00 — Substation equipment, e.g. for use by subscribers
    • H04M 1/72 — Mobile telephones; cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 — User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 — User interfaces with means for local support of applications that increase the functionality
    • H04M 1/7243 — User interfaces with interactive means for internal management of messages
    • H04M 1/72439 — User interfaces with interactive means for internal management of messages for image or video messaging

Abstract

The embodiment of the invention provides a video generation method and a terminal device, applied in the field of communication technology, to solve the problem that the process by which a terminal device edits pictures into a video is cumbersome. The scheme is applied to a terminal device comprising a first screen and a second screen, and comprises the following steps: receiving a first input of a user in a state where a first image is displayed on the first screen; displaying an image list on the second screen in response to the first input, wherein the image list comprises M images; receiving a second input of the user; and generating a target video according to N images in the image list in response to the second input; wherein the M images include the first image, the N images are the same as or different from the M images, and M and N are integers greater than 1. The scheme is particularly applicable to the process in which a terminal device selects images and generates a video.

Description

Video generation method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a video generation method and terminal equipment.
Background
With the development of communication technology, the degree of intelligence of terminal devices such as mobile phones and tablet computers keeps improving to meet users' various requirements. For example, users have increasingly high expectations for the convenience of editing pictures into videos on a terminal device.
Typically, pictures are saved in a gallery of the terminal device, and a video editing function that can edit pictures into a video may be provided by a third-party application in the terminal device. Specifically, using a third-party application, the user can control the terminal device to select pictures to be edited from the pictures stored on the device, and edit them into a video.
The problem with this method is that, when the user controls the terminal device to select multiple pictures to be edited, the user must perform a selection operation on each picture individually; that is, the user must control the terminal device through multiple selection operations to acquire the multiple pictures to be edited. Alternatively, the user can control the terminal device through a single selection operation to acquire only one picture group (a group comprising multiple pictures), some of which may not be pictures the user intended to select. As a result, the process of selecting pictures on the terminal device is cumbersome, and so is the process of editing pictures into a video.
Disclosure of Invention
The embodiment of the invention provides a video generation method and terminal equipment, and aims to solve the problem that the process of editing pictures and generating videos by the terminal equipment is complex.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a video generation method, applied to a terminal device that includes a first screen and a second screen. The method includes: receiving a first input of a user in a state where a first image is displayed on the first screen; displaying an image list on the second screen in response to the first input, wherein the image list comprises M images; receiving a second input of the user; and generating a target video according to N images in the image list in response to the second input; wherein the M images include the first image, the N images are the same as or different from the M images, and M and N are integers greater than 1.
In a second aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a first screen and a second screen, and further includes a receiving module, a display module, and a generating module. The receiving module is configured to receive a first input of a user in a state where a first image is displayed on the first screen. The display module is configured to display an image list on the second screen in response to the first input received by the receiving module, where the image list comprises M images. The receiving module is further configured to receive a second input of the user. The generating module is configured to generate, in response to the second input received by the receiving module, a target video according to N images in the image list. The M images include the first image, the N images are the same as or different from the M images, and M and N are integers greater than 1.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video generation method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the video generation method according to the first aspect.
In an embodiment of the present invention, a terminal device includes a first screen and a second screen. The terminal device receives a first input from a user in a state where a first image is displayed on the first screen, and displays, in response to the first input, an image list including M images on the second screen, where the M images are acquired by the terminal device at least according to the first image and include the first image. The terminal device receives a second input from the user and may generate a target video from N images in the image list in response to the second input, where the N images are the same as or different from the M images, and M and N are integers greater than 1. Based on this scheme, when the terminal device receives the first input while the first screen displays the first image, it can select M images at least according to the first image, without receiving a selection input from the user for each image individually, and add the M images to the image list displayed on the second screen. The user can then select N images, the same as or different from the M images, from the image list and edit them into the target video. In addition, the user can view the images to be edited into the video on the second screen while browsing the first image on the first screen, which helps the user select the images he or she needs. Therefore, the steps by which the user controls the terminal device to select images can be simplified, thereby simplifying the steps by which the terminal device edits images and generates a video.
Drawings
Fig. 1 is a schematic architecture diagram of a possible android operating system according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an operation provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a video generation method according to an embodiment of the present invention;
Fig. 4 is a first schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 5 is a second schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 6 is a third schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 7 is a fourth schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 8 is a fifth schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 9 is a sixth schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 10 is a seventh schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a possible terminal device according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" in this context means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. "Plurality" means two or more than two.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs.
According to the video generation method provided by the embodiment of the invention, the terminal equipment receives the first input in the state that the first screen displays the first image, and can select M images at least according to the first image without receiving the selection input of the user aiming at each image in the plurality of images, so that the M images are added into the image list displayed on the second screen. Thus, the user can select N images that are the same as or different from the M images from the image list and edit the N images into the target video. Therefore, the step of controlling the terminal equipment to select the image by the user can be simplified, so that the steps of editing the image and generating the video by the terminal equipment are simplified.
The terminal device in the embodiment of the invention can be a mobile terminal device and can also be a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
It should be noted that, in the video generation method provided in the embodiment of the present invention, the execution subject may be a terminal device (including a mobile terminal device and a non-mobile terminal device), a Central Processing Unit (CPU) of the terminal device, or a control module in the terminal device for executing the video generation method. The video generation method provided by an embodiment of the present invention is described below by taking a terminal device executing the method as an example.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and embodiments of the present invention are not specifically limited thereto.
The following describes a software environment applied to the video generation method provided by the embodiment of the present invention, taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of applications, and a developer can develop some applications based on the application framework layer while complying with the development principles of the framework, for example, system applications such as a system settings application, a system chat application, and a system camera application, as well as third-party applications such as a third-party settings application, a third-party camera application, and a third-party chat application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the video generation method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the video generation method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the video generation method provided by the embodiment of the invention by running the software program in the android operating system.
Clockwise, anticlockwise, up, down, left, right, and the like in the embodiments of the present invention are illustrated by taking the user's input on the display screen of the terminal device as an example; that is, these directions are defined, relative to the terminal device or its display screen, in terms of the user's input on the display screen.
Illustratively, taking the user sliding in all directions in the area where the identifier of the Application (APP) is located as an example, as shown in fig. 2, on the display screen of the terminal device, 20 indicates that the user slides clockwise in the area where the identifier of the APP is located, 21 indicates that the user slides counterclockwise in the area where the identifier of the APP is located, 22 indicates that the user slides upward in the area where the identifier of the APP is located, 23 indicates that the user slides downward in the area where the identifier of the APP is located, 24 indicates that the user slides leftward in the area where the identifier of the APP is located, and 25 indicates that the user slides rightward in the area where the identifier of the APP is located.
The video generation method provided by the embodiment of the present invention is described in detail below with reference to the flowchart shown in fig. 3. The method is applied to a terminal device that includes a first screen and a second screen. Although the logical order of the video generation method is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one here. For example, the video generation method shown in fig. 3 may include steps 301 to 304:
step 301, the terminal device receives a first input of a user in a state that the first image is displayed on the first screen.
A gallery application (such as a system gallery application) in the terminal device stores a plurality of images (i.e., pictures).
Optionally, in the embodiment of the present invention, the interface displayed on the first screen of the terminal device may be an interface, provided by the gallery application, for browsing images, and the interface may display one image at maximum size. For example, the user may control the terminal device to display the first image on the first screen.
Specifically, the first input is used to trigger the terminal device to start selecting an image.
It should be noted that the screens (including the first screen and the second screen) of the terminal device provided in the embodiment of the present invention may be touch screens, and a touch screen may be configured to receive an input from the user and, in response, display content corresponding to the input. The first input may be a touch-screen input, a fingerprint input, a gravity input, a key input, or the like. The touch-screen input is an input such as a press input, a long-press input, a slide input, a click input, or a hover input (an input by the user near the touch screen) on the touch screen of the terminal device. The fingerprint input is an input such as a sliding fingerprint, a long-press fingerprint, a single-click fingerprint, or a double-click fingerprint on a fingerprint reader of the terminal device. The gravity input is an input such as shaking the terminal device in a specific direction or shaking it a specific number of times. The key input includes a single-click input, a double-click input, a long-press input, a combination-key input, and the like on a key of the terminal device such as a power key, a volume key, or a Home key. Specifically, the operation mode of the first input is not specifically limited in the embodiment of the present invention and may be any achievable operation mode.
Illustratively, the click input may be a single click input, a double click input, or any number of click inputs. The slide input may be a slide input in any direction, such as an upward slide, a downward slide, a leftward slide, or a rightward slide.
Alternatively, the first input (denoted as input 1) may include a sub-input of the user on the first screen (denoted as sub-input Ta) and a sub-input of the user on the second screen (denoted as sub-input Tb). It will be appreciated that the sub-input Ta and the sub-input Tb of the first input may be operated in the same way or in different ways.
It should be noted that the absolute value of the time difference between the sub input Ta and the sub input Tb received by the terminal device is within the preset range.
The preset range is greater than or equal to 0 and less than or equal to t1, where t1 may be 1 s, 2 s, or another value determined by actual conditions; the embodiments of the present invention are not limited thereto. The terminal device may receive the sub-input Ta and the sub-input Tb in either order: it may receive the sub-input Ta first and then the sub-input Tb, or receive the sub-input Tb first and then the sub-input Ta, which is not limited in the embodiment of the present invention.
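As an illustration of this timing condition, the following minimal Kotlin sketch (all names are hypothetical; the patent does not prescribe any implementation) treats the sub-input Ta and the sub-input Tb as one combined first input only when they arrive on different screens within the preset range:
```kotlin
import kotlin.math.abs

// Hypothetical model of a sub-input: which screen received it and when.
data class SubInput(val screenId: Int, val timestampMs: Long)

const val T1_MS = 1000L // preset upper bound t1; 1 s is one of the values mentioned above

fun isCombinedFirstInput(ta: SubInput, tb: SubInput): Boolean {
    // The order of arrival does not matter: Ta may precede Tb or vice versa.
    return ta.screenId != tb.screenId && abs(ta.timestampMs - tb.timestampMs) <= T1_MS
}

fun main() {
    val ta = SubInput(screenId = 1, timestampMs = 10_000)
    val tb = SubInput(screenId = 2, timestampMs = 10_400)
    println(isCombinedFirstInput(ta, tb)) // true: received 0.4 s apart on different screens
}
```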
Exemplarily, fig. 4 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. The terminal device shown in fig. 4 includes a screen 41 and a screen 42, where the screen 41 and the screen 42 are the first screen and the second screen, respectively. Specifically, the screen 41 shown in (4a) in fig. 4 displays an image P1, which may be the first image; with nothing displayed on the screen 42, the terminal device receives the user's sub-input Ta on the screen 41 and sub-input Tb on the screen 42. The sub-input Ta is a sliding input from the bottom edge to the left edge of the screen 41, sliding from bottom to top and from right to left, and its sliding track is an arc. The sub-input Tb is a sliding input from the bottom edge to the right edge of the screen 42, sliding from bottom to top and from left to right, and its sliding track is an arc.
Step 302, responding to the first input, the terminal device displays an image list on the second screen, wherein the image list comprises M images.
Specifically, the M images are acquired by the terminal device at least according to the first image.
Wherein the M images include a first image, and M is an integer greater than 1.
Illustratively, when the screen 41 shown in (4a) in fig. 4 displays the image P1 and the terminal device receives the first input including the sub-input Ta and the sub-input Tb shown in (4a) in fig. 4, the terminal device may display the content shown in (4b) in fig. 4. While the image P1 is displayed on the screen 41 in (4b) in fig. 4, the image list displayed on the screen 42 in (4b) in fig. 4 includes 5 images, namely the image P2, the image P3, the image P1, the image P4, and the image P5; in this case M is equal to 5. These 5 images include the image P1, that is, the M images include the first image.
Optionally, in the embodiment of the present invention, the terminal device may display the same image at the same or different resolutions on the first screen and the second screen. For example, an image may be displayed at maximum size on the first screen and as a thumbnail on the second screen, i.e., the image has a higher display resolution on the first screen than on the second screen.
Optionally, in the embodiment of the present invention, the image displayed on the first screen may be a local image stored in a gallery application program in the terminal device, or a cloud image previewed in the gallery application program or a network-side image.
Step 303, the terminal device receives a second input of the user.
The second input is an input by which the user triggers the terminal device to determine, from the displayed M images, the plurality of images needed to generate the video, and to edit the plurality of images into a video, for example, an input that triggers the terminal device to edit the image P2, the image P3, the image P1, the image P4, and the image P5 displayed on the screen 42 in (4b) in fig. 4 into a video.
And 304, responding to the second input, and generating the target video by the terminal equipment according to the N images in the image list.
Wherein the N images are the same as or different from the M images, and N is an integer greater than 1.
Illustratively, the terminal device edits the image P2, the image P3, the image P1, the image P4, and the image P5 displayed on the screen 42 in (4b) in fig. 4 into a video. In this case, the N images are identical to the M images.
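For concreteness, a minimal sketch of step 304 follows, assuming placeholder Image and Video types; a real implementation on Android might hand the ordered frames to an encoder such as MediaCodec, which the patent leaves unspecified:
```kotlin
// Hypothetical types: an image identified by name, and a video as an ordered frame list.
data class Image(val name: String)
data class Video(val frames: List<Image>)

// Step 304 in miniature: the N selected images become the ordered frames of the target video.
fun generateTargetVideo(selected: List<Image>): Video = Video(selected.toList())

fun main() {
    val n = listOf("P2", "P3", "P1", "P4", "P5").map(::Image)
    println(generateTargetVideo(n)) // frames in list order: P2, P3, P1, P4, P5
}
```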
In the prior art, when the user controls the terminal device to use the video editing function of a third-party application, the user may trigger the terminal device to display a selection interface of the third-party application for selecting images, and control the terminal device to select the images to be edited on that selection interface. Then the user may trigger the terminal device to exit the selection interface and display an editing interface of the third-party application for editing the images to be edited into a video. When several third-party applications with video editing functions are installed on the terminal device, the inputs with which the user triggers the video editing function may differ from one application to another. Therefore, in the prior art, the user must learn the usage steps of the video editing function of each third-party application, which is time-consuming and labor-intensive.
It can be understood that, in the video generation method provided in the embodiment of the present invention, the gallery application (e.g., the system gallery application) in the terminal device has a video editing function. While the terminal device uses the video editing function of the gallery application, the user can trigger the terminal device to select the images to be edited (such as the above M images) through a specific input (such as the first input). At this time, the interface of the gallery application is displayed normally on the first screen of the terminal device, while the second screen displays the image list including the images to be edited selected by the user, which serves as the editing interface for editing those images. Therefore, the user can select images to be edited while browsing images with the gallery application on the terminal device.
It is conceivable that, while browsing images with the gallery application of the terminal device, a user may want to edit some of the displayed images into a video. In the prior art, the terminal device must exit the gallery application and enter a third-party application, so that the user has to browse and select the images to be edited all over again. In the embodiment of the present invention, the user can select the images to be edited while the terminal device is browsing images with the gallery application. Therefore, the user's operations in controlling the terminal device to generate a video can be simplified, and the display effect of the images to be edited selected by the user in this process can be improved, thereby improving the user experience of editing images into a video on the terminal device.
It should be noted that, in the video generation method provided in the embodiment of the present invention, the terminal device includes a first screen and a second screen. The terminal device receives a first input from a user in a state where a first image is displayed on the first screen, and displays, in response to the first input, an image list including M images on the second screen, where the M images are acquired by the terminal device at least according to the first image and include the first image. The terminal device receives a second input from the user and may generate a target video from N images in the image list in response to the second input, where the N images are the same as or different from the M images, and M and N are integers greater than 1. Based on this scheme, when the terminal device receives the first input while the first screen displays the first image, it can select M images at least according to the first image, without receiving a selection input from the user for each image individually, and add the M images to the image list displayed on the second screen. The user can then select N images, the same as or different from the M images, from the image list and edit them into the target video. In addition, the user can view the images to be edited into the video on the second screen while browsing the first image on the first screen, which helps the user select the images he or she needs. Therefore, the steps by which the user controls the terminal device to select images can be simplified, thereby simplifying the steps by which the terminal device edits images and generates a video.
In a possible implementation manner, according to the video generation method provided by the embodiment of the present invention, the user may control the terminal device to update the number or the arrangement order of the M images to obtain N images, where the N images are different from the M images. Specifically, the second input includes a first sub-input, and the first sub-input is used to trigger updating the M images in the image list to N images. Step 304 provided by the embodiment of the present invention may be replaced with step 304a:
Step 304a, the terminal device updates the M images in the image list to N images, and generates the target video according to the N images.
For example, for the operation mode of the first sub-input, reference may be made to the description of the operation mode of the first input in the foregoing embodiment; details are not repeated here.
The first sub-input may be used to add or remove images to be edited, that is, to add images to the M images or to delete images from the M images.
Specifically, the user may control the terminal device to edit the finally selected image to be edited (i.e., the N images) into one video, so as to obtain the target video.
It should be noted that, with the video generation method provided in the embodiment of the present invention, the terminal device may update and display the newly selected images to be edited on the second screen while displaying images of the gallery application on the first screen. That is, the user does not need to control the terminal device to exit the gallery application before displaying an editing interface for editing images. In this way, the steps by which the user selects images through the terminal device can be further simplified, thereby further simplifying the steps of editing images and generating a video on the terminal device.
In a possible implementation manner, in the video generation method provided by the embodiment of the present invention, the second input further includes N-1 second sub-inputs, where the N-1 second sub-inputs are used to determine the order of the frames into which the N images are synthesized in the video.
Specifically, "generating the target video according to the N images" in the above step 304 may be implemented by step 304b:
Step 304b, the terminal device synthesizes the N images in the order of the frames to generate the target video.
Specifically, each second sub-input is used to arrange the order of the images to obtain the order of the frames of the images in the target video to be generated. Wherein each second sub-input may be an input of the user at any position on the second screen of the terminal device.
It should be noted that, with the video generation method provided in the embodiment of the present invention, the user can control the terminal device to arrange the order of the selected images to be edited (such as the above N images) as required, so that the order of the frames in the target video meets the user's requirements. The N-1 second sub-inputs can be made by the user at any position on the second screen of the terminal device, without dragging any of the N images, which avoids the cumbersome operations of a dragging process. Therefore, the steps of editing images and generating a video on the terminal device are further simplified.
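The following sketch illustrates one possible reading of this mechanism, with hypothetical names: the first frame is the image the pointer indicates when the first second sub-input arrives, and each subsequent sub-input contributes the image the pointer then indicates, so N pointer positions are driven by N-1 reordering inputs:
```kotlin
// pointerPositions: index of the pointer-indicated image at each step of the ordering.
fun frameOrder(imageList: List<String>, pointerPositions: List<Int>): List<String> =
    pointerPositions.map { imageList[it] }

fun main() {
    val list = listOf("P2", "P3", "P1", "P4", "P5")
    // Reproduces the frame order of the example described below: P3, P1, P2, P4, P5.
    println(frameOrder(list, listOf(1, 2, 0, 3, 4)))
}
```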
In a possible implementation manner, in the video generating method provided by the embodiment of the present invention, a pointer identifier is further displayed on the image list, and the image displayed on the first screen is an image indicated by the pointer identifier. The second sub-input is further used to trigger moving the pointer identification from a first position of the second image on the image list to a second position of the first image on the image list and updating the second image displayed on the first screen to the first image.
It is understood that the pointer on the image list identifies the indicated image as the image selected by the user.
Specifically, the pointer identifier displayed on the second screen is located at the position of an image on the image list, that is, at the position corresponding to that image.
Optionally, the position corresponding to an image at which the pointer identifier is located may be above the image or near the image, such as to the left or to the right of the image.
Optionally, each second sub-input may be a sliding input by the user on the second screen whose sliding track is a circle; such a sub-input is denoted as the first type of sub-input. When the sliding track is a clockwise circle, the terminal device takes the image indicated by the current pointer identifier as the previous frame image and takes the next image in the list as the next frame image; when the sliding track is an anticlockwise circle, the terminal device takes the image indicated by the current pointer identifier as the previous frame image and takes the image before it in the list as the next frame image.
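A minimal sketch of the first type of second sub-input follows, assuming the circle direction has already been recognized from the sliding track (gesture recognition itself is not specified by the patent):
```kotlin
enum class CircleDirection { CLOCKWISE, ANTICLOCKWISE }

// A clockwise circle advances the pointer to the next image in the list;
// an anticlockwise circle moves it back to the previous image.
fun movePointer(current: Int, dir: CircleDirection, size: Int): Int = when (dir) {
    CircleDirection.CLOCKWISE -> (current + 1).coerceAtMost(size - 1)
    CircleDirection.ANTICLOCKWISE -> (current - 1).coerceAtLeast(0)
}

fun main() {
    // Pointer on P3 (index 1 of [P2, P3, P1, P4, P5]); a clockwise circle moves it to P1.
    println(movePointer(1, CircleDirection.CLOCKWISE, 5)) // 2
}
```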
In addition, each second sub-input may also be a sub-input including one input of the user on the first screen (denoted as input Tc) and another input on the second screen (denoted as input Td); such a sub-input is denoted as the second type of sub-input. Illustratively, the input Td is a press input by the user on the second screen, and the input Tc is a slide input by the user on the first screen. Specifically, while the user's fingers perform the second type of sub-input on the first screen and the second screen of the terminal device, the terminal device may move the pointer identifier, following the user's finger, to the position corresponding to the operation, and take the image at that position as the current frame image, that is, the frame following the previous frame image.
Note that the absolute value of the time difference between the time when the terminal device receives the input Tc and the time when the terminal device receives the input Td is within a preset range.
The terminal device may take the image indicated by the current pointer identifier (e.g., the image P3 in fig. 5) as the first frame image when it receives the first of the N-1 second sub-inputs, which is typically a sub-input of the first type.
Exemplarily, fig. 5 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. The pointer identifier 43 is displayed on the right side of the image P3 shown on the screen 42 in (5a) in fig. 5. At this time, the image P3 may be the above-described second image. The terminal device may receive an input T2 as shown on the screen 42 in (5a) in fig. 5; the input T2 may be the first of the above-mentioned N-1 second sub-inputs, and it is a slide input whose sliding track is a clockwise circle. After receiving the input T2, the terminal device may display the content shown in (5b) in fig. 5.
In (5b) in fig. 5, the image P1 is displayed on the screen 41, and the pointer identifier 43 displayed on the screen 42 is located on the right side of the image P1. That is, the terminal device moves the pointer identifier from the position corresponding to the image P3 to the position corresponding to the image P1, and updates the image P3 displayed on the first screen to the image P1. At this time, the image P3 is the second image, and the image P1 is the first image. Specifically, the terminal device takes the image P3 as the first frame image of the video and the image P1 as the second frame image of the video.
Subsequently, fig. 6 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. In conjunction with fig. 5, the pointer identifier 43 is displayed on the right side of the image P1 shown on the screen 42 in (6a) in fig. 6. The screen 41 shown in (6a) in fig. 6 receives a slide input (the input Tc) from the user, sliding from bottom to top and from right to left from the bottom edge to the left edge of the first screen with an arc-shaped sliding track, and the second screen receives a press input (the input Td) from the user. In this way, the user may control the terminal device to move the pointer identifier 43 on the screen 42 from the right side of the image P1 to the right side of the image P2. Subsequently, as shown in (6b) in fig. 6, the pointer identifier 43 on the screen 42 is displayed on the right side of the image P2. Specifically, the terminal device takes the image P2 as the third frame image of the video.
Similarly, the terminal device may take the image P4 as the fourth frame image of the video and the image P5 as the fifth frame image of the video. As such, the sequence of frames of images in the target video is image P3, image P1, image P2, image P4, and image P5.
It should be noted that, in the video generation method provided in the embodiment of the present invention, the terminal device may move the pointer identifier on the second screen under the user's control, so as to determine the order of the frames of the images in the video to be generated. Meanwhile, the image displayed on the first screen may change as the pointer identifier moves. In this way, the user can conveniently arrange the order of the N images as required and, while doing so, view on the first screen the maximized display of the currently selected image (namely, the image indicated by the pointer identifier).
In a possible implementation manner, in the video generating method provided in the embodiment of the present invention, before the step 302, the method may further include the step 305:
step 305, the terminal device obtains M images at least according to the first image.
Optionally, M may be a fixed value preset by the user in the terminal device, where the M images include the first image. The user may modify M before performing the first input.
Alternatively, step 305 may be implemented as step 305a:
Step 305a, the terminal device obtains the M images according to the attribute information of the first image, where M is a preset value and the arrangement order of the M images is associated with the attribute information of the M images.
For example, the attribute information of an image may be the data size, the name (e.g., the initial letter of the name), the saving time, the saving path, the shooting location, or similar information of the image. Correspondingly, the arrangement order of the images may be the order of data size from small to large, the alphabetical order of the initial letters of the names, the order of saving time from earliest to latest, the alphabetical order of the initial letters of the saving paths, or the alphabetical order of the initial letters of the shooting locations.
Optionally, the M images may be centered on the first image and consist of the first image together with several images whose attribute information precedes that of the first image and several images whose attribute information follows it. Alternatively, the M images may consist of the first image and the M-1 images whose attribute information precedes that of the first image.
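The following sketch illustrates the "centered on the first image" variant under the assumption that the attribute information is the saving time; all names are illustrative:
```kotlin
// Hypothetical gallery entry: name plus saving time as the attribute information.
data class GalleryImage(val name: String, val savedAt: Long)

// Pick M images from the gallery, sorted by saving time and centered on the first image,
// clamping at the ends of the gallery so M images are still returned when possible.
fun selectCentered(gallery: List<GalleryImage>, first: GalleryImage, m: Int): List<GalleryImage> {
    val sorted = gallery.sortedBy { it.savedAt }
    val i = sorted.indexOf(first)
    val before = (m - 1) / 2
    val start = (i - before).coerceAtLeast(0)
    val end = (start + m).coerceAtMost(sorted.size)
    return sorted.subList((end - m).coerceAtLeast(0), end)
}

fun main() {
    val gallery = (1..9).map { GalleryImage("P$it", it.toLong()) }
    // M = 5 images centered on P3: P1..P5 (P3 plus two before and two after).
    println(selectCentered(gallery, gallery[2], 5).map { it.name })
}
```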
Alternatively, the above step 305 may be implemented by step 305b:
Step 305b, the terminal device acquires the M images, in the order of their attribute information, based on the attribute information of the first image and on the first input.
Wherein the value of M is associated with the input parameter of the first input; the input parameter includes at least one of an input duration, a length of an input sliding trajectory, and an input pressure value.
Illustratively, the longer the duration of the first input in the embodiment of the present invention is, the larger the value of M determined by the terminal device is.
Similarly, for the effect of the sliding-track length and of the pressure value of the first input on the value of M, reference may be made to the above description of the duration of the first input; details are not repeated here.
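A hypothetical sketch of this association is shown below; the scaling constants are invented for illustration, since the patent only states that M grows with these input parameters:
```kotlin
// Hypothetical input parameters of the first input.
data class InputParams(val durationMs: Long, val trackLengthPx: Float, val pressure: Float)

fun valueOfM(p: InputParams): Int {
    val fromDuration = (p.durationMs / 500).toInt()  // +1 per half second (assumed scale)
    val fromTrack = (p.trackLengthPx / 200f).toInt() // +1 per 200 px of sliding track (assumed)
    val fromPressure = (p.pressure * 2).toInt()      // +1 per 0.5 pressure unit (assumed)
    return 2 + fromDuration + fromTrack + fromPressure // base of 2, since M must exceed 1
}

fun main() {
    println(valueOfM(InputParams(durationMs = 1000, trackLengthPx = 400f, pressure = 0.5f))) // 7
}
```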
It should be noted that, in the video generation method provided in the embodiment of the present invention, the terminal device may obtain a fixed number of M images including the first image. Therefore, the steps of editing the image and generating the video by the terminal equipment are further simplified.
In a possible implementation manner, in the video generation method provided by the embodiment of the present invention, the first sub-input is specifically used to trigger the removal of the image indicated by the pointer identifier from the second screen.
Optionally, for a description of the first sub-input, reference may be made to the above description of the first input; details are not repeated here.
Alternatively, the first sub-input may comprise one input by the user on the first screen (denoted as input Te) and another input on the second screen (denoted as input Tf). Illustratively, the input Te is a sliding input from the bottom edge to the left edge of the first screen, sliding from bottom to top and from right to left, with an arc-shaped sliding track. The input Tf is a sliding input on the second screen whose sliding track is a straight line from left to right.
Exemplarily, fig. 7 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. The pointer identifier 43 is displayed on the right side of the image P1 shown on the screen 42 in (7a) in fig. 7. The screen 41 shown in (7a) in fig. 7 receives the slide input (the input Te) from the bottom edge to the left edge of the first screen, sliding from bottom to top and from right to left with an arc-shaped sliding track, and the second screen receives the slide input (the input Tf) whose sliding track is a straight line from left to right.
It will be appreciated that the above input Te is mainly used to move the position of the pointer identifier, i.e., to change the selected image, while the input Tf is mainly used to remove the image indicated by the pointer identifier, i.e., the selected image.
Specifically, "the terminal device updates M images in the image list to N images" in the above step 304a can be implemented by step 306:
Step 306, the terminal device removes the image indicated by the pointer identifier from the second screen, so as to update the displayed M images to N images.
For example, as shown in (7b) in fig. 7, the user can control the terminal device to move the pointer identifier 43 on the screen 42 from the right side of the image P1 to the right side of the image P4, and to display the image P4 on the screen 41.
Illustratively, N is then equal to 4, and the N images include the image P2, the image P3, the image P4, and the image P5.
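A minimal sketch of step 306 follows, using the fig. 7 example; the string list stands in for the image list displayed on the second screen:
```kotlin
// Remove the pointer-indicated image, updating the M displayed images to N = M - 1 images.
fun removeIndicated(images: MutableList<String>, pointerIndex: Int): List<String> {
    images.removeAt(pointerIndex)
    return images
}

fun main() {
    val m = mutableListOf("P2", "P3", "P1", "P4", "P5")
    // Pointer on P1 (index 2), as in fig. 7: the result matches the example,
    // N = 4 with the images P2, P3, P4, P5.
    println(removeIndicated(m, 2))
}
```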
It should be noted that, with the video generation method provided by the embodiment of the present invention, the terminal device may allow the user to delete unwanted images displayed on the second screen, so that all the images in the generated target video are images the user needs.
In a possible implementation manner, in the video generation method provided by the embodiment of the present invention, the first sub-input is specifically used to trigger addition of P images to the image list, a value of P is associated with an input parameter of the first sub-input, and P is a positive integer.
Specifically, "the terminal device updates M images in the image list to N images" in the above step 304a may be implemented by step 306a:
Step 306a, the terminal device acquires the P images, and arranges and displays the P images and the M images according to the attribute information of the P images and the attribute information of the M images.
The arrangement order of the P images is associated with the attribute information of the P images, the image list includes the arranged P images and M images, and N is M + P.
Optionally, the image contents of the P images are determined according to the input area of the first sub-input.
Illustratively, the P images are arranged before the M images; or the P images are arranged after the M images; or some of the P images are arranged before the M images and the remaining P images are arranged after the M images.
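A minimal sketch of step 306a follows, assuming the attribute information is the saving time; it merges the P additional images into the M images by attribute order, consistent with the example later in the description (the image P6 sorts before the M images and the image P7 after them):
```kotlin
// Hypothetical entry: name plus saving time as the attribute information.
data class Img(val name: String, val savedAt: Long)

// The list ends up holding N = M + P images, positioned by attribute order.
fun mergeByAttribute(mImages: List<Img>, pImages: List<Img>): List<Img> =
    (mImages + pImages).sortedBy { it.savedAt }

fun main() {
    val m = listOf(Img("P2", 2), Img("P3", 3), Img("P1", 4), Img("P4", 5), Img("P5", 6))
    val p = listOf(Img("P6", 1), Img("P7", 7)) // P6 sorts before the M images, P7 after
    println(mergeByAttribute(m, p).map { it.name }) // [P6, P2, P3, P1, P4, P5, P7]
}
```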
Alternatively, the first sub-input may comprise one input by the user on the first screen (denoted as input Tg) and another input on the second screen (denoted as input Th). Illustratively, the input Tg is a sliding input on the first screen whose sliding track is a straight line from right to left, and the input Th is a sliding input on the second screen whose sliding track is a straight line from left to right. In this case, the first sub-input is used to trigger the terminal device to acquire images other than the M images and add them as images to be edited, that is, to add them to the M images.
Note that the absolute value of the time difference between the time when the terminal device receives the input Tg and the time when the terminal device receives the input Th is within the preset range.
Alternatively, the relationship between the arrangement order of the P images and the arrangement order of the M images may be determined by the input position of the first sub-input on the first screen.
Illustratively, when the input Tg in the first sub-input is in the upper one-third region of the first screen (denoted as region Q1) and the input Th is in the upper one-third region of the second screen (denoted as region Q2), the P images are arranged before the M images.
When the input Tg in the first sub-input is in the middle one-third region of the first screen (denoted as region Q3) and the input Th is in the middle one-third region of the second screen (denoted as region Q4), some of the P images (denoted as first partial images) are arranged before the M images, and the remaining P images (denoted as second partial images) are arranged after the M images. Specifically, the number of the first partial images may be the same as or different from the number of the second partial images.
When the input Tg in the first sub-input is in the lower one-third region of the first screen (denoted as region Q5) and the input Th is in the lower one-third region of the second screen (denoted as region Q6), the P images are arranged after the M images.
Exemplarily, fig. 8 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. In conjunction with fig. 4, the input Tg shown in (8a) in fig. 8 is a slide input by the user on the region Q3 of the first screen whose sliding track is a straight line from right to left, and the input Th is a slide input by the user on the region Q4 of the second screen whose sliding track is a straight line from left to right. Subsequently, after the terminal device receives the input Tg and the input Th, it may display the content shown in (8b) in fig. 8: the image P6 is arranged before the M images, and the image P7 is arranged after the M images. The image P6 and the image P7 are the above-mentioned P images; in this case P is equal to 2.
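The region-to-placement mapping described above can be sketched as follows; the thirds-based regions Q1 through Q6 follow the description, while how the middle region splits the P images is left open by the patent:
```kotlin
enum class Placement { BEFORE, SPLIT, AFTER }

// yFraction: vertical position of the first sub-input as a fraction of screen height.
fun placementFor(yFraction: Float): Placement = when {
    yFraction < 1f / 3f -> Placement.BEFORE // regions Q1/Q2: P images go before the M images
    yFraction < 2f / 3f -> Placement.SPLIT  // regions Q3/Q4: some before, the rest after
    else -> Placement.AFTER                 // regions Q5/Q6: P images go after the M images
}

fun main() {
    println(placementFor(0.5f)) // SPLIT: some P images before, the remaining P images after
}
```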
In the video generation method provided in the embodiment of the present invention, the terminal device may select P images according to the input parameter and the input area of the first sub-input in the second input of the user, and update the M images into N images using the P images. Therefore, the flexibility of the image selecting process is improved, and the steps of editing the image and generating the video by the terminal equipment are further simplified.
Optionally, the first sub-input may be used to trigger the terminal device to copy and display L images on the second screen, where the arrangement order of the copied images is determined by the direction and/or the end position of the first sub-input on the second screen, and L is a positive integer.
Specifically, "the terminal device updates M images in the image list to N images" in the above step 304a may be implemented by step 306b:
Step 306b, the terminal device copies the image indicated by the pointer identifier, and arranges and displays the copied image and the M images according to the attribute information of the copied image and the attribute information of the M images.
The first sub-input may comprise one input by the user on the first screen (denoted as input Ti) and another input on the second screen (denoted as input Tj). Illustratively, the input Ti is a sliding input in which a sliding track of the user on the first screen is a straight line from right to left. The input Tj is a sliding input in which a sliding track of the user on the second screen is a straight line from top to bottom.
Exemplarily, fig. 9 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. Referring to fig. 4, the input Ti shown in (9a) in fig. 9 is a slide input by the user on the first screen whose sliding track is a straight line from right to left, and the input Tj is a slide input by the user on the second screen whose sliding track is a straight line from top to bottom, which triggers the terminal device to copy the image P1, resulting in the image P8. Subsequently, after the terminal device receives the input Ti and the input Tj, it may display the content shown in (9b) in fig. 9, where the image P8 is arranged immediately after the image P1.
Therefore, the terminal device can conveniently and quickly copy images for the video to be generated, so that several (e.g., two) consecutively arranged images in the generated target video have the same content, thereby achieving a freeze-frame effect in the target video.
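A minimal sketch of the copy operation follows; the "-copy" suffix is only an illustration of the duplicated entry (in the description the copy is a new image, P8):
```kotlin
// Duplicate the pointer-indicated image and place the copy right after the original,
// so two consecutive frames of the generated video show the same content.
fun copyIndicated(images: MutableList<String>, pointerIndex: Int): List<String> {
    images.add(pointerIndex + 1, images[pointerIndex] + "-copy")
    return images
}

fun main() {
    val list = mutableListOf("P2", "P3", "P1", "P4", "P5")
    println(copyIndicated(list, 2)) // [P2, P3, P1, P1-copy, P4, P5]
}
```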
In a possible implementation manner, in the video generation method provided by the embodiment of the present invention, after "the terminal device updates M images in the image list to N images" in step 304a, the method may further include steps 307 and 308:
Step 307, the terminal device receives a third input of the user.
For example, for the operation mode of the third input, reference may be made to the description of the operation mode of the first input in the foregoing embodiment; details are not repeated here.
Alternatively, the third input may include a sub-input of the user on the first screen (denoted as input Tx) and a sub-input of the user on the second screen (denoted as input Ty). It will be appreciated that the input Tx and the input Ty of the third input may be operated in the same way or in different ways.
Note that the absolute value of the time difference between the times at which the terminal device receives the input Tx and the input Ty is within the preset range.
Exemplarily, as shown in fig. 10, a schematic diagram of displaying content for a terminal device according to an embodiment of the present invention is provided. In conjunction with (7b) in fig. 7, the input Tx shown in (10a) in fig. 10 is a sliding input from top to bottom and from left to right from the left side edge to the bottom edge of the first screen, and the sliding trajectory of the sliding input is a circular arc. The input Ty is a sliding input from top to bottom and from right to left from the right side edge to the bottom edge of the second screen, and the sliding track of the sliding input is an arc.
Step 308: in response to the third input, the terminal device updates the N images in the image list to the M images.
Subsequently, after receiving the input Tx and the input Ty, the terminal device may cancel the operation of removing the image P1 from display, so that the second screen displays the image P2, the image P3, the image P1, the image P4, and the image P5, as shown in (10b) of fig. 10.
It should be noted that, in the video generation method provided by the embodiment of the present invention, during the process of selecting images and editing the frame order of the images, the user may cancel the result of the last operation through a specific input (for example, the third input). This improves the flexibility of the image selection process and further simplifies the steps of editing images and generating a video on the terminal device.
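For illustration only, the cancel behavior of steps 307 and 308 amounts to restoring the previous state of the image list, which is commonly implemented with a snapshot (undo) stack. The sketch below assumes that each edit pushes a copy of the list before applying a change; the class and method names are invented, not the patent's implementation.

// Kotlin sketch: undo via snapshots. Each edit first pushes the current
// list; a third input pops the last snapshot, restoring the N images to
// the earlier M images.
class ImageListEditor(initial: List<String>) {
    private var current: List<String> = initial
    private val history = ArrayDeque<List<String>>()

    fun edit(transform: (List<String>) -> List<String>) {
        history.addLast(current)  // snapshot before the edit
        current = transform(current)
    }

    fun undoLast(): List<String> {  // handler for the "third input"
        if (history.isNotEmpty()) current = history.removeLast()
        return current
    }
}

fun main() {
    val editor = ImageListEditor(listOf("P1", "P2", "P3", "P4", "P5"))
    editor.edit { it - "P1" }   // e.g., a first sub-input removed P1
    println(editor.undoLast())  // [P1, P2, P3, P4, P5] restored
}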
The video generation method according to the embodiment of the present invention will be described below with reference to the terminal device shown in fig. 11. The terminal device 11 shown in fig. 11 includes a first screen and a second screen, and further includes a receiving module 11a, a display module 11b, and a generating module 11c. The receiving module 11a is configured to receive a first input of a user in a state where a first image is displayed on the first screen; the display module 11b is configured to display, in response to the first input received by the receiving module 11a, an image list on the second screen, where the image list includes M images; the receiving module 11a is further configured to receive a second input of the user; and the generating module 11c is configured to generate, in response to the second input received by the receiving module 11a, a target video according to N images in the image list displayed by the display module 11b. The M images include the first image, the N images are the same as or different from the M images, and M and N are each an integer greater than 1.
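For readers who prefer code, the module split of fig. 11 (receiving, display, generating) can be expressed as small interfaces. This is a structural sketch only; every type and signature below is invented, and the string-based inputs merely stand in for real touch events.

// Kotlin sketch of the fig. 11 module structure; all names are invented.
interface DisplayModule { fun showImageList(images: List<String>) }
interface GeneratingModule { fun generateTargetVideo(images: List<String>): String }

class TerminalDevice(
    private val display: DisplayModule,
    private val generator: GeneratingModule
) {
    private var imageList: List<String> = emptyList()

    // Plays the role of the receiving module 11a.
    fun onUserInput(input: String) {
        when (input) {
            "first" -> {  // first input: populate and show the image list
                imageList = listOf("P1", "P2", "P3")
                display.showImageList(imageList)
            }
            "second" -> println(generator.generateTargetVideo(imageList))
        }
    }
}

fun main() {
    val device = TerminalDevice(
        display = object : DisplayModule {
            override fun showImageList(images: List<String>) = println("list: $images")
        },
        generator = object : GeneratingModule {
            override fun generateTargetVideo(images: List<String>) = "video(${images.size} frames)"
        }
    )
    device.onUserInput("first")
    device.onUserInput("second")
}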
Optionally, the N images are different from the M images; the second input comprises a first sub-input, and the first sub-input is used for triggering the updating of the M images in the image list into N images; the generating module 11c is specifically configured to update the M images in the image list into N images, and generate the target video according to the N images.
Optionally, the second input further includes N-1 second sub-inputs, where the N-1 second sub-inputs are used to determine an order of synthesizing the N images into a frame of the video; the generating module 11c is specifically configured to synthesize N images according to the sequence of the frames, and generate a target video.
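As a hedged illustration of synthesizing the N images according to the order of the frames, the sketch below simply sorts the selected images by a per-image frame index before handing them to an encoder; the encoder itself is stubbed as a string, since the patent does not specify a codec, and the names are invented.

// Kotlin sketch: order the N images by the frame sequence chosen through
// the N-1 second sub-inputs, then hand them to a (stubbed) encoder.
data class Frame(val imageName: String, val order: Int)

fun synthesize(frames: List<Frame>): String {
    val ordered = frames.sortedBy { it.order }.map { it.imageName }
    // A real device would feed these frames to a video encoder.
    return "targetVideo[" + ordered.joinToString(" -> ") + "]"
}

fun main() {
    println(synthesize(listOf(Frame("P3", 2), Frame("P1", 0), Frame("P2", 1))))
    // prints targetVideo[P1 -> P2 -> P3]
}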
Optionally, a pointer identifier is further displayed on the image list, and the image displayed on the first screen is an image indicated by the pointer identifier; the second sub-input is further used to trigger moving the pointer identification from a first position of the second image on the image list to a second position of the first image on the image list and updating the second image displayed on the first screen to the first image.
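One way to picture the pointer behavior just described, offered only as an assumption-laden sketch: the pointer identifier is an index into the image list, and whatever image it lands on is mirrored to the first screen.

// Kotlin sketch: a second sub-input moves the pointer identifier within
// the image list; the newly pointed image is shown on the first screen.
class PointerList(private val images: List<String>) {
    var pointer = 0
        private set

    fun moveTo(index: Int): String {
        pointer = index.coerceIn(images.indices)  // assumes a non-empty list
        return images[pointer]  // image to display on the first screen
    }
}

fun main() {
    val list = PointerList(listOf("P1", "P2", "P3"))
    println(list.moveTo(2))  // first screen now shows P3
}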
Optionally, the terminal device 11 further includes a first obtaining module, configured to obtain the M images according to the attribute information of the first image before the display module 11b displays the M images on the second screen, where M is a preset value and the arrangement order of the M images is associated with the attribute information of the M images.
Optionally, the terminal device 11 further includes a second obtaining module, configured to obtain the M images according to the attribute information of the first image and the first input before the display module 11b displays the M images on the second screen, where the value of M is associated with the input parameter of the first input; the input parameters comprise at least one of input duration, length of input sliding track and input pressure value; and the arrangement order of the M images is associated with the attribute information of the M images.
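Purely illustratively, the association between M and the input parameters could be realized by any monotone mapping from duration, track length, or pressure to a count. The scaling constants below are invented for the sketch; the patent only states that an association exists.

// Kotlin sketch: derive M from the first input's parameters. The scaling
// constants are invented for illustration.
fun selectCountM(
    durationMillis: Long = 0,
    trackLengthPx: Float = 0f,
    pressure: Float = 0f
): Int {
    val m = 2 + durationMillis / 300 + (trackLengthPx / 120).toInt() + (pressure * 3).toInt()
    return m.toInt().coerceAtLeast(2)  // M must be an integer greater than 1
}

fun main() {
    println(selectCountM(durationMillis = 900L))                  // longer press -> larger M
    println(selectCountM(trackLengthPx = 600f, pressure = 0.8f))  // longer, harder swipe -> larger M
}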
Optionally, the first sub-input is specifically used to trigger the removal of the image indicated by the pointer identifier from the second screen; the generating module 11c is specifically configured to remove the image indicated by the pointer identifier from the image list displayed on the second screen.
Optionally, the first sub-input is specifically configured to trigger addition of P images to the image list, where a value of P is associated with an input parameter of the first sub-input; the input parameters comprise at least one of input duration, input sliding track length and input pressure value, and P is a positive integer; a generating module 11c, configured to obtain P images, and arrange and display the P images and the M images according to the attribute information of the P images and the attribute information of the M images; the arrangement order of the P images is associated with the attribute information of the P images, the image list includes the arranged P images and M images, and N is M + P.
Optionally, the image contents of the P images are determined according to the input area of the first sub-input. The P images are arranged before the M images; or the P images are arranged after the M images; or a part of the P images are arranged before the M images and the remaining P images are arranged after the M images.
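The placement rule for the P added images (before, after, or split around the M images) reads naturally as a list splice. In the sketch below the choice is keyed off an assumed input-area enum; the enum values and the even split are illustrative assumptions, not claimed behavior.

// Kotlin sketch: splice the P new images before, after, or around the M
// existing images depending on where the first sub-input landed.
enum class InputArea { ABOVE_LIST, BELOW_LIST, ON_LIST }

fun splice(mImages: List<String>, pImages: List<String>, area: InputArea): List<String> =
    when (area) {
        InputArea.ABOVE_LIST -> pImages + mImages
        InputArea.BELOW_LIST -> mImages + pImages
        InputArea.ON_LIST -> {  // split the P images around the M images
            val half = pImages.size / 2
            pImages.take(half) + mImages + pImages.drop(half)
        }
    }

fun main() {
    val m = listOf("P1", "P2", "P3")
    println(splice(m, listOf("Q1", "Q2"), InputArea.ON_LIST))  // [Q1, P1, P2, P3, Q2]
}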
Optionally, the terminal device 11 further includes: an update module; the receiving module is further configured to receive a third input of the user after the generating module 11c updates the M images in the image list to N images; and the updating module is used for responding to the third input received by the receiving module and updating the N images in the image list into M images.
It should be noted that the terminal device provided in the embodiment of the present invention includes a first screen and a second screen. The terminal device receives a first input of a user in a state where a first image is displayed on the first screen, and displays, in response to the first input, an image list including M images on the second screen, where the M images are acquired by the terminal device at least according to the first image and include the first image. The terminal device then receives a second input of the user and may generate, in response to the second input, a target video according to N images in the image list, where the N images are the same as or different from the M images, and M and N are each an integer greater than 1. Based on this scheme, upon receiving the first input while the first screen displays the first image, the terminal device can select the M images at least according to the first image, without receiving a separate selection input for each of a plurality of images, and add them to the image list displayed on the second screen. The user can then select, from the image list, N images that are the same as or different from the M images and edit them into the target video. In addition, while browsing the first image on the first screen, the user can view on the second screen the images to be edited into the video, which helps the user select the desired images. Therefore, the user's steps for controlling the terminal device to select images are simplified, and accordingly the steps of editing images and generating a video on the terminal device are simplified.
The terminal device 11 provided in the embodiment of the present invention can implement each process implemented by the terminal device in the method embodiment, and is not described here again to avoid repetition.
Fig. 12 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention, where the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106 (including first and second screens), user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 12 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
A user input unit 107 for receiving a first input of a user in a state where a first image is displayed on a first screen; a display unit 106 for displaying an image list including M images on the second screen in response to the first input received by the user input unit 107; a user input unit 107 for receiving a second input from the user; a processor 110 for generating a target video from N images in the image list displayed by the display unit 106 in response to a second input received by the user input unit 107; wherein the M images include a first image, the N images are the same as or different from the M images, and M, N are integers greater than 1.
It should be noted that the terminal device provided in this embodiment includes a first screen and a second screen, and can achieve the same technical effects as described above with reference to fig. 11; to avoid repetition, details are not described here again.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process. Specifically, downlink data received from a base station is delivered to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and a switch key), a trackball, a mouse, and a joystick; details are not described here.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the touch operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 12 the touch panel 1071 and the display panel 1061 are shown as two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device; no limitation is imposed here.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110, where the computer program, when executed by the processor 110, implements each process of the foregoing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (13)

1. A video generation method applied to a terminal device, the terminal device comprising a first screen and a second screen, characterized in that the method comprises the following steps:
receiving a first input of a user in a state where a first image is displayed on the first screen; the first input is used for triggering the terminal equipment to start to select images;
in response to the first input, displaying an image list on the second screen, the image list including M images; the M images are acquired by the terminal equipment according to the attribute information of the first image; wherein the value of M is associated with the input parameter of the first input; the input parameters comprise at least one of input duration, input sliding track length and input pressure value;
receiving a second input of the user;
generating a target video according to the N images in the image list in response to the second input;
wherein the M images comprise the first image, the N images are the same as or different from the M images, and M, N are each an integer greater than 1.
2. The method of claim 1, wherein the N images are different from the M images;
the second input comprises a first sub-input, and the first sub-input is used for triggering the M images in the image list to be updated into the N images;
the generating the target video according to the N images in the image list includes:
and updating the M images in the image list into the N images, and generating the target video according to the N images.
3. The method of claim 2, wherein the second input further comprises N-1 second sub-inputs, wherein the N-1 second sub-inputs are used to determine an order in which the N images are combined into a frame of the video;
the generating the target video according to the N images includes:
and synthesizing the N images according to the sequence of the frames to generate the target video.
4. The method according to claim 3, wherein a pointer identifier is further displayed on the image list, and the image displayed on the first screen is an image indicated by the pointer identifier;
the second sub-input is further used for triggering the pointer identification to move from a first position of a second image on the image list to a second position of the first image on the image list, and updating the second image displayed on the first screen to the first image.
5. The method of claim 3, wherein prior to displaying the list of images on the second screen, the method further comprises:
acquiring the M images according to the attribute information of the first image;
and M is a preset numerical value, and the arrangement sequence of the M images is associated with the attribute information of the M images.
6. The method of claim 5, wherein prior to displaying the list of images on the second screen, the method further comprises:
acquiring the M images according to the attribute information of the first image and the first input;
wherein the value of M is associated with the input parameter of the first input; the input parameters comprise at least one of input duration, input sliding track length and input pressure value; the arrangement order of the M images is associated with attribute information of the M images.
7. The method according to claim 4, wherein the first sub-input is specifically used for triggering the removal of the image indicated by the pointer identification from the second screen;
the updating the M images in the image list to the N images comprises:
removing the image indicated by the pointer identification from the image list displayed on the second screen.
8. The method according to claim 2, wherein the first sub-input is specifically configured to trigger addition of P images to the image list, and a value of P is associated with an input parameter of the first sub-input; the input parameters comprise at least one of input duration, length of input sliding track and input pressure value, and P is a positive integer;
the updating the M images in the image list to the N images comprises:
acquiring the P images, and arranging and displaying the P images and the M images according to the attribute information of the P images and the attribute information of the M images;
the arrangement order of the P images is associated with the attribute information of the P images, the image list includes the arranged P images and the M images, and N is M + P.
9. The method according to claim 8, wherein the image content of the P images is determined according to the input area of the first sub-input;
the P images are arranged before the M images;
or, the P images are arranged after the M images;
or, a part of the P images are arranged before the M images, and the remaining images of the P images are arranged after the M images.
10. The method of claim 2, wherein after updating the M images in the image list to the N images, the method further comprises:
receiving a third input of the user;
in response to the third input, updating the N images in the image list to the M images.
11. A terminal device, characterized in that the terminal device comprises a first screen and a second screen, and further comprises: a receiving module, a display module, and a generating module;
the receiving module is used for receiving a first input of a user in a state that a first image is displayed on the first screen; the first input is used for triggering the terminal equipment to start to select images;
the display module is used for responding to the first input received by the receiving module and displaying an image list on the second screen, wherein the image list comprises M images; the M images are acquired by the terminal equipment according to the attribute information of the first image; wherein the value of M is associated with the input parameter of the first input; the input parameters comprise at least one of input duration, input sliding track length and input pressure value;
the receiving module is further used for receiving a second input of the user;
the generating module is used for responding to the second input received by the receiving module and generating a target video according to N images in the image list displayed by the display module;
wherein the M images comprise the first image, the N images are the same as or different from the M images, and M, N are each an integer greater than 1.
12. A terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video generation method according to any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video generation method according to any one of claims 1 to 10.
CN201810690884.6A 2018-06-28 2018-06-28 Video generation method and terminal equipment Active CN108881742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810690884.6A CN108881742B (en) 2018-06-28 2018-06-28 Video generation method and terminal equipment


Publications (2)

Publication Number Publication Date
CN108881742A (en) 2018-11-23
CN108881742B true CN108881742B (en) 2021-06-08

Family

ID=64296499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810690884.6A Active CN108881742B (en) 2018-06-28 2018-06-28 Video generation method and terminal equipment

Country Status (1)

Country Link
CN (1) CN108881742B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109769089B (en) * 2018-12-28 2021-03-16 维沃移动通信有限公司 Image processing method and terminal equipment
CN110022445B (en) * 2019-02-26 2022-01-28 维沃软件技术有限公司 Content output method and terminal equipment
CN109889757B (en) * 2019-03-29 2021-05-04 维沃移动通信有限公司 Video call method and terminal equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8600214B2 (en) * 2007-10-29 2013-12-03 Samsung Electronics Co., Ltd. Portable terminal and method for managing videos therein
CN105230005A (en) * 2013-05-10 2016-01-06 三星电子株式会社 Display unit and control method thereof
CN105791976A (en) * 2015-01-14 2016-07-20 三星电子株式会社 Generating And Display Of Highlight Video Associated With Source Contents
CN106961559A (en) * 2017-03-20 2017-07-18 维沃移动通信有限公司 The preparation method and electronic equipment of a kind of video
CN107948730A (en) * 2017-10-30 2018-04-20 百度在线网络技术(北京)有限公司 Method, apparatus, equipment and storage medium based on picture generation video


Also Published As

Publication number Publication date
CN108881742A (en) 2018-11-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant