CN114339073A - Video generation method and video generation device - Google Patents

Video generation method and video generation device

Info

Publication number
CN114339073A
CN114339073A
Authority
CN
China
Prior art keywords
target object
image
video
images
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210005180.7A
Other languages
Chinese (zh)
Inventor
郭越 (Guo Yue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210005180.7A priority Critical patent/CN114339073A/en
Publication of CN114339073A publication Critical patent/CN114339073A/en
Pending legal-status Critical Current

Abstract

The application discloses a video generation method and a video generation device, which belong to the field of electronic devices. The video generation method includes: displaying a first image in response to a first input; receiving a second input to a target object on the first image; and, in response to the second input, generating and displaying a video in which the target object moves on a background image, wherein the background image is the image formed after the target object is cut out of the first image.

Description

Video generation method and video generation device
Technical Field
The present application relates to the technical field of electronic devices, and in particular, to a video generation method and a video generation apparatus.
Background
During use of an electronic device, photos are static image information with limited viewing value. In particular, on an electronic device with a foldable screen, different display regions can show different photos, but in the related art these photos do not interact with one another or with the foldable-screen device. For the user, the result is merely a set of separate photos, so the enjoyment and interactivity offered by the device as a whole are poor.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video generation method and a video generation apparatus, which can solve the problem in the related art that objects in a photo are static and the photo therefore has limited viewing value.
In a first aspect, an embodiment of the present application provides a video generation method, including: displaying a first image in response to a first input; receiving a second input to a target object on the first image; and, in response to the second input, generating and displaying a video in which the target object moves on a background image, wherein the background image is an image formed after the target object is cut out of the first image.
In a second aspect, an embodiment of the present application provides a video generation apparatus, including: a display unit configured to display a first image in response to a first input; a receiving unit configured to receive a second input to a target object on the first image; and a video generation unit configured to generate, in response to the second input, a video in which the target object moves on a background image, wherein the background image is an image formed after the target object is cut out of the first image; the display unit is further configured to display the video.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the video generation method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the video generation method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the video generation method according to the first aspect.
In an embodiment of the present application, a first image that the user wants to view is displayed in response to a first input; a target object on the first image is selected in response to a second input; then a video in which the target object moves on the background image of the first image is generated and displayed. A static image can thus be converted into a dynamic photo, greatly improving the viewing value and interest of the first image.
Drawings
FIG. 1 is a flow diagram of a video generation method according to one embodiment of the present application;
FIG. 2 is a block diagram of a video generation apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a screen display of an electronic device according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of a screen display of an electronic device according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of a screen display of an electronic device according to an embodiment of the present application;
FIG. 6 is a fourth schematic diagram of a screen display of an electronic device according to an embodiment of the present application;
FIG. 7 is a fifth schematic diagram of a screen display of an electronic device according to an embodiment of the present application;
FIG. 8 is a sixth schematic diagram of a screen display of an electronic device according to an embodiment of the present application;
FIG. 9 is a first schematic block diagram of an electronic device according to an embodiment of the present application;
FIG. 10 is a second schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, objects distinguished by "first", "second", and the like are usually of one class, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and following related objects are in an "or" relationship.
The video generation method, the video generation apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
An embodiment of the first aspect of the present application provides a video generation method. As shown in fig. 1, the video generation method includes:
Step 102: in response to a first input, generating and displaying a first image;
Step 104: receiving a second input to the target object on the first image;
Step 106: in response to the second input, displaying a video in which the target object moves on a background image, wherein the background image is an image formed after the target object is cut out of the first image.
The video generation method provided by the embodiment of the application can display a first image in response to a first input; then receive a second input to the target object on the first image; and, in response to the second input, generate and display a video in which the target object moves on the background image. Specifically, the background image is an image formed after the target object is cut out of the first image; that is, the first image consists of the image of the target object and the background image.
Thus, as shown in figs. 4, 5 and 6, the video generation method proposed by the present application may first display, in response to a first input, a first image that the user wants to view; next, a target object on the first image may be selected in response to a second input; then, a video in which the target object moves on the background image of the first image is generated. A static image can thus be converted into a dynamic photo, greatly improving the viewing value and interest of the first image.
Further, in some possible embodiments, step 102 specifically includes: in response to the first input, selecting at least two second images, and stitching the at least two second images together to display the first image.
In this embodiment, the video generation method proposed by the present application selects at least two second images in response to the first input, and then stitches the selected images together to form and display the first image. Specifically, while the video in which the target object moves on the background image is displayed, the starting point and the end point of the target object's movement lie on different second images. In this way, the video generation method provided by the application achieves linkage between different photos, so that the target object moves from one second image to another.
More specifically, the video generation method proposed in the present application is applicable to a folding screen having at least two sub-screens. In response to the first input, at least two second images are selected and displayed on different sub-screens; the at least two second images are then stitched together to form the first image. The video generation method provided by the application can thus achieve linkage between multiple sub-screens, greatly improving the usability of the folding screen.
For example, as shown in fig. 3, in response to the first input, three second pictures may be selected (pictures 1 to 3 shown in fig. 3) and then stitched together to form the first image shown in figs. 4 to 6.
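To make the stitching step concrete, the following is a minimal sketch in Python (OpenCV and NumPy), assuming the second images are simply placed side by side from left to right; the function name, file names, and equal-height resizing are illustrative assumptions, not details taken from the patent.

```python
import cv2
import numpy as np

def stitch_images(paths):
    """Stitch several second images side by side into one first image."""
    images = [cv2.imread(p) for p in paths]
    height = min(img.shape[0] for img in images)
    # Normalize heights so the images can sit next to each other.
    images = [cv2.resize(img, (int(img.shape[1] * height / img.shape[0]), height))
              for img in images]
    return np.hstack(images)                              # left-to-right stitching

# Example: three photos selected from the album, one per sub-screen.
# first_image = stitch_images(["person.jpg", "scene1.jpg", "scene2.jpg"])
# cv2.imwrite("first_image.jpg", first_image)
```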
Further, in some possible embodiments, in step 106 an image of the target object and a background image are first generated; the image of the target object is then fused onto a plurality of background images to generate a plurality of composite pictures; the composite pictures are sorted according to the position of the target object within them to generate the video; finally, the sorted composite pictures are played to display the video.
In this embodiment, the present application first determines the target object on the first image in response to the second input, and then generates the image of the target object and the background image of the first image. Specifically, the image of the target object can be obtained by matting; after the first image is matted, the original position of the target object's image is compensated by interpolation from the surrounding pixels to obtain the background image.
Further, the image of the target object is fused onto the plurality of background images to obtain a plurality of composite pictures, with the image of the target object located at a different position in each composite picture. The application then sorts the composite pictures according to the position of the target object's image in each picture, and plays the sorted composite pictures in succession so that they form a video.
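The following sketch illustrates, under simplifying assumptions, how such composite pictures could be produced: a rectangular crop stands in for the matting step, OpenCV's inpainting stands in for the pixel compensation, and the cut-out is pasted at a list of fusion positions. All names and parameters are hypothetical, not the patent's implementation.

```python
import cv2
import numpy as np

def build_composites(first_image, box, positions):
    """first_image: BGR numpy array; box = (x, y, w, h) of the target object;
    positions = list of (x, y) fusion positions, assumed to keep the box inside the frame."""
    x, y, w, h = box
    target = first_image[y:y + h, x:x + w].copy()         # image of the target object

    # Background image: cut the target out and compensate the hole from surrounding pixels.
    mask = np.zeros(first_image.shape[:2], np.uint8)
    mask[y:y + h, x:x + w] = 255
    background = cv2.inpaint(first_image, mask, 3, cv2.INPAINT_TELEA)

    composites = []
    for px, py in positions:                               # one composite picture per fusion position
        frame = background.copy()
        frame[py:py + h, px:px + w] = target
        composites.append(frame)
    return composites
```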
Further, in some possible embodiments, when fusing the image of the target object onto the plurality of background images, the size of the target object and the moving path of the target object in the final video are obtained first; the fusion position of the target object's image on each background image is then determined based on that size and moving path. The image of the target object is thus fused at a different fusion position on each background image, ensuring that the image of the target object occupies a different position in each composite picture. In this way, playing the sorted composite pictures in succession produces the effect of the target object moving over the background image.
Further, as shown in figs. 4, 5 and 6, in some possible embodiments, among the sorted composite pictures the position of the target object's image in the first composite picture corresponds to the starting position of the target object in the video; that is, the fusion position on the first background image corresponds to the starting position of the target object in the video. Between two adjacent composite pictures, the position of the target object's image in the latter picture is shifted along the moving path, relative to its position in the former picture, by a distance equal to the size of the target object; that is, between two adjacent background images, the fusion position on the latter is shifted along the moving path by the size of the target object relative to the fusion position on the former. In addition, the position of the target object's image in the last composite picture corresponds to the end position of the target object in the video; that is, the fusion position on the last background image corresponds to the end position of the target object in the video.
Therefore, as shown in figs. 4, 5 and 6, when two adjacent composite pictures are played in succession during video playback, the target object appears to move along the moving path by a distance equal to its own size. When all composite pictures are played in succession, the target object appears to move continuously from the starting position to the end position, which improves the continuity of the target object's movement in the video.
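As a rough illustration of this placement rule, the sketch below computes fusion positions that start at the starting position, advance by the target object's size along the moving path, and end at the end position; the straight-line path and the example coordinates are assumptions made for the illustration.

```python
import math

def fusion_positions(start, end, target_size):
    """start/end = (x, y) pixel coordinates; target_size = size of the target object along the path."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    if length == 0:
        return [(round(start[0]), round(start[1]))]
    ux, uy = dx / length, dy / length                      # unit vector of the moving path
    steps = max(1, int(length // target_size))             # one step = the target object's own size
    positions = [(start[0] + ux * target_size * i,
                  start[1] + uy * target_size * i) for i in range(steps)]
    positions.append(end)                                  # last fusion position = end position
    return [(round(px), round(py)) for px, py in positions]

# fusion_positions((100, 400), (1500, 400), target_size=120)
```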
Further, in some possible embodiments, as shown in figs. 4, 5 and 6, between two adjacent composite pictures the position of the target object's image in the latter picture is shifted along the moving path, relative to its position in the former picture, by a distance equal to the size of the target object. Therefore, when fusing the image of the target object onto the plurality of background images, the size of the target object and its moving distance along the moving path can be obtained first, and the number of composite pictures to be generated can then be determined from that size and moving distance.
In this way, on the basis of the target object advancing toward the end position by its own size between two consecutive composite pictures, it can also be ensured that the target object moves exactly from the starting position to the end position once all the composite pictures have been played.
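The counting this implies can be written in one line: if the target object advances by its own size A between adjacent composite pictures, covering a moving distance D needs roughly D / A composite pictures plus a final one at the end position. The snippet below is a sketch of that arithmetic, not the patent's exact formula.

```python
def composite_count(moving_distance, target_size):
    # One composite per size-sized step, plus a final composite at the end position.
    return max(1, round(moving_distance / target_size)) + 1

# composite_count(1400, 120)  # -> 13 composite pictures for this hypothetical path
```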
Further, in some possible embodiments, as shown in figs. 4, 5 and 6, the plurality of composite pictures are sorted according to the following rule. First, a first position of the target object in the first image is obtained, and a second position corresponding to that first position is determined in the composite pictures. Then, third positions of the target object's image in the respective composite pictures are obtained, and the distances between those third positions and the second position are calculated. The composite pictures are then sorted in order of increasing distance, and the sorted composite pictures are finally played. After sorting, the difference between the distances of two adjacent composite pictures equals the size of the target object.
Specifically, as shown in figs. 4, 5 and 6, the first position and the second position can be regarded as the initial position from which the target object moves in the video, and each third position is the position of the target object in a composite picture. The distance between a third position and the second position is therefore the distance the target object has moved, in that composite picture, relative to the first position. Sorting the composite pictures in order of increasing distance thus ensures that, when the sorted composite pictures are subsequently played in succession, the target object moves from the initial position to the end position, producing the moving and dynamic effect of the target object.
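A minimal sketch of this ordering rule follows; target_pos_of is a hypothetical helper that returns the position of the target object's image inside a composite picture, and is not something defined in the patent.

```python
import math

def sort_composites(composites, second_position, target_pos_of):
    """second_position: the point in the composites corresponding to the target object's
    first position in the first image; target_pos_of(frame) -> (x, y) third position."""
    def moved_distance(frame):
        tx, ty = target_pos_of(frame)
        return math.hypot(tx - second_position[0], ty - second_position[1])
    return sorted(composites, key=moved_distance)          # small to large = start to end
```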
Further, in some possible embodiments, as shown in figs. 4, 5 and 6, between two adjacent composite pictures the position of the target object's image in the latter picture is shifted along the moving path, relative to its position in the former picture, by a distance equal to the size of the target object. This guarantees that, when two adjacent composite pictures are played in succession, the target object advances along the moving path by its own size. On the one hand, this prevents the target object from appearing to overlap or be occluded between two consecutively played composite pictures; on the other hand, it keeps the target object moving continuously, thereby ensuring the moving effect of the target object in the video.
Further, in some possible embodiments, as shown in fig. 7, the video generation method proposed by the present application allows the user to set the moving path of the target object. Specifically, the method can determine the moving path of the target object in the video in response to the second input; the video in which the target object moves along that moving path on the background image can then be displayed in response to the first input.
In particular, the present application may use a default moving path (for example, moving from one side of the first image to the other). If the user does not set a moving path, the video in which the target object moves along the default path on the background image is displayed. If the user does set a moving path, as shown in fig. 7, the application sets the new moving path in response to the second input and displays the video in which the target object moves along that new path on the background image. The moving path of the target object in the video can thus be adjusted to meet user-defined requirements.
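One way to model such a user-chosen direction is sketched below: the direction control is reduced to an angle, and the end position is obtained by walking from the starting position along that angle until the image border is reached. This is an illustrative assumption about the control, not the patent's implementation.

```python
import math

def end_position(start, angle_deg, image_w, image_h):
    """Project the end position from the start point along the chosen angle
    until the next step would leave the image."""
    ux, uy = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    x, y, step = float(start[0]), float(start[1]), 1.0
    while 0 <= x + ux * step < image_w and 0 <= y + uy * step < image_h:
        x, y = x + ux * step, y + uy * step
    return (round(x), round(y))

# end_position((200, 300), angle_deg=30, image_w=3000, image_h=1200)
```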
Further, in some possible embodiments, as shown in fig. 8, the present application allows the target object to be zoomed while the video in which it moves on the background image is displayed. That is, before displaying the video in which the target object moves on the background image, the application scales the target object during its movement in response to a third input.
Specifically, as shown in fig. 8, before displaying the video in which the target object moves on the background image, the application may determine a scaling ratio of the target object in response to the third input; the per-picture scaling of the target object is then determined from the number of composite pictures in the video, so that the target object is scaled gradually as it moves.
For example, suppose the target object is to be enlarged tenfold and the total number of composite pictures is ten. The tenfold scaling is then distributed evenly over the ten composite pictures, so that between two adjacent composite pictures the image of the target object in the latter picture is one scale step larger than in the former. Correspondingly, if the target object is to be reduced tenfold and the total number of composite pictures is ten, the scaling is likewise distributed evenly over the ten composite pictures, and the image of the target object in each successive composite picture is one scale step smaller than in the previous one.
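The even distribution of the scaling can be sketched as a per-picture schedule; the linear interpretation below (scales 1x, 2x, ..., 10x for a tenfold enlargement over ten pictures) is one reading of the example above, offered as an assumption rather than the patent's exact rule.

```python
def frame_scales(total_scale, frame_count):
    """Per-picture scale factors, distributed evenly from 1x to total_scale."""
    if frame_count < 2:
        return [total_scale]
    step = (total_scale - 1) / (frame_count - 1)
    return [1 + step * i for i in range(frame_count)]

# frame_scales(10, 10)   # -> [1.0, 2.0, 3.0, ..., 10.0]  (tenfold enlargement)
# frame_scales(0.1, 10)  # -> [1.0, 0.9, 0.8, ..., 0.1]   (tenfold reduction)
```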
In the video generation method provided by the embodiments of the present application, the execution subject may be a video generation apparatus, or a control module in the video generation apparatus for executing the video generation method. The video generation apparatus provided in the embodiments of the present application is described below, taking as an example the case where the video generation apparatus executes the video generation method.
As shown in fig. 2, an embodiment of the second aspect of the present application provides a video generation apparatus 200, including: a display unit 202 configured to display a first image in response to a first input; a receiving unit 204 configured to receive a second input to the target object on the first image; and a video generation unit 206 configured to generate, in response to the second input, a video in which the target object moves on a background image, wherein the background image is an image formed after the target object is cut out of the first image; the display unit 202 is further configured to display the video.
In the video generation apparatus 200 according to the embodiment of the present application, the display unit 202 displays a first image in response to a first input; the receiving unit 204 then receives a second input to the target object on the first image; and the video generation unit 206 generates, in response to that second input, a video in which the target object moves on the background image, so that the display unit 202 can display the video. Specifically, the background image is an image formed after the target object is cut out of the first image; that is, the first image consists of the image of the target object and the background image.
Thus, as shown in figs. 4, 5 and 6, the video generation apparatus 200 of the present application may first display, in response to a first input, a first image that the user wants to view; next, a target object on the first image may be selected in response to a second input; then, a video in which the target object moves on the background image of the first image is generated. A static image can thus be converted into a dynamic photo, greatly improving the viewing value and interest of the first image.
Further, in some possible embodiments, as shown in fig. 3, the display unit 202 is specifically configured to select at least two second images in response to the first input, and to stitch the at least two second images together to display the first image.
In this embodiment, as shown in fig. 3, the display unit 202 selects at least two second images in response to the first input and then stitches the selected images together to form and display the first image. Specifically, while the video in which the target object moves on the background image is displayed, the starting point and the end point of the target object's movement lie on different second images. In this way, the video generation apparatus 200 of the present application achieves linkage between different photos, so that the target object moves from one second image to another.
More specifically, the video generation apparatus 200 proposed in the present application is applicable to a folding screen having at least two sub-screens. The display unit 202 selects at least two second images in response to the first input and displays different second images on different sub-screens; the display unit 202 then stitches the at least two second images together to form the first image. The video generation apparatus provided by the application can thus achieve linkage between multiple sub-screens, greatly improving the usability of the folding screen.
Further, in some possible embodiments, the video generation unit 206 first generates the image of the target object and the background image; the video generation unit 206 then fuses the image of the target object onto a plurality of background images to generate a plurality of composite pictures, and sorts the composite pictures according to the position of the target object in each picture to generate the video; the display unit 202 then plays the sorted composite pictures to display the video.
In this embodiment, the video generation unit 206 first determines the target object on the first image according to the second input, and then generates the image of the target object and the background image of the first image. Specifically, the image of the target object can be obtained by matting; after the video generation unit 206 mats the first image, it compensates the original position of the target object's image by interpolation to obtain the background image.
Further, the image of the target object is fused onto the plurality of background images to obtain a plurality of composite pictures, with the image of the target object located at a different position in each composite picture. The video generation unit 206 then sorts the composite pictures according to the position of the target object's image in each picture, and the display unit 202 plays the sorted composite pictures in succession so that they form a video.
Further, in some possible embodiments, when the video generation unit 206 fuses the image of the target object onto the plurality of background images, it first obtains the size of the target object and the moving path of the target object in the final video; it then determines the fusion position of the target object's image on each background image based on that size and moving path. The image of the target object is thus fused at a different fusion position on each background image, ensuring that the image of the target object occupies a different position in each composite picture. In this way, an effect of the target object moving over the background image is produced while the display unit 202 plays the sorted composite pictures in succession.
Further, in some possible embodiments, as shown in figs. 4, 5 and 6, among the sorted composite pictures the position of the target object's image in the first composite picture corresponds to the starting position of the target object in the video; that is, the fusion position on the first background image corresponds to the starting position of the target object in the video. Between two adjacent composite pictures, the position of the target object's image in the latter picture is shifted along the moving path, relative to its position in the former picture, by a distance equal to the size of the target object; that is, between two adjacent background images, the fusion position on the latter is shifted along the moving path by the size of the target object relative to the fusion position on the former. In addition, the position of the target object's image in the last composite picture corresponds to the end position of the target object in the video; that is, the fusion position on the last background image corresponds to the end position of the target object in the video.
Therefore, when the display unit 202 plays the video and two adjacent composite pictures are played in succession, the target object appears to move along the moving path by a distance equal to its own size. When all composite pictures are played in succession, the display unit 202 produces the effect of the target object moving continuously from the starting position to the end position, which improves the continuity of the target object's movement in the video.
Further, in some possible embodiments, as shown in figs. 4, 5 and 6, between two adjacent composite pictures the position of the target object's image in the latter picture is shifted along the moving path, relative to its position in the former picture, by a distance equal to the size of the target object. Therefore, when fusing the image of the target object onto the plurality of background images, the size of the target object and its moving distance along the moving path can be obtained first, and the number of composite pictures to be generated can then be determined from that size and moving distance.
In this way, on the basis of the target object advancing toward the end position by its own size between two consecutive composite pictures, it can also be ensured that the target object moves exactly from the starting position to the end position once all the composite pictures have been played.
Further, in some possible embodiments, as shown in figs. 4, 5 and 6, the video generation unit 206 sorts the plurality of composite pictures according to the following rule. First, the video generation unit 206 obtains a first position of the target object in the first image and determines a second position corresponding to that first position in the composite pictures. Then, the video generation unit 206 obtains third positions of the target object's image in the respective composite pictures and calculates the distances between those third positions and the second position. The video generation unit 206 then sorts the composite pictures in order of increasing distance, and the display unit 202 finally plays the sorted composite pictures. After sorting, the difference between the distances of two adjacent composite pictures equals the size of the target object.
Specifically, the first position and the second position can be regarded as the initial position from which the target object moves in the video, and each third position is the position of the target object in a composite picture. The distance between a third position and the second position is therefore the distance the target object has moved, in that composite picture, relative to the first position. By sorting the composite pictures in order of increasing distance, the video generation unit 206 ensures that, when the sorted composite pictures are played in succession, the target object moves from the initial position to the end position, producing the moving and dynamic effect of the target object.
Further, in some possible embodiments, as shown in figs. 4, 5 and 6, between two adjacent composite pictures the position of the target object's image in the latter picture is shifted along the moving path, relative to its position in the former picture, by a distance equal to the size of the target object. This guarantees that, when two adjacent composite pictures are played in succession, the target object advances along the moving path by its own size. On the one hand, this prevents the target object from appearing to overlap or be occluded between two consecutively played composite pictures; on the other hand, it keeps the target object moving continuously, thereby ensuring the moving effect of the target object in the video.
Further, in some possible embodiments, as shown in fig. 7, the video generation apparatus 200 proposed by the present application allows the user to set the moving path of the target object. Specifically, the video generation unit 206 can determine the moving path of the target object in the video in response to the second input; the video generation unit 206 may then display the video in which the target object moves along that moving path on the background image in response to the first input.
In particular, the present application may use a default moving path (for example, moving from one side of the first image to the other). If the user does not set a moving path, the video in which the target object moves along the default path on the background image is displayed. If the user does set a moving path, the application sets the new moving path in response to the second input and displays the video in which the target object moves along that new path on the background image. The moving path of the target object in the video can thus be adjusted to meet user-defined requirements.
Further, in some possible embodiments, as shown in fig. 8, the present application allows the target object to be zoomed while the video in which it moves on the background image is displayed. That is, before displaying the video in which the target object moves on the background image, the video generation unit 206 scales the target object during its movement in response to a third input.
Specifically, before displaying the video in which the target object moves on the background image, the video generation unit 206 may determine a scaling ratio of the target object in response to the third input; the video generation unit 206 then determines the per-picture scaling of the target object from the number of composite pictures in the video, so that the target object is scaled gradually as it moves.
For example, suppose the target object is to be enlarged tenfold and the total number of composite pictures is ten. The tenfold scaling is then distributed evenly over the ten composite pictures, so that between two adjacent composite pictures the image of the target object in the latter picture is one scale step larger than in the former. Correspondingly, if the target object is to be reduced tenfold and the total number of composite pictures is ten, the scaling is likewise distributed evenly over the ten composite pictures, and the image of the target object in each successive composite picture is one scale step smaller than in the previous one.
The video generation apparatus 200 in the embodiments of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS) device, a personal computer (PC), a television (TV), an automated teller machine, a self-service machine, or the like. The embodiments of the present application are not specifically limited in this respect.
The video generation apparatus 200 in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The video generation apparatus 200 provided in this embodiment of the application can implement each process implemented by the method embodiment shown in fig. 1, and is not described here again to avoid repetition.
In summary, the present application takes advantage of the larger field of view offered by the folding screen: a plurality of second images are displayed on the screen simultaneously to form a first image, and the first image is then processed to form a linked video.
Specifically, in use, the user first selects different second images from the album, and these second images are displayed on different sub-screens. For example, a photo containing a person is selected and placed on the first screen, two second images of scenery are selected and placed on the second and third screens respectively, and the three second images are stitched together.
The user then selects, on the first screen, the target object to be processed and moved, and the width of the target object is calculated automatically; denote this width as A. Taking the center of the target object as the starting point, the object is moved by a distance A to its next position; this planar movement is achieved by cutting out the target object and fusing it at the calculated fusion position. The original position of the target object is compensated by interpolating from the surrounding pixels, and the processed and fused second image is saved. By analogy, the target object is cut out and moved to the next position a distance A away (taking the previous position as the origin), and the result is saved again. Each saved composite picture is formed by the three scenes stitched together, and the object is cut out and moved in the same way until it reaches the edge of the third screen. The result is saved after every cut-and-move step, and all the saved composite pictures are played in sequence to form a video. The final effect is that the target object on the first screen moves across to the third screen, as if walking off the screen.
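The final "play all saved composite pictures in sequence" step amounts to writing the sorted composites into a video file; a minimal OpenCV sketch follows, with the file names, frame count, and frame rate as hypothetical values chosen for the illustration.

```python
import cv2

frames = [cv2.imread(f"composite_{i:02d}.jpg") for i in range(13)]   # the sorted composites
h, w = frames[0].shape[:2]
writer = cv2.VideoWriter("generated.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), 10, (w, h))
for frame in frames:
    writer.write(frame)                                   # play order = sorted order
writer.release()
```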
As shown in fig. 3, the user first opens the album, selects three second images in sequence, and taps "composite video" in the menu below; the selected pictures are then displayed across the folding screen. As shown in fig. 3, the user then taps the target object to be moved; after the target object is tapped, it is highlighted with a selection frame so that the user can confirm that the correct object has been selected. Once the user confirms the selection, tapping the button below the middle screen starts the composition process.
Further, the composited video is shown in fig. 5 and fig. 6: the dotted line indicates the original position from which the target object has been cut out, and when the video is played the object moves in sequence from the first screen to the edge of the third screen.
Further, the present application can give the user more interactive control. For example, the user may set the moving path of the target object: as shown in fig. 7, after selecting the target object the user may choose the direction in which it is to move, and by sliding the direction line on the semicircular control the target object is made to move along the angle of that line, finally reaching the position shown in fig. 8. This gives the user control over the moving direction and makes the generated videos more varied.
Further, in addition to controlling the direction, the present application may add a size transformation. For example, after selecting the target object, the user may choose the final size of the target object (for instance, an icon may be provided and the user may slide it to select the size), so that the size of the target object changes gradually from one composite picture to the next as it moves, finally reaching the size set by the user.
Optionally, as shown in fig. 9, an embodiment of the present application further provides an electronic device 900, including a processor 902, a memory 904, and a program or instructions stored in the memory 904 and executable on the processor 902. When the program or instructions are executed by the processor 902, each process of the above video generation method embodiment is implemented with the same technical effect; details are not repeated here to avoid redundancy.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (such as a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so that charging, discharging, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine certain components, or arrange components differently; details are not repeated here.
Wherein, the display unit 1006 is used for responding to the first input and displaying the first image;
a radio frequency unit 1001 for receiving a second input to the target object on the first image;
the display unit 1006 is further configured to generate and display a video in which the target object moves on the background image in response to the second input, wherein the background image is an image formed after the target object is cut out from the first image.
Optionally, the display unit 1006 is further configured to select at least two second images in response to the first input, and perform stitching processing on the at least two second images to display the first image.
Optionally, the display unit 1006 is further configured to generate an image of the target object and a background image; fuse the images of the target object onto the plurality of background images to generate a plurality of composite pictures; sort the composite pictures according to the position of the image of the target object in each composite picture; and play the sorted composite pictures to display the video.
Optionally, the display unit 1006 is further configured to determine a fusion position of the image of the target object in the background image according to the size of the target object and the moving path of the target object in the video; and fusing the images of the target objects to the fusion positions of the plurality of background images, respectively.
Optionally, in two adjacent background images, the fusion position of the latter background image is moved by a distance of the size of the target object relative to the fusion position of the former background image along the moving path; the fusion position of the first background image corresponds to the starting position of the target object in the video; the fusion position of the last background image corresponds to the end position of the target object in the video.
Optionally, the display unit 1006 is further configured to determine the number of times of fusion between the image of the target object and the background image according to the size of the target object and the moving path.
Optionally, a first position of the target object in the first image is acquired, and a second position corresponding to the first position in the composite picture is determined; respectively acquiring third positions of the images of the target object in the plurality of synthesized pictures, and calculating the distances between the plurality of third positions and the second position; and sorting the plurality of synthesized pictures in order of the distances from small to large.
Optionally, the display unit 1006 is further configured to determine a moving path of the target object in the video in response to a second input; and displaying the video of the target object moving along the moving path on the background image.
Optionally, the display unit 1006 is further configured to perform a zooming process on the target object during the moving process in response to a third input.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video generation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device of the above embodiment. Readable storage media include computer-readable storage media, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical disks.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the video generation method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method of video generation, comprising:
displaying a first image in response to a first input;
receiving a second input to a target object on the first image;
in response to the second input, generating and displaying a video in which the target object moves on a background image, wherein the background image is an image formed after the target object is cut out of the first image.
2. The video generation method according to claim 1, wherein the generating and displaying a video in which the target object moves on a background image in response to the second input includes:
generating an image of the target object and the background image;
fusing the images of the target object to a plurality of background images respectively to generate a plurality of composite pictures;
sorting the composite pictures according to the position of the image of the target object in the composite pictures to generate the video; and
playing the sorted composite pictures to display the video.
3. The video generation method according to claim 2, wherein the step of fusing the image of the target object to the plurality of background images respectively specifically includes:
determining the fusion position of the image of the target object in the background image according to the size of the target object and the moving path of the target object in the video; and
fusing the images of the target object to the fusion positions of the plurality of background images, respectively.
4. The video generation method according to claim 2, wherein the ordering rule of the composite pictures is:
acquiring a first position of the target object in the first image, and determining a second position corresponding to the first position in the composite pictures;
respectively acquiring third positions of the images of the target object in the plurality of composite pictures, and calculating the distances between the plurality of third positions and the second position; and
sorting the plurality of composite pictures in order of the distances from small to large, wherein the difference between the distances in two adjacent composite pictures is equal to the size of the target object.
5. The video generation method according to any one of claims 1 to 4, wherein the generating and displaying a video in which the target object moves on a background image in response to the second input includes:
determining a movement path of the target object in the video in response to the second input; and
generating and displaying the video of the target object moving along the moving path on a background image.
6. A video generation apparatus, comprising:
a display unit for displaying a first image in response to a first input;
a receiving unit for receiving a second input to a target object on the first image;
a video generation unit configured to generate a video in which the target object moves on a background image in response to the second input, wherein the background image is an image formed after the target object is cut out of the first image;
the display unit is further configured to display the video.
7. The video generating apparatus according to claim 6,
the video generation unit is specifically configured to generate an image of the target object and the background image; fuse the images of the target object to a plurality of background images respectively to generate a plurality of composite pictures; and sort the composite pictures according to the position of the image of the target object in the composite pictures;
the display unit is specifically configured to play the sorted composite pictures to display the video.
8. The video generating apparatus according to claim 7,
the video generation unit is specifically configured to determine, according to the size of the target object and a moving path of the target object in the video, a fusion position of an image of the target object in the background image; and fusing the images of the target object to the fusion positions of the plurality of background images, respectively.
9. The video generation apparatus according to claim 7, wherein the ordering rule of the composite pictures is:
acquiring a first position of the target object in the first image, and determining a second position corresponding to the first position in the composite pictures;
respectively acquiring third positions of the images of the target object in the plurality of composite pictures, and calculating the distances between the plurality of third positions and the second position; and
sorting the plurality of composite pictures in order of the distances from small to large, wherein the difference between the distances in two adjacent composite pictures is equal to the size of the target object.
10. The video generating apparatus according to any one of claims 6 to 9,
the video generation unit is further used for responding to the second input and determining a moving path of the target object in the video;
the display unit is further configured to display the video in which the target object moves along the movement path on a background image.
CN202210005180.7A 2022-01-04 2022-01-04 Video generation method and video generation device Pending CN114339073A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210005180.7A CN114339073A (en) 2022-01-04 2022-01-04 Video generation method and video generation device

Publications (1)

Publication Number Publication Date
CN114339073A true CN114339073A (en) 2022-04-12

Family

ID=81023916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210005180.7A Pending CN114339073A (en) 2022-01-04 2022-01-04 Video generation method and video generation device

Country Status (1)

Country Link
CN (1) CN114339073A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111654755A (en) * 2020-05-21 2020-09-11 维沃移动通信有限公司 Video editing method and electronic equipment
CN113114841A (en) * 2021-03-26 2021-07-13 维沃移动通信有限公司 Dynamic wallpaper acquisition method and device
CN113570609A (en) * 2021-07-12 2021-10-29 维沃移动通信有限公司 Image display method and device and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination