CN108184060A - Method and terminal device for generating a video from pictures - Google Patents

Method and terminal device for generating a video from pictures

Info

Publication number
CN108184060A
CN108184060A · CN201711488098.XA · CN201711488098A
Authority
CN
China
Prior art keywords
picture
key frame
common
relative position
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711488098.XA
Other languages
Chinese (zh)
Inventor
许佳丽
吴忠兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aiyouwei Software Development Co Ltd
Original Assignee
Shanghai Aiyouwei Software Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aiyouwei Software Development Co Ltd filed Critical Shanghai Aiyouwei Software Development Co Ltd
Priority to CN201711488098.XA priority Critical patent/CN108184060A/en
Publication of CN108184060A publication Critical patent/CN108184060A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof


Abstract

The present invention provides a method and a terminal device for generating a video from pictures. Based on the change in the relative position of a photographed subject common to an acquired first picture and second picture, the method supplements intermediate pictures between the first picture and the second picture, and generates a video from the first key frame corresponding to the first picture, the intermediate frames corresponding to the intermediate pictures, and the second key frame corresponding to the second picture. Each intermediate picture contains the photographed subject at the position corresponding to a different time point, so that the first picture and the second picture can form a video in which the subject moves continuously. The terminal device is configured to perform the following operations: obtain a first picture and a second picture; take the first picture as a first key frame and the second picture as a second key frame; detect the change in the relative position of the subject common to the first picture and the second picture; supplement intermediate frames between the first key frame and the second key frame based on that change; and generate a video based on the first key frame, the intermediate frames, and the second key frame.

Description

Method and terminal device for generating a video from pictures
Technical Field
The invention relates to the technical field of intelligent terminals, and in particular to a method and a terminal device for generating a video from pictures.
Background
With the popularization of intelligent terminals and cameras, more and more users record their lives with videos and pictures. For some favorite pictures, users want to turn them into a video for playback; to meet this need, a simple slideshow-style picture-to-video technique emerged. The user combines multiple pictures or photos (hereinafter collectively referred to as pictures) stored locally or on a cloud server into a video and shares it on social platforms. However, a video generated in this way is merely an existing set of photos played at regular time intervals; it cannot form a video in which the photographed subject performs a continuous series of motions.
Disclosure of Invention
(I) Objects of the invention
The invention aims to provide a method for generating a video from pictures, which can supplement intermediate pictures between a first picture and a second picture based on the change in the relative position of a photographed subject common to the two pictures, and generate a video from the first picture, the intermediate pictures, and the second picture.
(II) technical scheme
In order to solve the above problem, the present invention provides a method for generating a video from a picture, comprising the following steps:
acquiring a first picture and a second picture;
taking the first picture as a first key frame and the second picture as a second key frame;
detecting the relative position change of a common shot object in the first picture and the second picture;
supplementing an intermediate frame between the first key frame and the second key frame based on the relative position change of the shot object;
generating a video based on the first key frame, the intermediate frame, and the second key frame.
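The five claimed steps can be sketched as a minimal pipeline. This is an illustrative sketch only: representing pictures as 2-D lists of pixel values, detecting a change by any pixel difference, and averaging a single intermediate frame are all simplifying assumptions, and none of the function names come from the patent.

```python
def interpolate(first, second):
    # Simplified stand-in for step S104: one frame of per-pixel averages
    # (the patent interpolates the subject's position, not raw pixels).
    return [[(pa + pb) / 2 for pa, pb in zip(ra, rb)]
            for ra, rb in zip(first, second)]

def generate_video(first_picture, second_picture):
    # Steps S101-S102: the acquired pictures serve directly as key frames.
    first_key, second_key = first_picture, second_picture
    # Step S103: detect a relative-position change (here: any pixel differs).
    changed = first_key != second_key
    # Step S104: supplement intermediate frames only when a change is detected.
    intermediates = [interpolate(first_key, second_key)] if changed else []
    # Step S105: the ordered frame sequence stands in for the generated video.
    return [first_key, *intermediates, second_key]
```

For two one-row pictures `[[0, 0]]` and `[[2, 2]]`, the sketch yields the two key frames with one averaged frame between them.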
In the foregoing technical solution, further, before the step of detecting a relative position change of a common subject in the first picture and the second picture, the method includes:
identifying first shot content in a first picture;
identifying second shot content in the second picture;
judging whether a common shot object exists in the first picture and the second picture or not based on the first shot content and the second shot content;
and if so, executing the step of detecting the relative position change of the common shot object in the first picture and the second picture.
In the foregoing technical solution, further, the method for determining whether a common photographed subject exists in the first picture and the second picture based on the first shot content and the second shot content includes:
similarity calculation is carried out on the first shot content and the second shot content, and a similarity value of the first shot content and the second shot content is obtained;
judging whether the similarity value is within a preset similarity threshold range or not;
if the similarity value is within a preset similarity threshold range, judging that the first picture and the second picture have a common shot object;
otherwise, the first picture and the second picture are judged to have no common shot object.
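A minimal sketch of this decision, assuming the similarity value is already computed. The patent specifies neither the similarity metric nor the threshold range, so the bounds below are invented placeholders.

```python
def has_common_subject(similarity, low=0.6, high=1.0):
    # A common photographed subject is assumed to exist when the similarity
    # value falls inside the preset threshold range [low, high].
    return low <= similarity <= high
```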
In the foregoing technical solution, further, the method for supplementing an intermediate frame between the first key frame and the second key frame based on the detected relative position change of the common photographed subject in the first picture and the second picture includes:
reading a first shooting time t1 of the first picture;
reading a second shooting time tn of the second picture;
calculating the shooting time interval Δt of the first picture and the second picture based on t1 and tn;
identifying a first position s1 of the photographed subject in the first picture;
identifying a second position sn of the subject in the second picture;
calculating, based on s1 and sn, the distance Δs by which the subject's position changes between the first picture and the second picture;
calculating the average motion speed v of the common photographed subject between the first picture and the second picture;
dividing the time period from t1 to tn into a plurality of target time periods using time points ti, i ∈ [2, n-1];
calculating, from the average motion speed v, the position si of the common subject at each time point ti;
generating an intermediate picture for each time point ti from its corresponding common-subject position si;
and taking the intermediate pictures as intermediate frames.
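Under the constant-speed model above, the intermediate positions si at the time points ti could be computed as follows. The even spacing of the time points and the one-dimensional position are simplifying assumptions for illustration.

```python
def intermediate_positions(t1, tn, s1, sn, n_intermediate):
    # Average motion speed of the common subject: v = delta_s / delta_t.
    v = (sn - s1) / (tn - t1)
    # Divide t1..tn into (n_intermediate + 1) equal target time periods.
    dt = (tn - t1) / (n_intermediate + 1)
    # Position at each interior time point t_i: s_i = s1 + v * (t_i - t1).
    return [(t1 + k * dt, s1 + v * k * dt)
            for k in range(1, n_intermediate + 1)]
```

For t1 = 0, tn = 4, s1 = 0, sn = 8 and three intermediate frames, the subject moves at v = 2 and occupies positions 2, 4, and 6 at times 1, 2, and 3.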
In the foregoing technical solution, further, the method for generating a video based on the first key frame, the intermediate frame, and the second key frame includes:
and sequencing the first key frame, the intermediate frame, and the second key frame according to a preset rule, and playing them at a certain speed to generate a video.
In the above technical solution, further, before the step of detecting a relative position change of a common subject in the first picture and the second picture, the method further includes:
acquiring the value of the shooting time interval delta t of the first picture and the second picture;
judging whether the value of the delta t is within a preset time interval range or not;
and if the value of the delta t is within a preset time interval range, executing the step of detecting the relative position change of the common shot object in the first picture and the second picture.
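This guard might look as follows; the preset time-interval range is not given in the patent, so the ten-second bound below is purely illustrative.

```python
def interval_acceptable(t1, tn, max_gap=10.0):
    # Proceed to relative-position-change detection only when the shooting
    # interval delta_t = tn - t1 is positive and within the preset range.
    return 0.0 < (tn - t1) <= max_gap
```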
In the foregoing technical solution, further, the method for detecting a relative position change of a common object to be photographed in a first picture and a second picture includes:
establishing a first coordinate system by taking the central point of the first picture as a first coordinate origin, and establishing a second coordinate system by taking the central point of the second picture as a second coordinate origin;
determining a first coordinate set of a shot object in a first coordinate system in the first picture;
determining a second coordinate set of the shot object in a second coordinate system in the second picture;
judging whether the first coordinate set and the second coordinate set have differences or not;
if the first coordinate set and the second coordinate set differ, it is determined that the relative position of the common photographed subject in the first picture and the second picture has changed;
otherwise, judging that the relative position of the common shot object in the first picture and the second picture is not changed.
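A sketch of the coordinate-set comparison, assuming each set is a collection of (x, y) coordinates expressed in the shared center-origin coordinate system:

```python
def relative_position_changed(first_coords, second_coords):
    # Both pictures have the same size and specification, so their
    # center-origin coordinate systems coincide; a differing coordinate set
    # means the common subject occupies a different relative position.
    return set(first_coords) != set(second_coords)
```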
In the foregoing technical solution, further, the method for detecting a relative position change of the common photographed subject in the first picture and the second picture includes:
selecting a first feature point around the photographed subject in the first picture;
selecting a second feature point around the subject in the second picture;
judging whether the first feature point and the second feature point differ;
if they differ, determining that the relative position of the common subject in the first picture and the second picture has changed;
otherwise, determining that the relative position of the common subject in the first picture and the second picture has not changed.
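The feature-point variant could be sketched like this. Pairing the points by index and the tolerance value are assumptions, since the patent does not say how the points are matched.

```python
def feature_points_differ(first_points, second_points, tol=1e-6):
    # Compare corresponding feature points selected around the subject in the
    # two pictures; any displacement beyond tol indicates a position change.
    return any(abs(x1 - x2) > tol or abs(y1 - y2) > tol
               for (x1, y1), (x2, y2) in zip(first_points, second_points))
```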
In the above technical solution, further after the step of generating a video based on the first key frame, the intermediate frame and the second key frame, the method further includes:
analyzing a current video topic of the currently generated video;
selecting a special effect corresponding to the current video theme from a preset special effect database;
based on the selected special effect, carrying out special effect processing on the generated video;
the special effect database comprises at least one special effect, and each special effect corresponds to at least one video theme.
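The claimed database structure, in which each special effect corresponds to at least one video theme, might be modeled as below; the concrete effects and themes are invented examples.

```python
# Hypothetical preset special-effect database: effect -> list of themes.
EFFECT_DB = {
    "slow_motion": ["sports", "action"],
    "sepia": ["vintage", "portrait"],
}

def select_effects(current_theme):
    # Pick every effect whose theme list contains the current video theme.
    return [effect for effect, themes in EFFECT_DB.items()
            if current_theme in themes]
```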
In the foregoing technical solution, the present invention further provides a terminal device, where when generating a video from a picture, the terminal device is configured to perform the following operations:
acquiring a first picture and a second picture;
taking the first picture as a first key frame and the second picture as a second key frame;
detecting the relative position change of a common shot object in the first picture and the second picture;
supplementing an intermediate frame between the first key frame and the second key frame based on the relative position change of the shot object;
generating a video based on the first key frame, the intermediate frame, and the second key frame.
(III) advantageous effects
Compared with the prior art, the method for generating the video by the pictures can supplement an intermediate picture between a first picture and a second picture based on the change of the relative position of a common shooting object of the first picture and the second picture, and generate the video by a first key frame, the intermediate frame and a second key frame corresponding to the first picture, the intermediate picture and the second picture; the intermediate picture comprises corresponding different positions of the shot object at different time points, so that the first picture and the second picture can form a video with continuous motion of the shot object.
Drawings
Fig. 1 is a schematic flowchart of a method for generating a video from pictures according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method before the step of detecting a relative position change of a common object in a first picture and a second picture provided by an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for determining whether a common photographed subject exists in the first picture and the second picture according to an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram of a method for supplementing an intermediate frame between a first key frame and a second key frame provided by an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for determining whether a value of a shooting time interval Δ t of the first picture and the second picture is within a predetermined time interval range according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a method for detecting a relative position change of a common object in a first picture and a second picture according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another method for detecting a relative position change of a common object in a first picture and a second picture according to an embodiment of the present application;
fig. 8 is a flowchart illustrating a method for performing special effects processing on a generated video according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
It is to be understood that the embodiments described in the figures and specification of the present invention are only some embodiments and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, in the description of the present invention, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The terminal device according to some embodiments of the present invention may be an electronic device, which may include one or a combination of several of: a smartphone, a personal computer (PC, such as a tablet, desktop, notebook, or netbook), a PDA, a mobile phone, an e-book reader, a portable multimedia player (PMP), an audio/video player (MP3/MP4), a camera, a virtual reality (VR) device, a wearable device, and the like. According to some embodiments of the invention, the wearable device may include an accessory type (e.g., a watch, ring, bracelet, glasses, or head-mounted device (HMD)), an integrated type (e.g., electronic clothing), a decorative type (e.g., a skin pad, tattoo, or built-in electronic device), etc., or a combination of several. In some embodiments of the present invention, the electronic device may be flexible, is not limited to the above devices, and may be a combination of one or more of the above devices. In the present invention, the term "user" may refer to a person using an electronic device or to a device (e.g., an artificial-intelligence electronic device) that uses an electronic device.
The invention will be described in more detail below with reference to the accompanying drawings. Throughout the figures, like elements or steps are denoted by like reference numerals. For purposes of clarity, the features in the drawings are not necessarily to scale.
Embodiments of the present invention provide a method for generating a video from a picture, and in order to facilitate understanding of the embodiments of the present invention, the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1 (i.e., 100 shown in the drawing), fig. 1 is a flowchart illustrating a method for generating a video from a picture according to an embodiment of the present disclosure.
As shown in fig. 1, an embodiment of the present application provides a method for generating a video by using a picture, including:
step S101: acquiring a first picture and a second picture;
step S102: taking the first picture as a first key frame and the second picture as a second key frame;
step S103: detecting the relative position change of a common shot object in the first picture and the second picture;
step S104: supplementing an intermediate frame between the first key frame and the second key frame based on the relative position change of the shot object;
step S105: generating a video based on the first key frame, the intermediate frame, and the second key frame.
The first picture and the second picture have the same background; that is, they are pictures of the same photographed subject, taken against the same background, in different states.
Referring to fig. 2 (i.e. 200 shown in the drawing), fig. 2 is a flowchart illustrating a method before the step of detecting a relative position change of a common object in a first picture and a second picture according to an embodiment of the present application.
As shown in fig. 2: before the step of detecting the relative position change of the common object in the first picture and the second picture, the method comprises the following steps:
step S201: identifying a first shot content in a first picture and a second shot content in a second picture;
step S202: judging whether a common shot object exists in the first picture and the second picture or not based on the first shot content and the second shot content;
step S203: if yes, executing the step of detecting the relative position change of the common shot object in the first picture and the second picture;
step S204: otherwise, executing the step of acquiring the first picture and the second picture;
wherein the first shot content and the second shot content include a shot object and a background.
Referring to fig. 3 (i.e. 300 in the drawing), fig. 3 is a flowchart illustrating a method for determining whether a common photographed subject exists in the first picture and the second picture according to an embodiment of the present application.
As shown in fig. 3: the method for judging whether a common shot object exists in the first picture and the second picture or not based on the first shot content and the second shot content comprises the following steps:
step S301: similarity calculation is carried out on the first shot content and the second shot content, and a similarity value of the first shot content and the second shot content is obtained;
the similarity value of the first shot content and the second shot content can be calculated using an existing picture-similarity algorithm;
step S302: judging whether the similarity value is within a preset similarity threshold range or not;
step S303: if the similarity value is within a preset similarity threshold range, judging that the first picture and the second picture have a common shot object;
step S304: otherwise, the first picture and the second picture are judged to have no common shot object.
Referring to fig. 4 (i.e., 400 shown in the drawing), fig. 4 is a flowchart illustrating a method for supplementing an intermediate frame between a first key frame and a second key frame according to an embodiment of the present application.
As shown in fig. 4, the method for supplementing the intermediate frame between the first key frame and the second key frame based on the detected relative position change of the common photographed subject in the first picture and the second picture comprises the following steps:
step S401: reading a first shooting time t of the first picture1And a second shooting time t of the second picturen
Based on t1And tnCalculating the shooting time interval delta t of the first picture and the second picture;
where, t isn-t1
Step S402: identifying a first position s1 of the photographed subject in the first picture and a second position sn of the subject in the second picture;
calculating, based on s1 and sn, the distance Δs by which the subject's position changes between the two pictures;
where Δs = sn - s1.
Step S403: calculating the average motion speed v of the common photographed subject between the first picture and the second picture;
where v = Δs/Δt.
Step S404: dividing the time period from t1 to tn into a plurality of target time periods using time points ti, i ∈ [2, n-1];
Step S405: calculating, from the average motion speed v, the position si of the common subject at each time point ti;
Step S406: generating an intermediate picture for each time point ti from its corresponding common-subject position si;
and taking the intermediate picture as an intermediate frame.
Wherein, generating the intermediate picture can be realized by the following steps:
since the first picture and the second picture have the same shooting background, for each generated time point ti a picture template can be created with the same size and specification as the first and second pictures and the same shooting background;
since the position si of the photographed subject at time ti has already been calculated, the position of the subject within the picture template at time ti is known;
filling the subject into the picture template yields the picture at time ti, which is an intermediate picture.
Further, a first coordinate system is established by taking the central point of the first picture as a first coordinate origin, and a second coordinate system is established by taking the central point of the second picture as a second coordinate origin;
the first coordinate system and the second coordinate system are the same because the first picture and the second picture are the same in size and specification;
establishing a coordinate system which is the same as the first picture and the second picture in the picture template;
acquiring the coordinate set of the photographed subject corresponding to time point ti;
filling the coordinate set into the coordinate system of the picture template to obtain the picture at time point ti, which is an intermediate picture.
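The template-filling step can be illustrated with a toy pixel grid. Representing the subject as a mapping from coordinates to pixel values, and shifting it by a whole-pixel offset for time ti, are simplifying assumptions made for this sketch.

```python
def fill_template(background, subject_pixels, offset):
    # The template shares the two pictures' size and shooting background, so
    # start from a copy of that background.
    frame = [row[:] for row in background]
    dx, dy = offset  # interpolated displacement of the subject at time t_i
    for (x, y), value in subject_pixels.items():
        frame[y + dy][x + dx] = value  # paste the subject at its new position
    return frame
```

On a 3x2 blank background, a one-pixel subject at (0, 0) shifted by (1, 1) lands at column 1 of row 1, while the original background is left untouched.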
As an optional implementation, the method for generating a video based on the first key frame, the intermediate frame and the second key frame includes:
and sequencing the first key frame, the intermediate frame, and the second key frame according to a preset rule, and playing them at a certain speed to generate a video.
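Sequencing and playback speed can be sketched by attaching a presentation timestamp to each frame; the chronological ordering rule and the frame rate below are assumptions, since the patent leaves both open.

```python
def frames_to_timeline(first_key, intermediates, second_key, fps=24):
    # Order the frames chronologically and derive each frame's presentation
    # time from the chosen playback speed (frames per second).
    frames = [first_key, *intermediates, second_key]
    return [(index / fps, frame) for index, frame in enumerate(frames)]
```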
Referring to fig. 5 (i.e. 500 in the drawing), fig. 5 is a flowchart illustrating a method for determining whether a value of a shooting time interval Δ t of the first picture and the second picture is within a predetermined time interval range according to an embodiment of the present application.
As shown in fig. 5: before the step of detecting a relative position change of the common object in the first picture and the second picture, the method further comprises:
step S501: acquiring the value of the shooting time interval delta t of the first picture and the second picture;
step S502: judging whether the value of the delta t is within a preset time interval range or not;
step S503: and if the value of the delta t is within a preset time interval range, executing the step of detecting the relative position change of the common shot object in the first picture and the second picture.
Step S504: otherwise, executing the step of acquiring the first picture and the second picture.
Referring to fig. 6 (i.e. 600 in the drawings), fig. 6 is a flowchart illustrating a method for detecting a relative position change of a common object in a first picture and a second picture according to an embodiment of the present application.
As shown in fig. 6: the method for detecting the relative position change of the common shot object in the first picture and the second picture comprises the following steps:
step S601: establishing a first coordinate system by taking the central point of the first picture as a first coordinate origin, and establishing a second coordinate system by taking the central point of the second picture as a second coordinate origin;
the first coordinate system and the second coordinate system are identical because the first picture and the second picture have the same size and specification;
step S602: determining a first coordinate set of the shot object in a first coordinate system in the first picture and a second coordinate set of the shot object in a second coordinate system in the second picture;
step S603: judging whether the first coordinate set and the second coordinate set have differences or not;
step S604: if the first coordinate set and the second coordinate set differ, it is determined that the relative position of the common photographed subject in the first picture and the second picture has changed;
step S605: otherwise, judging that the relative position of the common shot object in the first picture and the second picture is not changed.
Referring to fig. 7 (i.e. 700 shown in the drawing), fig. 7 is a flowchart illustrating another method for detecting a relative position change of a common subject in the first picture and the second picture according to an embodiment of the present application.
As shown in fig. 7: the method for detecting the relative position change of the common shot object in the first picture and the second picture comprises the following steps:
step S701: selecting a first feature point around the photographed subject in the first picture and a second feature point around the subject in the second picture;
step S702: judging whether the first feature point and the second feature point differ;
step S703: if they differ, determining that the relative position of the common subject in the first picture and the second picture has changed;
step S704: otherwise, determining that the relative position of the common subject in the first picture and the second picture has not changed.
Referring to fig. 8 (i.e., 800 shown in the drawing), fig. 8 is a flowchart illustrating a method for performing special effects processing on a generated video according to an embodiment of the present application.
As shown in fig. 8: after the step of generating a video based on the first key frame, the intermediate frames and the second key frame, the method further comprises:
step S801: analyzing a current video topic of the currently generated video;
step S802: selecting a special effect corresponding to the current video theme from a preset special effect database;
step S803: based on the selected special effect, carrying out special effect processing on the generated video;
the special effect database comprises at least one special effect, and each special effect corresponds to at least one video theme.
Special effects can increase the enjoyment and interest of the generated video.
Embodiments of the present application further provide a terminal device, where when generating a video from a picture, the terminal device is configured to perform the following operations:
acquiring a first picture and a second picture;
taking the first picture as a first key frame and the second picture as a second key frame;
detecting the relative position change of a common shot object in the first picture and the second picture;
supplementing an intermediate frame between the first key frame and the second key frame based on the relative position change of the shot object;
generating a video based on the first key frame, the intermediate frame, and the second key frame.
The present application aims to protect a method for generating a video from pictures and a terminal device: by supplementing intermediate frames between two key frames according to the change in the relative position of the common photographed subject, two still pictures can be turned into a video in which the subject moves continuously. This improves the intelligence level of the terminal and the user experience, and can be widely applied to various terminals. It is to be noted that the above-described embodiments are merely examples; the present application is not limited to such examples, and various changes may be made.
It should be noted that, in the present specification, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. Finally, it should be noted that the series of processes described above includes not only processes performed in time series in the order described herein, but also processes performed in parallel or individually, rather than in time series.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
It is to be understood that the above-described embodiments merely illustrate or explain the principles of the present invention and are not to be construed as limiting it. Any modification, equivalent replacement, or improvement made without departing from the spirit and scope of the present invention shall fall within its protection scope. Further, the appended claims are intended to cover all such variations and modifications as fall within their scope and boundaries, or the equivalents thereof.

Claims (10)

1. A method for generating a video from pictures, comprising the steps of:
acquiring a first picture and a second picture;
taking the first picture as a first key frame and the second picture as a second key frame;
detecting the relative position change of a common shot object in the first picture and the second picture;
supplementing an intermediate frame between the first key frame and the second key frame based on the relative position change of the shot object;
generating a video based on the first key frame, the intermediate frame, and the second key frame.
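The five steps of claim 1 can be sketched in Python. This is an illustrative reading of the claim, not the patent's implementation; the `Frame` type and the constant-speed interpolation between key frames (which claim 4 later makes explicit) are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    time: float       # shooting time of the picture
    position: tuple   # (x, y) of the common shot object

def generate_video(first: Frame, second: Frame, n_intermediate: int) -> list:
    """Build the sequence: first key frame, interpolated intermediate
    frames, second key frame."""
    frames = [first]
    for i in range(1, n_intermediate + 1):
        a = i / (n_intermediate + 1)   # fraction of the way between key frames
        frames.append(Frame(
            time=first.time + a * (second.time - first.time),
            position=(first.position[0] + a * (second.position[0] - first.position[0]),
                      first.position[1] + a * (second.position[1] - first.position[1])),
        ))
    frames.append(second)
    return frames
```

Playing this frame list back at a fixed rate corresponds to the video-generation step of claim 5.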
2. The method according to claim 1, wherein before the step of detecting a change in the relative position of the common subject in the first picture and the second picture, the method comprises:
identifying first shot content in the first picture;
identifying second shot content in the second picture;
judging whether a common shot object exists in the first picture and the second picture or not based on the first shot content and the second shot content;
and if so, executing the step of detecting the relative position change of the common shot object in the first picture and the second picture.
3. The method according to claim 2, wherein the method of determining whether or not a common subject exists in the first picture and the second picture based on the first captured content and the second captured content comprises:
performing similarity calculation on the first shot content and the second shot content to obtain a similarity value of the first shot content and the second shot content;
judging whether the similarity value is within a preset similarity threshold range or not;
if the similarity value is within a preset similarity threshold range, judging that the first picture and the second picture have a common shot object;
otherwise, the first picture and the second picture are judged to have no common shot object.
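Claim 3's decision can be illustrated with a cosine similarity between feature vectors of the two shot contents. The feature extraction itself and the threshold bounds are assumptions here, since the claim only speaks of a preset similarity threshold range:

```python
import math

def cosine_similarity(a, b):
    """Similarity value of two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def has_common_subject(feat_a, feat_b, low=0.6, high=1.0):
    """True if the similarity value is within the preset threshold range."""
    return low <= cosine_similarity(feat_a, feat_b) <= high
```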
4. The method according to claim 1, wherein the method for supplementing the intermediate frame between the first key frame and the second key frame based on the detected relative position change of the common photographed object in the first picture and the second picture comprises:
reading a first shooting time t1 of the first picture;
reading a second shooting time tn of the second picture;
calculating the shooting time interval Δt of the first picture and the second picture based on t1 and tn;
identifying a first position s1 of the photographed object in the first picture;
identifying a second position sn of the photographed object in the second picture;
calculating the distance interval Δs of the position change of the photographed object between the first picture and the second picture based on s1 and sn;
calculating the average motion speed v of the common photographed object in the first picture and the second picture;
dividing the time period from t1 to tn into a plurality of target time periods using time points ti, i ∈ [2, n−1];
calculating the position si of the common photographed object at each time point ti based on the average motion speed v;
generating an intermediate picture according to each time point ti and the corresponding position si of the common photographed object;
and taking the intermediate pictures as intermediate frames.
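The constant-speed interpolation of claim 4 can be written out directly: the average speed v = Δs / Δt fixes the subject's position at every intermediate time point ti. A minimal sketch with 2-D positions (an assumption; the claim does not fix the dimensionality):

```python
def intermediate_positions(t1, tn, s1, sn, times):
    """Position s_i of the common shot object at each time point t_i,
    assuming constant average speed v = Δs / Δt along each axis."""
    dt = tn - t1                  # shooting time interval Δt
    vx = (sn[0] - s1[0]) / dt     # x component of the average speed v
    vy = (sn[1] - s1[1]) / dt     # y component of the average speed v
    return [(s1[0] + vx * (ti - t1), s1[1] + vy * (ti - t1)) for ti in times]
```

Each returned position would then be used to render one intermediate picture, i.e. one intermediate frame.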
5. The method of claim 4, wherein the method of generating a video based on the first key frame, the intermediate frame, and the second key frame comprises:
and sequencing the first key frame, the intermediate frame and the second key frame according to a preset rule, and playing at a certain speed to generate a video.
6. The method according to claim 4, wherein before the step of detecting a relative change in position of the common subject in the first picture and the second picture, the method further comprises:
acquiring the value of the shooting time interval Δt of the first picture and the second picture;
judging whether the value of Δt is within a preset time interval range;
and if the value of Δt is within the preset time interval range, executing the step of detecting the relative position change of the common shot object in the first picture and the second picture.
7. The method according to claim 1, wherein the method for detecting the relative position change of the common object in the first picture and the second picture comprises:
establishing a first coordinate system by taking the central point of the first picture as a first coordinate origin, and establishing a second coordinate system by taking the central point of the second picture as a second coordinate origin;
determining a first coordinate set of a shot object in a first coordinate system in the first picture;
determining a second coordinate set of the shot object in a second coordinate system in the second picture;
judging whether the first coordinate set and the second coordinate set have differences or not;
if the first coordinate set and the second coordinate set are different, determining that the relative position of the common shot object in the first picture and the second picture is changed;
otherwise, judging that the relative position of the common shot object in the first picture and the second picture is not changed.
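The comparison of claim 7 can be sketched by mapping pixel coordinates into each picture's centre-origin coordinate system and comparing the resulting sets. The top-left pixel origin is an assumption not stated in the claim:

```python
def to_center_coords(points, width, height):
    """Map pixel coordinates (origin at the top-left corner) into a
    coordinate system whose origin is the picture's centre point."""
    cx, cy = width / 2, height / 2
    return {(x - cx, y - cy) for (x, y) in points}

def relative_position_changed(pts1, size1, pts2, size2):
    """The relative position changed iff the two coordinate sets differ."""
    return to_center_coords(pts1, *size1) != to_center_coords(pts2, *size2)
```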
8. The method according to claim 1, wherein the method for detecting the relative position change of the common object in the first picture and the second picture comprises:
selecting a first feature point around the shot object in the first picture;
selecting a second feature point around the shot object in the second picture;
judging whether the first feature point and the second feature point differ;
if the first feature point and the second feature point differ, determining that the relative position of the common shot object in the first picture and the second picture is changed;
otherwise, judging that the relative position of the common shot object in the first picture and the second picture is not changed.
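The feature-point comparison of claim 8 reduces to a similar set test. Here feature points are plain (x, y) tuples; how they are detected (corner detection, ORB, etc.) is left open by the claim:

```python
def feature_points_differ(pts1, pts2, tol=1e-6):
    """True if the feature points selected around the shot object differ
    between the two pictures, i.e. the relative position changed."""
    if len(pts1) != len(pts2):
        return True
    return any(abs(ax - bx) > tol or abs(ay - by) > tol
               for (ax, ay), (bx, by) in zip(sorted(pts1), sorted(pts2)))
```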
9. The method of claim 1, wherein after the step of generating a video based on the first key frame, the intermediate frame, and the second key frame, the method further comprises:
analyzing a current video topic of the currently generated video;
selecting a special effect corresponding to the current video theme from a preset special effect database;
based on the selected special effect, carrying out special effect processing on the generated video;
wherein the special effect database comprises at least one special effect, and each special effect corresponds to at least one video theme.
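The special-effect database of claim 9 can be modelled as a mapping from each effect to its themes; the effect and theme names below are purely hypothetical:

```python
# Hypothetical special-effect database: each special effect corresponds
# to at least one video theme, as claim 9 requires.
EFFECT_DB = {
    "falling_petals": ["wedding", "spring"],
    "fireworks": ["festival", "new_year"],
    "slow_motion": ["sports"],
}

def select_effects(theme: str) -> list:
    """Select every special effect whose theme list contains the
    analysed video theme."""
    return [name for name, themes in EFFECT_DB.items() if theme in themes]
```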
10. A terminal device for picture generation video, wherein when generating a picture into video, the terminal device is configured to:
acquiring a first picture and a second picture;
taking the first picture as a first key frame and the second picture as a second key frame;
detecting the relative position change of a common shot object in the first picture and the second picture;
supplementing an intermediate frame between the first key frame and the second key frame based on the relative position change of the shot object;
generating a video based on the first key frame, the intermediate frame, and the second key frame.
CN201711488098.XA 2017-12-29 2017-12-29 A kind of method and terminal device of picture generation video Pending CN108184060A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711488098.XA CN108184060A (en) 2017-12-29 2017-12-29 A kind of method and terminal device of picture generation video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711488098.XA CN108184060A (en) 2017-12-29 2017-12-29 A kind of method and terminal device of picture generation video

Publications (1)

Publication Number Publication Date
CN108184060A true CN108184060A (en) 2018-06-19

Family

ID=62549531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711488098.XA Pending CN108184060A (en) 2017-12-29 2017-12-29 A kind of method and terminal device of picture generation video

Country Status (1)

Country Link
CN (1) CN108184060A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1797473A (en) * 2004-12-24 2006-07-05 上海景海软件科技有限公司 Method for editing computer animation
CN105027110A (en) * 2013-02-04 2015-11-04 谷歌公司 Systems and methods of creating an animated content item
WO2017013697A1 (en) * 2015-07-17 2017-01-26 三菱電機株式会社 Animation display device and animation display method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151557A (en) * 2018-08-10 2019-01-04 Oppo广东移动通信有限公司 Video creation method and relevant apparatus
CN109151557B (en) * 2018-08-10 2021-02-19 Oppo广东移动通信有限公司 Video creation method and related device

Similar Documents

Publication Publication Date Title
US11189037B2 (en) Repositioning method and apparatus in camera pose tracking process, device, and storage medium
CN112740709B (en) Computer-implemented method, computing device, and computer-readable medium for performing gating for video analytics
CN108780389B (en) Image retrieval for computing devices
CN106664376B (en) Augmented reality device and method
EP3383036A2 (en) Information processing device, information processing method, and program
CN111985268B (en) Method and device for driving animation by face
KR102718174B1 (en) Display images that optionally depict motion
US10198846B2 (en) Digital Image Animation
EP4246963A1 (en) Providing shared augmented reality environments within video calls
WO2018000609A1 (en) Method for sharing 3d image in virtual reality system, and electronic device
US11151791B2 (en) R-snap for production of augmented realities
CN110163066B (en) Multimedia data recommendation method, device and storage medium
CN111640197A (en) Augmented reality AR special effect control method, device and equipment
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112243583A (en) Multi-endpoint mixed reality conference
CN114531553B (en) Method, device, electronic equipment and storage medium for generating special effect video
CN112348968B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN109584358A (en) A kind of three-dimensional facial reconstruction method and device, equipment and storage medium
KR102127351B1 (en) User terminal device and the control method thereof
US10642881B2 (en) System architecture for universal emotive autography
US9503632B2 (en) Guidance based image photographing device and method thereof for high definition imaging
CN112868224A (en) Techniques to capture and edit dynamic depth images
CN115002359B (en) Video processing method, device, electronic equipment and storage medium
EP3989591A1 (en) Resource display method, device, apparatus, and storage medium
CN112581358A (en) Training method of image processing model, image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180619