CN112019769A - Dynamic picture creating method and system - Google Patents

Dynamic picture creating method and system

Info

Publication number
CN112019769A
Authority
CN
China
Prior art keywords
picture
pictures
video
background
moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010746126.9A
Other languages
Chinese (zh)
Inventor
张博伦
林晋
许乾坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Tuele Information Technology Service Co ltd
Original Assignee
Shanghai Tuele Information Technology Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Tuele Information Technology Service Co ltd
Priority to CN202010746126.9A
Publication of CN112019769A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The invention relates to a dynamic picture creating method and system. The method comprises the following steps: separating an image from an original picture to obtain a first picture and a second picture; based on the first picture, obtaining a first group of multiple pictures related to the first picture; moving the first group of pictures and performing video capture to obtain a first video; and merging the first video and the second picture. The system comprises an image separation module, a picture acquisition module, a picture moving module, a video capture module and a composition module. The invention generates a video with a richer effect from a single monotonous picture, offers high processing efficiency and a small storage footprint, and meets a variety of needs for using pictures.

Description

Dynamic picture creating method and system
Technical Field
The invention relates to the technical field of image editing, in particular to a dynamic picture creating method and system.
Background
As a basic carrier of visual content, still pictures are widely used. With the development of technology and the continuous improvement in people's quality of life, static pictures can no longer satisfy people's expectations for visual effects. Compared with a static picture, a video with a rich dynamic visual effect attracts more attention. For example, people often insert dynamic special effects into the slides used during a presentation; these effects hold the viewers' attention better and thus serve the purpose of the presentation.
In order to obtain a richer visual effect, dynamic pictures resembling a video effect have appeared in the prior art. However, existing dynamic pictures are either simple combinations of multiple still pictures or short videos. There is currently no technical scheme for generating a dynamic effect from a static picture.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the present invention provides a dynamic picture creating method and system that obtain rich dynamic effects from a static picture.
In order to solve the above technical problem, the present invention provides a dynamic picture creating method, which includes the following steps:
separating an image from an original picture to obtain a first picture and a second picture;
based on the first picture, obtaining a first group of multiple pictures related to the first picture;
moving the first group of the plurality of pictures and performing video acquisition to obtain a first video; and
the first video is merged with the second picture.
According to another aspect, the present invention further provides a dynamic picture creating system, which includes an image separation module, a picture acquisition module, a picture moving module, a video capture module and a composition module. The image separation module is configured to separate an image from an original picture to obtain a first picture and a second picture; the picture acquisition module is configured to acquire, based on the first picture, a first group of multiple pictures associated with the first picture; the picture moving module is configured to move the first group of multiple pictures in a first video capture area; the video capture module is configured to set the first video capture area and perform video capture to obtain a first video while the first group of multiple pictures moves in the first video capture area; and the composition module is configured to merge the first video and the second picture.
The invention uses a monotonous static picture to generate a video with a richer effect, offers high processing efficiency and a small storage footprint, and meets a variety of needs for using pictures.
Drawings
Preferred embodiments of the present invention will now be described in further detail with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of a method for creating a dynamic picture according to one embodiment of the invention;
FIG. 2 is a flow diagram of an image separation process according to one embodiment of the invention;
FIG. 3 is an original picture according to one embodiment of the invention;
FIG. 4 is a background picture after being filled in accordance with one embodiment of the present invention;
FIG. 5 is a flow diagram of obtaining background video according to one embodiment of the invention;
FIG. 6 is a schematic diagram of the positions of a first background picture and a first video capture area according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the positions of a second background picture and a first video capture area according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the positions of two background pictures according to one embodiment of the invention;
FIG. 9A is an original picture according to another embodiment of the present invention;
FIGS. 9B-9D are screenshots of a video generated from the picture shown in FIG. 9A;
FIG. 10 is a functional block diagram of a dynamic picture creation system according to one embodiment of the present invention;
FIG. 11 is a functional block diagram of the image separation module according to one embodiment of the present invention;
FIG. 12 is a schematic diagram of different subjects in the same picture according to an embodiment of the invention;
FIG. 13 is a functional block diagram of a picture movement module according to one embodiment of the present invention; and
FIG. 14 is a functional block diagram of a composition module according to one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof and in which is shown by way of illustration specific embodiments of the application. In the drawings, like numerals describe substantially similar components throughout the different views. Various specific embodiments of the present application are described in sufficient detail below to enable those skilled in the art to practice the teachings of the present application. It is to be understood that other embodiments may be utilized and structural, logical or electrical changes may be made to the embodiments of the present application.
The invention processes one or more original pictures to obtain a plurality of subject pictures; one or more of these pictures are moved, simultaneously or independently, to generate a video, while the remaining picture or pictures serve as static images in the video. To distinguish them, a picture used to generate the video is referred to as a first picture, and a picture used as a static image in the video is referred to as a second picture. There may be more than one first picture and more than one second picture. The present invention will be described in detail below with reference to specific embodiments.
Fig. 1 is a flowchart of a method for creating a dynamic picture according to an embodiment of the present invention. In the present invention, the aim is to animate the background of a picture, while the foreground picture is taken as a static image in the video. The method comprises the following steps:
in step S1, the background is separated from the original picture to obtain a background picture and a foreground picture. In this step, in order to animate the background of the picture, the background and the foreground need to be separated from the original picture. An example of a separation process is shown in FIG. 2:
In step S11, the original picture is recognized to distinguish the background image from the foreground image, also referred to as the non-background image.
In step S12, the non-background image is taken from the original image to generate the foreground picture.
In step S13, it is determined whether the recognized background contains only one subject. A subject groups background content of the same type: for example, a blue sky with white clouds is one type of content, so the sky is one subject, while grass differs from the sky content, so the grass is another subject. The subjects of the background can be identified by comparing pixel values. If the background has only one subject, step S14 is performed. If the background includes a plurality of subjects, step S15 is performed.
Step S14, the background is taken out from the original picture to generate a background picture, and then step S16 is executed.
In step S15, the background with different subjects is taken from the original picture to generate a plurality of background pictures with different subjects, that is, a plurality of pictures capable of generating video are obtained.
In step S16, the non-background portion of the background picture is filled with background image content, so that the entire background picture consists of background image. As shown in fig. 3 and fig. 4, fig. 3 is an original picture and fig. 4 is the filled background picture. After the foreground and background are identified, all background image content is extracted to obtain one picture, and part of the original background image is used to fill the non-background portion, yielding a complete background picture as shown in fig. 4.
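The patent does not prescribe a particular separation or hole-filling algorithm. A minimal sketch of steps S12 and S16 in Python with OpenCV is shown below, assuming a foreground mask has already been produced by whatever recognition step is used in S11; the function names are illustrative only.

```python
import cv2


def make_filled_background(original_bgr, foreground_mask):
    """Remove the non-background region and fill the hole from the surrounding
    background so the whole picture reads as background (cf. step S16).

    original_bgr    -- H x W x 3 uint8 array, the original picture
    foreground_mask -- H x W uint8 array, 255 where the foreground is, 0 elsewhere
    """
    # Inpainting synthesizes the masked region from neighbouring background
    # pixels; the patent only requires that the non-background portion be
    # filled with background image, so any hole-filling method could be used.
    return cv2.inpaint(original_bgr, foreground_mask, 5, cv2.INPAINT_TELEA)


def extract_foreground(original_bgr, foreground_mask):
    """Return the foreground picture as a BGRA image, transparent outside the subject."""
    bgra = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = foreground_mask
    return bgra
```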
In step S2, a first plurality of pictures associated with a background picture is obtained based on the background picture. For example, in this embodiment, one background picture is copied to obtain two identical background pictures.
In step S3, two pictures of the first group are sequentially and continuously moved in the first video capturing area to obtain the background video. One embodiment of generating a background video is shown in FIG. 5:
step S31, arranging the first plurality of background pictures according to a preset position rule, wherein one background picture fills the first plurality of background picturesA video acquisition area. For example, when there are two background pictures in total, one of them is set in the video capture area. In this embodiment, the size of the video capture area is the same as the size of the background picture. As shown in fig. 6, a diagram of the position relationship between the first background picture 201 and the first video capturing area 301 is shown. The second background picture 202 is separated from the first video capturing area 301 by a distance S, as shown in fig. 7, which is a diagram of the position relationship between the second background picture 202 and the video capturing area 301. Thus, it can be seen that, in fact, the first background picture 201 and the second background picture 202 are partially overlapped with each other, as shown in fig. 8, the length of the overlapped part is S0
In step S32, a transparency property is set for the background pictures and stretching is performed. In the embodiment shown in fig. 8, in order to make the entire background visible at the beginning of the video, the transparency of the first background picture 201, which fills the first video capture area, is set to 0%, i.e. completely opaque, and the transparency of the second background picture 202, which overlaps the first background picture, is set to 100%, i.e. completely transparent. Thus the background is clearly visible at the beginning of the video. If a picture does not fill the first video capture area, it is stretched so that it fills the first video capture area 301.
In step S33, the background pictures are moved from their initial positions in the same direction and video is captured. The background pictures may be moved simultaneously or at different times, and their moving speeds may be the same or different, depending on the content of the background. In the embodiment shown in fig. 8, the content of the two background pictures is white clouds in the sky; since the wind makes the clouds drift in the same direction, the two background pictures with blue sky and white clouds should move in the same direction, either to the left or to the right relative to the current picture position. In this embodiment, the moving speeds of the two background pictures may be the same or approximately the same. As a simplified embodiment, the two pictures are superimposed and merged as layers, and the merged picture is moved as a whole.
In step S34, the transparency of the background pictures is adjusted, and the edge of a background picture that leaves the video capture area is stretched so that it always remains aligned with the edge of the video capture area. As shown in fig. 6, when the first background picture 201 starts to move, its left edge leaves the left border of the video capture area and its right edge moves out of the capture area. To avoid a sharp boundary in the generated video, the left edge of the first background picture 201 is stretched until it reaches the left border of the video capture area 301. In addition, the transparency of the pictures changes as they move: the first background picture 201 is completely opaque at its initial position, its transparency increases uniformly during the movement, and when it has moved by the distance S it becomes completely transparent, i.e. its transparency is 100%. The transparency of the second background picture 202 is adjusted at the same time as that of the first background picture 201: it is completely transparent at its initial position, its transparency decreases uniformly during the movement, and when it has moved by the distance S its entire image occupies the video capture area and its transparency is 0%, i.e. completely opaque. In this way, for backgrounds such as a sky with clouds, the moving clouds also appear to change and dissipate, making the dynamic effect more vivid.
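The uniform transparency change of step S34 amounts to a linear cross-fade driven by the distance moved: the opacity of picture 201 is 1 - d/S and that of picture 202 is d/S. A hedged sketch follows; cropping the visible portions of each copy is assumed to happen elsewhere, and the resize stands in for the edge stretching described above.

```python
import cv2


def crossfade_frame(pic1, pic2, d, s, area_w, area_h):
    """Render the capture-area frame after the pictures have moved by d (cf. step S34).

    pic1, pic2 -- portions of the two background copies currently visible in the
                  capture area (uint8, 3-channel)
    d, s       -- distance moved so far and the total move distance S
    """
    a2 = d / float(s)        # picture 2: transparency 100% -> 0%
    a1 = 1.0 - a2            # picture 1: transparency 0% -> 100%
    f1 = cv2.resize(pic1, (area_w, area_h))   # stretch to fill the capture area
    f2 = cv2.resize(pic2, (area_w, area_h))
    return cv2.addWeighted(f1, a1, f2, a2, 0.0)
```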
In step S35, it is determined whether the moving distance of the background pictures has reached S. Referring to figs. 6-8, when the two background pictures move simultaneously at the same speed and in the same direction, once the first background picture 201 has moved to the right by the distance S, the left boundary of the second background picture 202 exactly coincides with the left border of the video capture area 301; if the movement continued beyond this point, a blank space would appear in the video capture area. Accordingly, when the moving distance of the background pictures reaches S, step S36 is executed. If not, the process returns to step S35 to continue the determination.
In step S36, it is determined whether the preset video duration has been reached. If so, the video capture ends; if not, the process returns to step S33 and the pictures are moved again from the initial position.
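Steps S33-S36 together form a loop: move, capture, restart when the distance S is reached, and stop at the preset duration. A sketch of that loop, assuming a frame-rendering callback such as the cross-fade sketch above and OpenCV's VideoWriter for output; the parameter names are illustrative.

```python
import cv2


def capture_background_video(render_frame, s, speed_px_s, fps, duration_s,
                             out_path, area_w, area_h):
    """Drive the move / distance-check / duration-check cycle of steps S33-S36.

    render_frame(d) -- callback returning the capture-area frame (area_h x area_w x 3,
                       uint8) for a cumulative move distance d.
    """
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (area_w, area_h))
    step = speed_px_s / float(fps)             # pixels moved per frame
    d = 0.0
    for _ in range(int(duration_s * fps)):     # step S36: stop at the preset duration
        writer.write(render_frame(d))
        d += step
        if d >= s:                             # step S35: full distance S reached
            d = 0.0                            # restart from the initial position (S33)
    writer.release()
```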
In step S4, the foreground picture is composited onto the background video frames. A video with a static foreground image and a dynamic background is thus obtained.
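Step S4 is ordinary alpha compositing of the foreground picture over every frame of the background video. A minimal sketch, assuming the foreground carries an alpha channel produced during separation and has the same size as the frame:

```python
import numpy as np


def composite_foreground(frame_bgr, foreground_bgra):
    """Paste the static foreground picture over one background frame (cf. step S4).

    frame_bgr       -- H x W x 3 uint8 background-video frame
    foreground_bgra -- H x W x 4 uint8 foreground with alpha (0 outside the subject),
                       assumed to match the frame size
    """
    alpha = foreground_bgra[:, :, 3:4].astype(np.float32) / 255.0
    fg = foreground_bgra[:, :, :3].astype(np.float32)
    bg = frame_bgr.astype(np.float32)
    out = fg * alpha + bg * (1.0 - alpha)   # standard alpha compositing
    return out.astype(np.uint8)
```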
For a picture with a plurality of different subjects, as shown in fig. 12, two background pictures, sky and lake water, can be obtained from the sky region 91 and the lake water region 92, respectively, and a foreground picture is generated from the mountain forest region 93. In this case there are two background pictures from which background videos can be generated, which may be referred to as the first picture and the third picture, while the static mountain forest foreground picture is the second picture. The flow shown in fig. 5 is applied to each of them to obtain two background videos, a sky background video and a lake water background video. When the two background videos are generated, the moving speeds of the background pictures may differ; for example, when the sky background picture moves faster than the lake water background picture, the effect of fast-moving clouds in the sky and slowly flowing lake water is obtained. In this embodiment, the background pictures generating the background videos may be moved left or right, thereby representing the influence of the wind direction on the clouds and the water current. The sky background video, the lake water background video and the mountain forest foreground picture are then combined to obtain the overall video. Figs. 9A-9D show the frames at the start of the video and at the 1st, 2nd and 3rd seconds, respectively.
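For the multi-subject picture of fig. 12 the two background videos differ only in their motion parameters. The sketch below shows how one combined frame could be assembled, reusing the composite_foreground sketch above; the layer callables, the speeds and the top/bottom stacking are assumptions for illustration, not the patent's prescribed method.

```python
import numpy as np


def composite_scene_frame(t, sky_layer, lake_layer, foreground_bgra):
    """One frame at time t (seconds) for the sky / lake water / mountain forest example.

    sky_layer, lake_layer -- hypothetical callables mapping a cumulative move
                             distance to that region's rendered capture-area frame
    """
    sky_frame = sky_layer(t * 12.0)    # e.g. clouds drift at 12 px/s (illustrative)
    lake_frame = lake_layer(t * 4.0)   # e.g. water flows at 4 px/s (illustrative)
    background = np.vstack([sky_frame, lake_frame])   # assumes equal frame widths
    return composite_foreground(background, foreground_bgra)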
Based on this embodiment, those skilled in the art will appreciate that, depending on the content of the background, the moving direction of the picture may also be upward or downward, the background picture may be enlarged or reduced, or the picture may be moved cyclically to the left and then to the right; combined with an appropriate moving speed, a video matching the background content can be obtained. For example, when the content of the background picture is a waterfall flowing from top to bottom, the moving direction may be from top to bottom; when the content is fireworks, the moving direction may be from bottom to top. When the content is a starry sky, the background pictures may be enlarged or reduced one by one. When the content is a forest, a rice field with large, clearly visible rice plants, or small flowers, the background picture is moved back and forth in a left-right cycle while the partial image is stretched, so that the tree tops, rice tips and flowers sway from side to side according to the moving speed. To achieve a more realistic effect, the moving speed and the moving distance may also be varied during the movement.
In another embodiment, there may be multiple original pictures, so that the first group of pictures used to generate the video is a set of temporally related pictures. For example, for a burst of photographs of children running on a grassland, image separation yields a children picture and a grassland picture from the first original picture, and likewise from each of the other original pictures. In this case, the children picture obtained from the first original picture is taken as the first picture, the grassland picture is taken as the second picture, and the children pictures obtained from the other original pictures are combined with the first picture to form the first group of multiple pictures. Using the video generation process of the foregoing embodiment, a video is obtained in which the grassland picture is a still image and the children are in motion.
Similarly, the first group of pictures used to generate the video may be a group of spatially related pictures. For example, for a group of pictures of a car driving on a highway, the separated car is used as the second picture, and the pictures of the highway and its background (everything except the car) extracted from each picture are combined into the first group of pictures; using the video generation process of the foregoing embodiment, a video is obtained in which the car is static and the background is dynamic.
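Both of these variants assemble the first group by separating each related original in turn. A short sketch under that assumption; the `separate` helper is hypothetical and stands in for whatever separation step (S11-S16) is used.

```python
import cv2


def build_first_group(original_paths, separate):
    """Assemble the first group from a set of related originals, e.g. a burst
    of photographs of children running on grass.

    separate -- hypothetical helper returning (moving_subject_picture,
                static_background_picture) for one original picture
    """
    first_group, second_picture = [], None
    for i, path in enumerate(original_paths):
        subject, background = separate(cv2.imread(path))
        first_group.append(subject)         # the subjects drive the video
        if i == 0:
            second_picture = background     # the first background is the still image
    return first_group, second_picture
```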
Fig. 10 is a schematic block diagram of a dynamic picture creation system according to an embodiment of the present invention. The system includes an image separation module 1, a picture acquisition module 2, a picture moving module 3, a video capture module 4, and a composition module 5. The image separation module 1 is configured to separate an image from an original picture to obtain a first picture and a second picture; the first picture may be a background picture and the second picture a non-background picture. Further, as shown in fig. 11, the image separation module 1 includes a recognition unit 11, an extraction unit 12, and a filling unit 13. The recognition unit 11 is configured to recognize the original picture; for example, images with different subject contents, such as a background image and a foreground image, can be recognized in the original picture using a target recognition algorithm, a pixel value algorithm, or the like. Furthermore, different backgrounds and different foregrounds can be distinguished: different contents in the background can be identified by target recognition and grouped into different subjects, i.e. different backgrounds. In the picture shown in fig. 12, two different backgrounds, sky and lake water, can be distinguished. The extraction unit 12 extracts the background image from the original picture to generate a background picture, and extracts the non-background image to generate a foreground picture, so that the background picture and the foreground picture can be processed separately. As shown in fig. 12, regularly shaped images are extracted from the original picture according to their maximum extent: region 91 is the sky background, region 92 is the lake water background, and region 93 is the mountain forest image, and these regions are extracted separately. The resulting background picture contains all the background image content of the original picture but also a non-background portion, while the non-background image contains part of the background; the filling unit 13 therefore fills the original non-background portion of the background picture with background image content using a suitable algorithm, so that the entire background picture consists of background image.
The picture acquisition module 2 may obtain the first group of multiple pictures in different ways. For example, when there is only one original picture, as in figs. 3 and 9A, the first group can be obtained by copying the background picture used to generate the video. If there is a group of multiple original pictures, multiple temporally or spatially related pictures can be obtained through the image separation module 1, and this group of pictures is taken as the first group.
The picture moving module 3 is configured to move the pictures of the first group in the video capture area. In a specific embodiment, as shown in fig. 13, the picture moving module 3 comprises a position arrangement unit 31 and one or more moving units 32. The position arrangement unit 31 arranges the pictures of the first group according to a preset position rule. Taking the embodiment of figs. 6-8 as an example again, the size of the first video capture area 301 is the same as the size of the background picture. As shown in fig. 6, the first background picture 201 coincides with the video capture area 301. As shown in fig. 7, the second background picture 202 is offset from the video capture area 301 by S; the first background picture 201 and the second background picture 202 partially overlap each other, and as shown in fig. 8 the length of the overlapping portion is S0. In this arrangement the right border of the second background picture 202 lies inside the video capture area, and the overlap is large. A video captured by simply moving the background pictures with the moving units 32 in this state would show obvious boundaries, so in order to achieve a more realistic effect the system of the present invention further includes an image processing module 6. After the pictures have been arranged, the image processing module 6 stretches the edge of any background picture that does not fill the video capture area to the edge of the video capture area 301. As shown in fig. 7, the right edge of the second background picture 202 is stretched to the right edge of the first video capture area 301, so that the second background picture 202 also fills the first video capture area 301. In addition, since the two background pictures overlap, the image processing module 6 adjusts the transparency of each background picture. In one embodiment, at the initial moving position of a group of background pictures, the image processing module 6 sets the transparency of the background picture originally filling the video capture area, such as the first background picture 201, to 0%, and the transparency of the remaining background pictures, such as the second background picture 202, to 100%.
In this embodiment, the number of moving units 32 corresponds to the number of background pictures; in the example of figs. 6-8, two moving units 32 are required to move the first background picture 201 and the second background picture 202, respectively. When the first background picture 201 is moved to the right, its left boundary moves away from the left boundary of the video capture area 301; in order to prevent a distinct boundary from appearing in the resulting video, the image processing module 6 stretches the left boundary of the first background picture 201 so that it always remains aligned with the left boundary of the video capture area 301. At the same time, the transparency of the background pictures is adjusted at a constant rate: the first background picture 201, whose transparency is 0% at the initial position, reaches 100% when it has moved by the preset distance S, and the second background picture 202, whose transparency is 100% at the initial position, reaches 0% when it has moved by the preset distance S.
In another embodiment, before moving the pictures, the moving unit 32 automatically determines one or more of the moving direction, the moving speed and the moving distance of a group of background pictures according to their content. As shown in fig. 9A, when the background is the sky, the arrangement of figs. 6-8 can be used, for example with two pictures, a moving distance S of 1/5 to 3/5 of the picture length, and a speed of about 10 pixels per second. If the content of the background is a waterfall, the pictures are moved from top to bottom; for details, refer to the description of the method above.
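The choice of parameters from the recognised content can be expressed as a simple lookup. In the sketch below the sky values mirror the examples in the description (S between 1/5 and 3/5 of the picture length, about 10 px per second); the other entries are purely illustrative assumptions.

```python
def motion_parameters(subject, picture_length):
    """Pick movement settings from the recognised background subject."""
    if subject == "sky":
        return {"direction": "right", "speed_px_s": 10,
                "distance": picture_length * 2 // 5}
    if subject == "waterfall":
        return {"direction": "down", "speed_px_s": 20,
                "distance": picture_length // 3}
    if subject == "fireworks":
        return {"direction": "up", "speed_px_s": 15,
                "distance": picture_length // 3}
    # e.g. forest, rice field or flowers: a gentle left-right sway
    return {"direction": "sway", "speed_px_s": 5,
            "distance": picture_length // 10}
```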
In another embodiment there is a single moving unit 32: the background pictures are arranged and combined by the position arrangement unit 31, each background picture serving as a layer of the combined picture, and the moving unit 32 then moves the combined picture as a whole.
The video capture module 4 is configured to set a video capture area and to capture video while the background pictures move in the first video capture area 301, thereby obtaining a background video. The size of the first video capture area 301 is smaller than or equal to that of the background picture, and the capture frame rate can be set according to actual requirements. When a plurality of background pictures have been separated, after one background video has been obtained from one background picture, the first video capture area 301 can be adjusted to obtain another background video from another background picture. Alternatively, a plurality of video capture areas can be set according to the sizes of the background pictures, and several background videos can be obtained simultaneously as parallel tasks. When a plurality of video capture areas are used, they can be given the same frame rate to facilitate video stitching.
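A capture area is simply a fixed window cropped out of the composed (moving) picture canvas on every frame. A sketch under the assumption that the canvas is a NumPy array in row-major (y, x) order; several such areas can be cropped in parallel, each feeding its own writer at the same frame rate so that the resulting background videos are easy to stitch.

```python
def capture_area_frame(canvas, area_xywh):
    """Crop one video capture area out of the composed picture canvas."""
    x, y, w, h = area_xywh
    return canvas[y:y + h, x:x + w]
```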
The composition module 5 is configured to composite the non-background picture (the second picture) onto the background video frames. In one embodiment, as shown in fig. 14, the composition module 5 includes a video stitching unit 51 and a composition unit 52, wherein the video stitching unit 51 is configured to stitch one or more videos into an overall video, and the composition unit 52 is configured to composite the non-background picture onto the frames of the overall video.
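A sketch of how the video stitching unit 51 and composition unit 52 could work together, reusing the composite_foreground sketch above; the file paths, the vertical stacking, equal frame widths and a foreground sized to the stitched frame are assumptions, not requirements of the patent.

```python
import cv2
import numpy as np


def stitch_and_compose(sky_path, lake_path, foreground_bgra, out_path):
    """Stitch two background videos captured at the same frame rate (sky above
    lake water) and paste the static second picture on every stitched frame."""
    cap1, cap2 = cv2.VideoCapture(sky_path), cv2.VideoCapture(lake_path)
    fps = cap1.get(cv2.CAP_PROP_FPS)
    writer = None
    while True:
        ok1, f1 = cap1.read()
        ok2, f2 = cap2.read()
        if not (ok1 and ok2):
            break
        whole = np.vstack([f1, f2])                           # video stitching unit 51
        frame = composite_foreground(whole, foreground_bgra)  # composition unit 52
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                     fps, (w, h))
        writer.write(frame)
    cap1.release()
    cap2.release()
    if writer is not None:
        writer.release()
```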
The invention thus obtains a short video with a dynamic effect from a monotonous static picture, offers high processing efficiency and a small storage footprint, and meets a variety of needs for using pictures.
The above embodiments are provided only for illustrating the present invention and not for limiting the present invention, and those skilled in the art can make various changes and modifications without departing from the scope of the present invention, and therefore, all equivalent technical solutions should fall within the scope of the present invention.

Claims (34)

1. A dynamic picture creating method comprises the following steps:
separating an image from an original picture to obtain a first picture and a second picture;
based on the first picture, obtaining a first group of multiple pictures related to the first picture;
moving the first group of the plurality of pictures and performing video acquisition to obtain a first video; and
the first video is merged with the second picture.
2. A method as claimed in claim 1, wherein the plurality of pictures of the first group are temporally correlated with the first picture.
3. The method of claim 1, wherein the first plurality of pictures is spatially correlated with the first picture.
4. A method as claimed in claim 1, wherein the plurality of pictures of the first group are obtained via copying the first picture.
5. The method of claim 1, wherein the first picture is a background picture.
6. The method of claim 5, wherein the non-background portion of the first picture is filled with the background image.
7. Method according to claim 1, wherein the first video is generated at a first video acquisition area, wherein the size of the first plurality of pictures is larger than or equal to the size of the first video acquisition area.
8. The method of claim 7, further comprising:
arranging the first group of multiple pictures according to a preset position rule; and
the edges of the first group of one or more pictures are stretched to fill the first video acquisition area.
9. The method according to claim 8, wherein when arranging the first plurality of pictures, at least two adjacent pictures are partially overlapping; and the plurality of pictures have a transparency property.
10. The method according to claim 9, wherein in the moved initial position of the first plurality of pictures, the transparency of the current picture filling the first video acquisition area is 0% and the transparency of the remaining pictures is 100%.
11. The method of claim 10, wherein during the moving of the first plurality of pictures, the transparency of the first plurality of pictures is adjusted at a uniform rate, the picture with the transparency of 0% at the initial position is adjusted to 100% when moved by a preset distance, and the picture with the transparency of 100% at the initial position is adjusted to 0% when moved by the preset distance.
12. Method according to claim 7, wherein during the moving process the edges of the plurality of pictures leaving the first video acquisition area are stretched such that their edges remain aligned with the first video acquisition area edges during the picture moving process.
13. The method of claim 1, further comprising: determining one or more of a number, a direction of movement, a speed of movement, a distance of movement of the first plurality of pictures based on a subject matter of the first plurality of pictures.
14. The method of claim 1, wherein the moving direction of the first plurality of pictures during the moving is: one or more of moving a picture to the left, right, up, or down, rocking a picture left or right or up and down, zooming in or out on a picture, and so forth.
15. The method of claim 1, further comprising:
separating an image from the original picture to obtain a third picture;
based on the third picture, obtaining a second group of multiple pictures related to the third picture;
moving the plurality of pictures of the second group and carrying out video acquisition to obtain a second video; and
merging the second video, the first video and the second picture;
wherein, the theme of the first picture is different from that of the third picture.
16. Method according to claim 15, wherein the second video is generated in a second video capture area, wherein the size of the third picture is larger than or equal to the size of the second video capture area.
17. The method of claim 16, wherein the frame rate of acquisition of the first and second video acquisition zones is the same.
18. A dynamic picture creation system, comprising:
an image separation module configured to separate an image from an original picture to obtain a first picture and a second picture;
a picture acquisition module configured to acquire, based on a first picture, a first set of multiple pictures associated with the first picture;
a picture moving module configured to move the first group of multiple pictures in a first video capture area;
a video capture module configured to set a first video capture zone, perform video capture to obtain a first video when a first group of multiple pictures moves in the first video capture zone; and
a composition module configured to merge the first video and the second picture.
19. The system of claim 18, wherein the image separation module comprises:
an identification unit configured to identify subject matter of an original picture; and
the extraction unit is configured to extract a subject image from the original picture according to the subject content so as to generate at least a first picture and a second picture.
20. The system of claim 18 or 19, wherein the picture taking module is configured to copy the first picture to obtain a first set of multiple pictures.
21. The system of claim 20, wherein the first picture is a background picture.
22. The system of claim 21, wherein the image separation module further comprises a filling unit configured to fill non-background portions in the first picture with background images.
23. The system of claim 18, wherein the picture taking module is configured to extract a plurality of pictures related to a subject in a first picture from other original pictures having temporal or spatial association with the original pictures to obtain a first group of the plurality of pictures.
24. The system of claim 18, wherein the picture movement module comprises:
a position arrangement unit configured to arrange the plurality of pictures of the first group in a first video acquisition area according to a preset position rule; and
a moving unit configured to move the arranged first plurality of pictures.
25. The system of claim 24, wherein the position arrangement unit is further configured to partially overlap two adjacent pictures.
26. The system of claim 24, further comprising: an image processing module configured to stretch edges of one or more pictures that do not fill the first video capture area to fill the first video capture area; and when the adjacent pictures are partially overlapped with each other, adjusting the transparency of the pictures.
27. The system of claim 26, wherein the image processing module is further configured to set the transparency of the picture filling the first video acquisition area to 0% and the transparency of the remaining pictures to 100% in the moving initial position of the picture; and in the moving process of the first group of pictures, uniformly adjusting the transparency of the first group of pictures, adjusting the pictures with the initial position transparency of 0% to be 100% when moving for a preset distance, and adjusting the pictures with the initial position transparency of 100% to be 0% when moving for the preset distance.
28. The system of claim 26, wherein the image processing module is further configured to stretch the picture during the movement of the first plurality of pictures to align an edge of the picture with an edge of the first video capture area during the movement as the edge of the picture leaves the first video capture area.
29. The system of claim 24, wherein the mobile unit is configured to determine one or more of a direction of movement, a speed of movement, a distance of movement of the first plurality of pictures according to a subject matter of the first picture.
30. The system of claim 29, wherein the moving direction of the moving unit during the moving for the first plurality of pictures is: one or more of moving a picture to the left, right, up, or down, rocking a picture left or right or up and down, zooming in or out on a picture, and so forth.
31. The system of claim 19, wherein the extraction unit separates a third subject image from the original picture to obtain a third picture; correspondingly, the picture acquiring module is used for acquiring a second group of multiple pictures related to a third picture based on the third picture; and the video acquisition module sets a corresponding second video acquisition area for the third picture to obtain a second video corresponding to the third picture.
32. The system of claim 31, wherein the video capture module is configured to configure the first or second video capture area to have a size less than or equal to a size of the corresponding first or third picture.
33. The system of claim 31, wherein the video capture module sets the same capture frame rate for the first and second video capture zones.
34. The system of claim 31, wherein the composition module comprises:
a video stitching unit configured to stitch the first video and the second video to obtain an overall background video; and
a synthesizing unit configured to synthesize the second picture on the entire background video screen.
CN202010746126.9A 2020-07-29 2020-07-29 Dynamic picture creating method and system Pending CN112019769A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010746126.9A CN112019769A (en) 2020-07-29 2020-07-29 Dynamic picture creating method and system


Publications (1)

Publication Number Publication Date
CN112019769A (en) 2020-12-01

Family

ID=73498620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010746126.9A Pending CN112019769A (en) 2020-07-29 2020-07-29 Dynamic picture creating method and system

Country Status (1)

Country Link
CN (1) CN112019769A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010081411A (en) * 2000-02-14 2001-08-29 김효근 Method and apparatus for generating digital moving pictures
CN101324963A (en) * 2008-07-24 2008-12-17 上海交通大学 Fluid video synthetic method based on static image
US20100271365A1 (en) * 2009-03-01 2010-10-28 Facecake Marketing Technologies, Inc. Image Transformation Systems and Methods
US20130279885A1 (en) * 2010-12-28 2013-10-24 Toshiaki Nakagawa Pseudo-video creation device, pseudo-video creation method, and pseudo-video creation program
CN103813106A (en) * 2012-11-12 2014-05-21 索尼公司 Image Processing Device, Image Processing Method And Program
CN104618572A (en) * 2014-12-19 2015-05-13 广东欧珀移动通信有限公司 Photographing method and device for terminal
CN106339158A (en) * 2016-08-17 2017-01-18 东方网力科技股份有限公司 Dynamic display method and device for static images based on large data
CN109361880A (en) * 2018-11-30 2019-02-19 三星电子(中国)研发中心 A kind of method and system showing the corresponding dynamic picture of static images or video
CN110717962A (en) * 2019-10-18 2020-01-21 厦门美图之家科技有限公司 Dynamic photo generation method and device, photographing equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20201201)