CN117615084B - Video synthesis method and computer readable storage medium - Google Patents

Video synthesis method and computer readable storage medium Download PDF

Info

Publication number
CN117615084B
CN117615084B (Application CN202410085542.7A)
Authority
CN
China
Prior art keywords
video
template
target
basic element
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410085542.7A
Other languages
Chinese (zh)
Other versions
CN117615084A (en)
Inventor
张文和
郑瑞慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Aizhao Feida Imaging Technology Co ltd
Original Assignee
Nanjing Aizhao Feida Imaging Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Aizhao Feida Imaging Technology Co ltd filed Critical Nanjing Aizhao Feida Imaging Technology Co ltd
Priority to CN202410085542.7A priority Critical patent/CN117615084B/en
Publication of CN117615084A publication Critical patent/CN117615084A/en
Application granted granted Critical
Publication of CN117615084B publication Critical patent/CN117615084B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The invention discloses a video synthesis method and a computer readable storage medium. The method comprises: pre-making basic elements and storing them in designated positions; setting a video template based on a template setting plug-in to generate a video synthesis template; and synthesizing a video according to the current video synthesis template and a plurality of target video materials in the target video positions selected by that template. In this method the video synthesis template is assembled from the prefabricated basic elements according to their configuration; once a template is selected, target videos can be synthesized against it, and the various special effects and elements can be enabled, disabled or swapped at any time, which is convenient and quick. The video synthesis template is stored as a visual synthesis instruction file that can be edited directly as a document without lengthy training, and the changes take effect the next time the template is executed.

Description

Video synthesis method and computer readable storage medium
Technical Field
The invention relates to the technical field of video synthesis, in particular to a video synthesis method and a computer readable storage medium.
Background
General-purpose editing software is complex, requires training, and is not easy to modify. Neither learning such editing software nor using templated video composition software makes large volumes of fast video composition practical, so both are ill-suited to scenic-spot snapshot scenarios, where a finished video often has to be delivered to a tourist within one minute or less and footage of many thousands of tourists may need to be captured and synthesized every day. Conventional editing software therefore struggles to achieve efficient, continuous, automatic composition.
Disclosure of Invention
The invention aims at overcoming the defects in the prior art and provides a video synthesis method and a computer readable storage medium.
To achieve the above object, in a first aspect, the present invention provides a method for video synthesis, including:
pre-manufacturing basic elements and storing the basic elements in specified positions;
performing video template setting based on a template setting plug-in, wherein a template setting table is provided on the plug-in and contains a basic element selection switch, a basic element selection list, a basic element length setting bar and pull button, a camera video selection switch, a camera video position selection list, and a camera video length setting bar and pull button; the basic element selection switch lets the user choose whether a basic element is added at that position, the basic element selection list lets the user choose among the prefabricated content options of the basic elements, the basic element length setting bar and pull button set the length and the interception start and end positions of the selected basic element, the camera video selection switch lets the user choose whether a target video is added at that position, the camera video position selection list lets the user choose the directory position of the target video, and the camera video length setting bar and pull button set the length and the interception start and end positions of the target video;
generating a video composition template based on the on/off condition of the basic element selection switch and the camera video selection switch, the basic element selected by the basic element selection list, the basic element length configured by the basic element length setting bar and the pull button, the target video selected by the camera video position selection list, the length of the target video configured by the camera video length setting bar and the pull button, and the interception start and end positions;
and synthesizing a video according to the current video synthesis template and a plurality of target videos in the selected target video positions.
Further, the video data in the selected video position in the camera video position selection list is scanned regularly, and if new target video data exists, video synthesis is automatically performed.
Further, a face recognition selection switch is also provided on the template setting table. When it is on, if two or more target videos are set by the video synthesis template, a plurality of image frames containing faces are extracted from each target video, and the extracted frames are analyzed and compared using face recognition technology to judge whether the target videos contain the same person; target videos containing the same person are then synthesized into one video according to the video synthesis template.
Furthermore, the template setting table is also provided with a random element switch, and when the random element switch is in an on state, a prefabricated content of the basic element can be randomly selected from the basic element selection list to serve as the content of the synthesized video.
Furthermore, each basic element and camera video on the template setting table is also provided with an original-sound muting intensity setting button for the source video audio: the volume value is increased with the plus (+) button and decreased with the minus (-) button, a volume value of 0 represents complete muting, and a volume value of 9 represents maximum volume.
Further, the generated video synthesis template is stored in a visual synthesis instruction file, and the file name of the synthesis instruction file is presented on the template setting table for selection.
Further, the basic elements are divided into vertical (portrait) basic elements and horizontal (landscape) basic elements so as to form a vertical video synthesis template and a horizontal video synthesis template respectively; a list for selecting between the vertical and horizontal video synthesis templates is provided on the template setting table, and this list has the function of analyzing the landscape/portrait characteristics of a target video so as to automatically call the corresponding vertical or horizontal video synthesis template.
Further, the basic elements include a head video, a tail video, a transition effect, music, a watermark, and special effects processing.
Further, video synthesis templates of the same orientation are classified and stored, and the corresponding video synthesis template is called by AI according to the characteristics of the material to carry out video synthesis.
In a second aspect, the invention provides a computer readable storage medium storing a computer program which when executed by a processor is adapted to carry out the method described above.
The beneficial effects are that: the basic elements are prefabricated and a video synthesis template is assembled from them through switch-style template configuration; once a video synthesis template has been selected and the automatic synthesis mode is started to scan for target videos, target videos can be synthesized according to the template, and the various special effects and elements can be selected or toggled at any time to modify the template, which is convenient and quick. The video synthesis template is stored as a visual synthesis instruction file that can be edited directly as a document without lengthy training, and the changes take effect the next time the template is executed. All operations are local and no cloud computing service needs to be called over a network, so the method can be used in places without a network or with poor connectivity. In an era of widespread demand for vlog-style short videos, a convenient editing template can be prepared in advance, and mobile synthesis equipment becomes a very convenient and quick tool for scenic spots to provide vlog short videos.
Drawings
FIG. 1 is an interface schematic of the template setting plug-in according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of selecting a film head on the template setting plug-in;
FIG. 3 is a schematic diagram of selecting the video composition template orientation on the template setting plug-in;
FIG. 4 is a schematic diagram of the special-effects options on the template setting plug-in;
FIG. 5 is a schematic diagram of a detailed effect within the special-effects option on the template setting plug-in;
FIG. 6 is a flow diagram of the detailed effect of automatically changing one video segment into multiple segments in the composite effect;
FIG. 7 is a schematic diagram of a directory setting on the template setting plug-in.
Detailed Description
The invention will be further illustrated by the following drawings and specific examples, which are carried out on the basis of the technical solutions of the invention. It should be understood that these examples are only intended to illustrate the invention and are not intended to limit its scope.
The embodiment of the invention provides a video synthesis method, which comprises the following steps:
The basic elements are prefabricated and stored in the designated positions. The basic elements comprise a head video, a tail video, transition effects, music, watermarks, special-effects processing and the like; corresponding photos can also be selected as basic elements for synthesizing the video.
As shown in FIGS. 1 to 7, video template setting is performed based on a template setting plug-in. A template setting table is provided on the plug-in and contains a basic element selection switch, a basic element selection list, a basic element length setting bar and pull button, a camera video selection switch, a camera video position selection list, and a camera video length setting bar and pull button. The basic element selection switch lets the user choose whether a basic element is added at that position; the basic element selection list lets the user choose the content of the basic element; the basic element length setting bar and pull button set the length and the interception start and end positions of the selected basic element; the camera video selection switch lets the user choose whether a target video is added at that position; the camera video position selection list lets the user choose the directory position where the target video is located; and the camera video length setting bar and pull button set the length and the interception start and end positions of the target video. The basic element selection switch and the camera video selection switch are preferably implemented through a "none" option in the basic element and camera video/transition effect selection lists, where selecting "none" is equivalent to "off". Each basic element and camera video on the template setting table is also provided with an original-sound muting intensity setting button for the source video audio: the volume value is increased with the plus (+) button and decreased with the minus (-) button, a volume value of 0 represents complete muting, and a volume value of 9 represents maximum volume.
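By way of non-limiting illustration, one row of the template setting table described above could be modelled as a small data structure. The following Python sketch is an assumption for explanatory purposes only; the class and field names are not taken from the patented implementation, and mapping the 0-9 volume value to a linear gain factor is one possible interpretation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TemplateSlot:
        """One row of the template setting table (illustrative, not the patented code)."""
        enabled: bool                     # basic element / camera video selection switch
        kind: str                         # "basic_element" or "camera_video"
        source: str                       # prefabricated element file or camera directory
        length: float                     # configured clip length in seconds
        trim_start: float = 0.0           # interception start position
        trim_end: Optional[float] = None  # interception end position (None = start + length)
        volume: int = 9                   # original-sound intensity: 0 = fully muted, 9 = maximum

        def gain(self) -> float:
            # Map the 0-9 volume value to a linear gain factor for audio mixing.
            return max(0, min(self.volume, 9)) / 9.0

    # Example: the uppermost camera slot of FIG. 1, trimmed to 7.4 s with full original sound.
    camera1 = TemplateSlot(True, "camera_video", "C:/camera_In1", length=7.4)
    print(camera1.gain())  # 1.0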
A video composition template is generated based on the on/off state of the basic element selection switches and the camera video selection switches, the basic elements selected by the basic element selection lists, the basic element lengths configured by the basic element length setting bars and pull buttons, the target videos selected by the camera video position selection lists, and the lengths and interception start and end positions of the target videos configured by the camera video length setting bars and pull buttons. When a basic element selection switch or a camera video selection switch is on, special-effects processing can be applied to the basic element video or the target video: a special-effects length bar and pull button can intercept and copy a segment of video, within the length range set for lens material such as the head, tail, transition or camera footage, to serve as the source video for the added special effect, and the processed result can be inserted into or laid over the original video. The special-effects options shown in FIG. 4 can be selected in various ways, and the drop-down options leave room for future extension. Depending on the selection made in the basic element selection list, the screen displays the relevant special-effects options as required. Taking the composite special effect as an example, it offers options such as zoom in/out, speed change, watermark templates and freeze frame; the watermark template option can pull down to select a watermark template video to be composited once with the intercepted and copied special-effects source video, for example to add a dynamic watermark. The freeze-frame option provides a setting bar with which a position range can be selected within the intercepted and copied special-effects source video to create a freeze-frame effect, i.e. the video picture is held still during that period so that the viewer can see it clearly, or a prompt can be added to emphasize the video. Further, the composite special effect shown in FIG. 5 can automatically generate a composite of two or three video segments from a single segment, which is inserted into or laid over the original video; the originally short video thus gains added segments, special-effects processing, matching transitions and source-video switching, finally yielding a richer visual effect, raising the value of the video and promoting consumption. The specific processing flow for turning one video segment into multiple segments is shown in FIG. 6.
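As an informal illustration of the "intercept, copy and re-process" step described above, the following Python sketch calls the ffmpeg command-line tool (assumed to be installed) to cut a segment, change its speed and produce a freeze frame. The filters and file names are illustrative assumptions and do not reproduce the exact processing chain of FIGS. 4 to 6.

    import subprocess

    def cut_segment(src: str, start: float, length: float, dst: str) -> None:
        # Intercept and copy a segment of the source video as the special-effect source.
        subprocess.run(["ffmpeg", "-y", "-ss", str(start), "-t", str(length),
                        "-i", src, "-c", "copy", dst], check=True)

    def speed_change(src: str, factor: float, dst: str) -> None:
        # Play the copied segment at `factor` times speed (video only; audio dropped).
        subprocess.run(["ffmpeg", "-y", "-i", src,
                        "-vf", f"setpts={1.0 / factor}*PTS", "-an", dst], check=True)

    def freeze_frame(src: str, at: float, hold: float, dst: str) -> None:
        # Hold the frame at `at` seconds for `hold` seconds, as a stop-motion emphasis.
        subprocess.run(["ffmpeg", "-y", "-ss", str(at), "-i", src,
                        "-frames:v", "1", "freeze.png"], check=True)
        subprocess.run(["ffmpeg", "-y", "-loop", "1", "-i", "freeze.png",
                        "-t", str(hold), "-pix_fmt", "yuv420p", dst], check=True)

    cut_segment("cam1.mp4", 2.0, 3.0, "effect_src.mp4")            # copy 3 s starting at 2 s
    speed_change("effect_src.mp4", 2.0, "effect_fast.mp4")         # 2x speed-up variant
    freeze_frame("effect_src.mp4", 1.5, 2.0, "effect_freeze.mp4")  # 2 s freeze frame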
A video is synthesized according to the current video synthesis template and the target videos in the selected target video positions. Referring specifically to FIG. 1, which illustrates a template setting table of the present application: the table includes a film head selection switch with its corresponding film head selection list, film head length setting bar and pull position button; preferably four camera video selection switches with their corresponding camera video position selection lists, camera video length setting bars and pull position buttons; three transition selection switches with their corresponding transition selection lists, transition length setting bars and pull position buttons; and a film tail selection switch with its corresponding film tail selection list and film tail length setting bar. It further includes the transition effect selection lists corresponding to the film head, film tail and camera videos, the original-sound muting intensity settings, a photo source material selection switch, special-effects selection buttons, and the like. The four camera video selection switches are arranged from top to bottom between the film head selection switch and the film tail selection switch, and the three transition selection switches are interposed between adjacent camera video selection switches. With all four camera video selection switches turned on, four different positions, or the same target video position, can be selected. The video at each position may be captured by its own video acquisition device and then transferred, but this is not a limitation, and the number of camera video selection switches can be changed according to field requirements. The video synthesized by the video synthesis template shown in FIG. 1 consists of an 11.1-second film head, a 7.4-second target video set by the uppermost camera and a 5.1-second film tail, plus the partial overlap effect of the transitions between them, giving a video length of 23.7 seconds.
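A minimal assembly sketch, under the assumption that every clip has already been transcoded to identical codecs and resolution, is shown below; it concatenates the head, camera clips, transitions and tail in template order using the ffmpeg concat demuxer and does not reproduce the partial transition overlap described above. File names are illustrative.

    import os
    import subprocess
    import tempfile

    def concat(clips: list[str], dst: str) -> None:
        # Write a concat list file and join the clips without re-encoding.
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            for clip in clips:
                f.write(f"file '{os.path.abspath(clip)}'\n")
            list_path = f.name
        subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                        "-i", list_path, "-c", "copy", dst], check=True)
        os.remove(list_path)

    # Order mirrors the FIG. 1 example: head, camera videos and transitions, tail.
    concat(["head.mp4", "cam1.mp4", "transition1.mp4", "cam2.mp4", "tail.mp4"],
           "finished/output.mp4")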
When automatic video composition is performed, the video data in the video positions selected in the camera video position selection lists are scanned periodically, and if new target video data exist, video composition is automatically carried out in first-in first-out order. Thus, after an operator has chosen the video synthesis template to use, video collection can begin: guest lens fragments are collected in sequence along the tour route and transferred into the corresponding camera video positions, meeting the template setting requirements. That is, whenever every camera video position has a compliant video input, a video is automatically synthesized, as shown in FIG. 7, and stored in the corresponding finished-product directory, with no need to select target videos manually one by one for synthesis. The video data in the selected camera video positions may be imported from an external device or transmitted remotely over a network.
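The periodic scan and first-in first-out behaviour could be realised, for example, with a simple polling loop such as the following Python sketch; the directory names, the five-second poll interval and the synthesize() placeholder are assumptions, since the patent only states that the positions are scanned regularly.

    import time
    from pathlib import Path

    WATCH_DIRS = [Path("C:/camera_In1"), Path("C:/camera_In2")]  # selected camera video positions
    seen: set[Path] = set()

    def synthesize(video: Path) -> None:
        # Placeholder for composing the clip with the currently selected template.
        print(f"composing {video} into the finished-product directory")

    def poll_once() -> None:
        for directory in WATCH_DIRS:
            # Oldest files first, so guests are processed in arrival (FIFO) order.
            for video in sorted(directory.glob("*.mp4"), key=lambda p: p.stat().st_mtime):
                if video not in seen:
                    seen.add(video)
                    synthesize(video)

    while True:
        poll_once()
        time.sleep(5)  # poll interval is an assumption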
A face recognition selection switch is also provided on the template setting table. When it is on, if two or more target videos are set by the video synthesis template, a plurality of image frames containing faces are extracted from each target video, and the extracted frames are analyzed using face recognition technology to judge whether the target videos contain the same person; two or more target videos containing the same person are then synthesized into one video according to the video synthesis template. It should be noted that the case of two or more target videos covers two situations: two or more camera video selection switches are turned on, or the target video position corresponding to a single turned-on camera video selection switch contains several target videos of the same guest. In both cases the matching target videos are found through face recognition and then synthesized into one video. The facial features of each target video can be analyzed in advance to generate a corresponding face feature file, so that subsequent comparisons only need to compare the feature files, with the closest match against a threshold used to decide whether the videos show the same person; each piece of video data then only needs to be analyzed once while being compared many times.
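One possible realisation of the face feature file and same-person comparison, sketched with the open-source OpenCV and face_recognition packages (an assumption, since the patent names no library), is shown below: a single 128-dimensional encoding is extracted per clip, so each video is analyzed once and only the encodings are compared afterwards.

    import cv2
    import face_recognition
    import numpy as np

    def face_encoding_of(video_path: str, sample_every: int = 30):
        # Sample frames and return the first face encoding found (the "face feature"), or None.
        cap = cv2.VideoCapture(video_path)
        index = 0
        encoding = None
        while encoding is None:
            ok, frame = cap.read()
            if not ok:
                break
            if index % sample_every == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                found = face_recognition.face_encodings(rgb)
                if found:
                    encoding = found[0]
            index += 1
        cap.release()
        return encoding

    def same_person(video_a: str, video_b: str, threshold: float = 0.6) -> bool:
        # Treat two clips as the same guest when the face distance falls below the threshold.
        enc_a, enc_b = face_encoding_of(video_a), face_encoding_of(video_b)
        if enc_a is None or enc_b is None:
            return False
        return float(np.linalg.norm(enc_a - enc_b)) < threshold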
The template setting table can further be provided with a random element switch. When it is on, if the video synthesis template sets two or more transition effects for the transition videos, a transition effect can be selected randomly and automatically, increasing variation in the videos and reducing the sameness of template-based synthesis.
It is also preferable that the generated video synthesis template is saved as a visual synthesis instruction file, and that the file name of the synthesis instruction file is presented on the template setting table for selection. When an operator wants to select or change a video synthesis template, it can be chosen quickly from the template setting table, or the visual synthesis instruction file can be modified directly with text editing software and imported into the template setting table the next time it is used. Specifically, the synthesis instruction file can be directly modified and imported by the user, can be written in English or Chinese, and can be edited with the system's "Notepad" software; an illustrative parsing sketch is given after the listing below. A preferred Chinese-language synthesis instruction file is, for example, as follows:
template: vertical template-1, vertical, random/0, face recognition/1;
music: "D:/MUSIC/Zhongshang MUSIC speed 1.MP3", length/30.0, intensity/0.6, fade/fade;
video: "D:/HEAD/vertical header template.mp4", length/8.01, fade/fade;
input: "C:/camera_In1", face recognition/1, length/3.0, mute/0, fade/fade;
video: "D:/TRANSITION/hills for 8 seconds.mp4", length/8.01, fade/loop, mute/0;
input: "C:/camera_In2", face recognition/1, length/5.0, mute/0, fade/fade;
video: "D:/TRANSITION/hills for 10 seconds.mp4", length/10, fade/loop, mute/0.1;
input: "C:/camera_In3", face recognition/1, length/4.0, mute/0, fade/fade;
video: "D:/TAIL/vertical tail template.mp4", length/8.01, fade/loop, mute/0.4.
The basic elements are divided into vertical basic elements and horizontal basic elements so as to form a vertical target video synthesis template and a horizontal target video synthesis template respectively, and a list for selecting between the vertical and horizontal video synthesis templates is provided on the template setting table. Whether a video is landscape or portrait can be determined from the resolution of the video data; for example, 1920x1080 is landscape video data and 1080x1920 is portrait video data. According to the orientation of the video data, a horizontal or vertical template can be adopted automatically. During automatic video synthesis, the type of video synthesis template can be selected automatically according to the type of target video: when the target video is detected to be landscape, the horizontal template is used for synthesis, and when it is detected to be portrait, the vertical video synthesis template is used. The user therefore only needs to collect landscape or portrait video material freely and feed it into the camera video positions before starting detection and synthesis, saving the time of recognition and template selection. In addition, video synthesis templates of the same orientation can be stored in several classified sets, and AI can be invoked to choose among them and synthesize according to the characteristics of the material, for example: the light at a scenic spot differs through the day and can be divided into morning, midday, dusk, night, and so on. The characteristics of the input guest material can be evaluated by AI to select the corresponding template material for synthesis. This concept also covers seasons (spring, summer, autumn and winter), different attractions within a scenic spot (such as roller coasters, log flumes or dodgems in an amusement park, which may need different matching head, tail and music pieces), and personal characteristics (male/female, adult/child, etc.). With basic element video content classified in advance, different templates, or different basic element video contents within a template, can be selected automatically during synthesis based on the AI recognition and analysis results for the target videos in the camera video positions.
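As a simple illustration of the resolution-based orientation check (an implementation assumption, not the patented code), the following Python sketch reads the frame size with OpenCV and chooses between the horizontal and vertical templates:

    import cv2

    def pick_template(video_path: str) -> str:
        cap = cv2.VideoCapture(video_path)
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        cap.release()
        # e.g. 1920x1080 -> horizontal (landscape) template, 1080x1920 -> vertical (portrait) template
        return "horizontal_template" if width >= height else "vertical_template"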
Based on the above embodiments, it will be readily appreciated by those skilled in the art that the present invention also provides a computer readable storage medium storing a computer program for implementing the above method when executed by a processor.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that other parts not specifically described are within the prior art or common general knowledge to a person of ordinary skill in the art. Modifications and alterations may be made without departing from the principles of this invention, and such modifications and alterations should also be considered as being within the scope of the invention.

Claims (7)

1. A method of video composition, comprising:
pre-manufacturing basic elements and storing the basic elements in specified positions;
the method comprises the steps that video template setting is carried out based on a template setting plug-in, a template setting table is arranged on the template setting plug-in, a basic element selection switch, a basic element selection list, a basic element length setting bar and pull button, a camera video selection switch, a camera video position selection list and a camera video length setting bar and pull button are arranged in the template setting table, the basic element selection switch is used for enabling a user to select whether basic elements are added in a synthesized video or not, the basic element selection list is used for enabling the user to select a prefabricated content option of the basic elements, the basic element length setting bar and the pull button are used for setting the length and interception starting and ending positions of the selected basic elements, the camera video selection switch is used for enabling the user to select whether a target video is added in the synthesized video or not, the camera video position selection list is used for enabling the user to select the catalog position of the target video, and the camera video length setting bar and the pull button are used for setting the length and interception starting and ending positions of the target video;
generating a video composition template based on the on/off condition of the basic element selection switch and the camera video selection switch, the basic element selected by the basic element selection list, the basic element length configured by the basic element length setting bar and the pull button, the target video selected by the camera video position selection list, the length of the target video configured by the camera video length setting bar and the pull button, and the interception start and end positions;
synthesizing a video according to the current video synthesis template and a plurality of target videos in a plurality of target video positions selected by the current video synthesis template to serve as the synthesized video;
the template setting table is also provided with a face recognition selection switch, and in the on state, if two or more target videos are set by the video synthesis template, the face recognition selection switch respectively extracts a plurality of image frames with faces from the target videos, and analyzes and compares the extracted image frames based on the face recognition technology to judge whether the two or more target videos contain the same person, and if so, the two or more target videos containing the same person are synthesized into one video according to the video synthesis template;
the basic elements comprise a head video, a tail video, a transition effect, music, a watermark and special effect processing.
2. A method of video composition according to claim 1, wherein video data in selected video locations in the camera video location selection list is scanned periodically and video composition is automatically performed if there is new target video data.
3. The method of video composition according to claim 1, wherein a random element switch is further provided on the template setting table, and wherein the random element switch randomly selects a prefabricated content of the basic element from the basic element selection list as the content of the composite video in an on state.
4. A method of video composition according to claim 1, wherein the video composition template is generated for storage in a visual composition instruction file and the template file name of the composition instruction file is presented on a template setting table for selection.
5. The method according to claim 1, wherein the basic elements are divided into vertical basic elements and horizontal basic elements to form vertical video composition templates and horizontal video composition templates, respectively, the template setting table is provided with a list for selecting between the vertical and horizontal video composition templates, and the list for selecting between the vertical and horizontal video composition templates has a function of analyzing the landscape/portrait characteristics of the target video to automatically call the corresponding vertical or horizontal video composition template.
6. The method of claim 5, wherein video composition templates of the same orientation are classified and stored, and the corresponding video composition template is called by AI according to characteristics to perform video composition.
7. A computer readable storage medium storing a computer program, which when executed by a processor is adapted to carry out the method of any one of claims 1 to 6.
CN202410085542.7A 2024-01-22 2024-01-22 Video synthesis method and computer readable storage medium Active CN117615084B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410085542.7A CN117615084B (en) 2024-01-22 2024-01-22 Video synthesis method and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410085542.7A CN117615084B (en) 2024-01-22 2024-01-22 Video synthesis method and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN117615084A CN117615084A (en) 2024-02-27
CN117615084B true CN117615084B (en) 2024-03-29

Family

ID=89951983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410085542.7A Active CN117615084B (en) 2024-01-22 2024-01-22 Video synthesis method and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117615084B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063611A (en) * 2018-07-19 2018-12-21 北京影谱科技股份有限公司 A kind of face recognition result treating method and apparatus based on video semanteme
CN111654619A (en) * 2020-05-18 2020-09-11 成都市喜爱科技有限公司 Intelligent shooting method and device, server and storage medium
CN116017094A (en) * 2022-12-29 2023-04-25 空间视创(重庆)科技股份有限公司 Short video intelligent generation system and method based on user requirements
CN116055762A (en) * 2022-12-19 2023-05-02 北京百度网讯科技有限公司 Video synthesis method and device, electronic equipment and storage medium
CN117201858A (en) * 2023-09-06 2023-12-08 北京陌陌信息技术有限公司 Video generation method, device and equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8207989B2 (en) * 2008-12-12 2012-06-26 Microsoft Corporation Multi-video synthesis
CN110677734B (en) * 2019-09-30 2023-03-10 北京达佳互联信息技术有限公司 Video synthesis method and device, electronic equipment and storage medium
CN113838490B (en) * 2020-06-24 2022-11-11 华为技术有限公司 Video synthesis method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063611A (en) * 2018-07-19 2018-12-21 北京影谱科技股份有限公司 A kind of face recognition result treating method and apparatus based on video semanteme
CN111654619A (en) * 2020-05-18 2020-09-11 成都市喜爱科技有限公司 Intelligent shooting method and device, server and storage medium
CN116055762A (en) * 2022-12-19 2023-05-02 北京百度网讯科技有限公司 Video synthesis method and device, electronic equipment and storage medium
CN116017094A (en) * 2022-12-29 2023-04-25 空间视创(重庆)科技股份有限公司 Short video intelligent generation system and method based on user requirements
CN117201858A (en) * 2023-09-06 2023-12-08 北京陌陌信息技术有限公司 Video generation method, device and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Foolproof production of promotional videos (傻瓜式制作视频宣传片); Ma Zhen'an (马震安); Computer Fan (《电脑爱好者》); 2017-10-15; full text *

Also Published As

Publication number Publication date
CN117615084A (en) 2024-02-27

Similar Documents

Publication Publication Date Title
US8035656B2 (en) TV screen text capture
EP0788064B1 (en) Motion image indexing and search system
JP4125140B2 (en) Information processing apparatus, information processing method, and program
CN107003720B (en) Scripted digital media message generation
US7904815B2 (en) Content-based dynamic photo-to-video methods and apparatuses
CN108900771B (en) Video processing method and device, terminal equipment and storage medium
KR102193567B1 (en) Electronic Apparatus displaying a plurality of images and image processing method thereof
US20110080424A1 (en) Image processing
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
JP2009237702A (en) Album creating method, program and apparatus
JP2001273505A (en) Visual language classification system
JP2001256335A (en) Conference recording system
CN103839562A (en) Video creation system
US20240121452A1 (en) Video processing method and apparatus, device, and storage medium
CN117615084B (en) Video synthesis method and computer readable storage medium
KR101011194B1 (en) Mobile apparatus with a picture conversion and drawing function and method of the same
JP6276570B2 (en) Image / audio reproduction system, image / audio reproduction method and program
JP2006060652A (en) Digital still camera
KR101843133B1 (en) Apparatus for recording and playing written contents and method for controlling the same
JP4233362B2 (en) Information distribution apparatus, information distribution method, and information distribution program
JP6261198B2 (en) Information processing apparatus, information processing method, and program
WO2012153747A1 (en) Information processing device, information processing method, and information processing program
JP2008090526A (en) Conference information storage device, system, conference information display device, and program
JP2008187256A (en) Motion image creating device, method and program
JP4876736B2 (en) Document camera device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant