WO2022063093A1 - Method, apparatus, device, and medium for generating a text-mode video - Google Patents
- Publication number
- WO2022063093A1 (PCT/CN2021/119438)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- generating
- user
- text
- information sharing
- Prior art date
Classifications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
- G11B27/036—Insert-editing
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/34—Indicating arrangements
- H04N21/234336—Reformatting operations by media transcoding, e.g. video transformed into a slideshow of still pictures or audio converted into text
- H04N21/4312—Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/47205—End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
- H04N21/8126—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
- H04N21/8153—Monomedia components comprising still images, e.g. texture, background image
- H04N21/854—Content authoring
- H04N21/2743—Video hosting of uploaded data from client
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
Definitions
- Various implementations of the present disclosure relate to the field of computers, and in particular, to a method, apparatus, device, and computer storage medium for generating a text-mode video.
- a method for generating a text-mode video in an information sharing application is provided.
- a request for generating a video is received from a user of an information sharing application.
- an initial page for generating a video is displayed, and the initial page includes a prompt for entering text.
- text input from the user is obtained.
- an apparatus for generating a text-mode video in an information sharing application includes: a receiving module configured to receive a request for generating a video from a user of the information sharing application; a display module configured to display, in the information sharing application, an initial page for generating a video, the initial page including a prompt for entering text; an obtaining module configured to obtain textual input from the user in response to detecting the user's touch in the area where the initial page is located; and a generating module configured to generate a video based on the textual input for publication in the information sharing application.
- an electronic device comprises: a memory and a processor; wherein the memory is for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method according to the first aspect of the present disclosure.
- a computer-readable storage medium having stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method according to the first aspect of the present disclosure.
- a user can directly generate a corresponding video based on text input inside the information sharing application. In this way, the complexity of the user's operation can be reduced, and richer published content can be provided to the user.
- FIG. 1 schematically shows a block diagram of an application environment according to an exemplary implementation of the present disclosure
- FIG. 2 schematically illustrates a block diagram of a user interface for generating text-mode video according to an exemplary implementation of the present disclosure
- FIG. 3 schematically shows a flowchart of a method for generating a text-mode video according to an exemplary implementation of the present disclosure
- FIG. 4 schematically illustrates a block diagram of a user interface for entering text according to an exemplary implementation of the present disclosure
- Figure 5 schematically illustrates a block diagram of a user interface for selecting a video background according to an exemplary implementation of the present disclosure
- FIG. 6 schematically illustrates a block diagram of a user interface for editing video according to an exemplary implementation of the present disclosure
- FIG. 7 schematically illustrates a block diagram of a user interface for previewing a video according to an exemplary implementation of the present disclosure
- FIG. 8 schematically shows a block diagram of an apparatus for generating a text-mode video according to an exemplary implementation of the present disclosure.
- FIG. 9 illustrates a block diagram of a computing device capable of implementing various implementations of the present disclosure.
- the term “comprising” and the like should be understood as open-ended inclusion, i.e., “including but not limited to”.
- the term “based on” should be understood as “based at least in part on”.
- the terms “one implementation” or “the implementation” should be understood to mean “at least one implementation”.
- the terms “first”, “second”, etc. may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
- FIG. 1 schematically shows a block diagram of an application environment 100 according to an exemplary implementation of the present disclosure.
- a user may view and/or post a video through application 110 .
- the application 110 can push the video 120 to the user, and the user can find and watch videos they like by searching, scrolling, paging, and the like.
- the user can press the "publish" button 130 to publish the video.
- a variety of video publishing modes have been developed. For example, users can publish videos by taking photos, shooting in segments, taking snapshots, uploading from albums, and so on. Each user can choose their preferred way to publish a video. Some users may wish to publish a video produced from text: for example, a user may enter a greeting such as "Happy Mid-Autumn Festival" or "Happy Birthday", and a corresponding video is generated for publication.
- a method for generating a text-mode video in an information sharing application is proposed.
- a request for generating a video may be received from a user of an information sharing application, and a method for generating a text-mode video may then be initiated.
- FIG. 2 schematically shows a block diagram of a user interface 200 for generating text-mode video according to an exemplary implementation of the present disclosure.
- the user interface 200 shown in FIG. 2 may be entered.
- the user may select the text mode 220 in the menu at the bottom of the user interface 200 in order to initiate a generation method according to an exemplary implementation of the present disclosure.
- an initial page 210 for generating a video may be displayed.
- the initial page 210 may include a prompt for entering text: "Tap to enter text".
- the user can enter corresponding text in the initial page 210 .
- the user may perform a touch operation in the area where the initial page 210 is located in order to initiate the process of entering text.
- the application 110 obtains textual input from the user and generates a video including the textual input for publication.
- the page layout shown in FIG. 2 is only schematic, and other page layouts may be adopted according to the exemplary implementation of the present disclosure, as long as the method according to the exemplary implementation of the present disclosure can be implemented.
- the user does not need to call the video editing application separately, but can directly generate the corresponding video based on the text input inside the information sharing application. In this way, the complexity of the user's operation can be reduced, errors that may be caused by the user during switching between multiple applications can be avoided, and the user can be provided with richer published content.
- FIG. 3 schematically illustrates a flowchart of a method 300 for generating a text-mode video according to an exemplary implementation of the present disclosure.
- a request to generate a video is received from a user of the information sharing application.
- a user may swipe the menu at the bottom of the user interface 200 as shown in FIG. 2 and select the text mode 220 from a variety of video modes.
- an initial page 210 for generating a video is displayed, where the initial page 210 includes prompts for entering text.
- the input prompt may be displayed at a prominent position in the initial page. The user can enter the desired text according to the prompts, for example, the user can invoke an input dialog by touching any blank area in the initial page 210 to enter text.
- a textual input from the user is obtained.
- the user may touch any blank area in the initial page 210 in order to enter text, more details of which will be described below with reference to FIG. 4 .
- FIG. 4 schematically illustrates a block diagram of a user interface 400 for entering text in accordance with an exemplary implementation of the present disclosure.
- an input box 410 may pop up for receiving text input.
- the user may enter the plain text content "Happy Mid-Autumn Festival".
- the text input may include text and emojis.
- the user can also input an emoticon such as "smiley".
- the emoticons herein may be emoticons drawn by an operating system on the mobile terminal, and each emoticon may have a unique code.
- the emoji images drawn can vary from operating system to operating system. For example, in the "smiley faces" drawn by the two operating systems, the degree to which the corners of the mouth are raised can vary.
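As a general illustration (not part of the disclosure), the unique code behind each emoticon can be shown with Unicode code points in Python; the image rendered for the same code point is left to the operating system:

```python
def emoji_code(ch: str) -> str:
    """Return the standard U+XXXX notation for a single-code-point emoji."""
    return f"U+{ord(ch):04X}"

# The code is the same on every operating system, even though the drawn
# "smiley" (e.g. how far the corners of the mouth are raised) differs.
print(emoji_code("😀"))  # U+1F600
```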
- a video is generated based on the textual input for publication in the information sharing application. Having obtained the textual input, a video including the textual input can be generated for publication.
- text input is the most basic element for generating a video
- a default length video may be generated based on a default video background.
- the application 110 may select a moonlit background based on the content of the text and generate a video including the text "Happy Mid-Autumn Festival".
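The selection of a background from the text content could be sketched, for illustration only, as a keyword lookup; the keyword table and function names below are assumptions, since the disclosure does not specify the selection logic:

```python
# Hypothetical keyword table: the disclosure only says the background can
# follow from the text content, not how the mapping is implemented.
DEFAULT_BACKGROUNDS = {
    "Mid-Autumn": "moonlit_night",
    "Birthday": "birthday_cake",
}

def pick_background(text: str, fallback: str = "plain") -> str:
    """Choose a video background by scanning the text for known keywords."""
    for keyword, background in DEFAULT_BACKGROUNDS.items():
        if keyword in text:
            return background
    return fallback
```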
- the initial page 210 may include more options. In the following, returning to FIG. 2, more details about the initial page 210 are described.
- the initial page 210 may further include an option 234 for selecting a video background.
- the user can click on option 234 to select the desired video background, for example, one or more of images, videos, emoticons, and emoticon animations can be selected as the background.
- the video may be generated based on the video background selected by the user. Assuming the user selects an image of a mooncake, the background of the resulting video will include the mooncake pattern.
- the image position, the number of images, the motion trajectory of the image, and the like can be further specified. More details regarding video backgrounds are described with reference to FIG. 5 , which schematically illustrates a block diagram of a user interface 500 for selecting a video background in accordance with an exemplary implementation of the present disclosure.
- the user can choose a mooncake image as the background, and can specify to include 3 images randomly distributed in the video. At this point, images 510, 520, and 530 will be included in the resulting video. Further, the image can be specified to move in a certain direction.
- motion trajectories may be predefined, e.g., straight lines, curves, etc.; alternatively and/or additionally, motion trajectories may be randomly generated.
- additional rules may be set: for example, it may be specified that in the case of displaying multiple images, collisions between the images should be avoided; for another example, it may be specified to change the direction of movement when the image reaches the display boundary, etc.
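The boundary rule described above (changing the direction of movement when an image reaches the display boundary) can be sketched as a per-frame position update; the function and parameter names are illustrative:

```python
def step(pos, vel, width, height):
    """Advance an image by one frame, reversing a velocity component
    when the image reaches a boundary of the display area."""
    x, y = pos[0] + vel[0], pos[1] + vel[1]
    vx, vy = vel
    if x < 0 or x > width:
        vx, x = -vx, max(0, min(x, width))  # bounce off a vertical edge
    if y < 0 or y > height:
        vy, y = -vy, max(0, min(y, height))  # bounce off a horizontal edge
    return (x, y), (vx, vy)

pos, vel = step((95, 50), (10, 0), 100, 100)  # hits the right edge and reverses
```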
- a video may be selected as a background; a portion of the video within a certain time period may be specified for use (e.g., by specifying the start time and end time of the period), a certain region in the video may be selected (for example, by specifying a portion of the window to use), and so on.
- an emoticon or an emoticon animation may be selected as a video background.
- the initial page 210 may further include a speaking option 230 for speaking a text input.
- the user can activate or deactivate the automatic reading function by clicking the operation.
- the application 110 can automatically read aloud the text entered by the user based on artificial intelligence technology, and generate a video based on the resulting audio.
- the generated video may include audio read aloud; alternatively and/or additionally, the generated video may include both textual and audio content.
- the reading options may further include at least any one of the following: gender, age, voice style, and speaking rate of the speaker.
- various sound styles can be provided to meet the needs of different users.
- sound styles may include, but are not limited to: rich, sweet, lively, and the like. Users can choose a high, medium or low speaking rate, supporting personalized settings for the reading effect.
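For illustration, the reading options (gender, age, voice style, speaking rate) might be bundled as a settings object; the field names and default values below are assumptions, not the application's actual API:

```python
from dataclasses import dataclass

@dataclass
class SpeakingOptions:
    """Illustrative bundle of read-aloud options; all fields are assumed names."""
    enabled: bool = True
    gender: str = "female"   # gender of the speaker
    age: str = "adult"       # age of the speaker
    style: str = "sweet"     # e.g. "rich", "sweet", "lively"
    rate: str = "medium"     # "high", "medium" or "low" speaking rate

# a user picking a lively voice at a high speaking rate
opts = SpeakingOptions(style="lively", rate="high")
```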
- the user can cancel the read aloud option, and the generated video only includes text content at this time.
- a user can be provided with various materials for generating a video, thereby providing a richer media representation.
- the content of the initial page 210 has been described above with reference to the accompanying drawings.
- the user can make settings in the initial page 210 to define various parameters for generating the video.
- the "next" button 232 may be clicked to display the edit page.
- FIG. 6, schematically shows a block diagram of a user interface 600 for editing a video according to an exemplary implementation of the present disclosure.
- the user may operate in the editing page 610, and the application 110 may generate a corresponding video based on the user's operations on the editing page 610.
- the edit page 610 may include at least any of the following: an option 620 for editing speaking settings, an option 622 for editing text input, and an option 624 for editing a video background.
- the user can enable or disable the automatic reading function via option 620; the user can edit the text that has been entered via option 622, and can set the font, font size, color, display position, etc. of the text; and the user can edit an already selected background, reselect a background, or add a new background via option 624.
- the user may press the "Next" button 640 to generate a corresponding video based on the edited options specified by the user in the edit page 610 .
- the edit page 610 may provide the user with the function of modifying various parameters. In this way, the user is provided with an opportunity to modify the previous settings when they are not satisfied, thereby facilitating the user's operation and generating a satisfactory video.
- the editing page 610 may further include an option 630 for selecting to add background sound to the video.
- the background sound here may include background music and/or other sounds such as character narration.
- the user may press option 630 to select background music or other audio for the video.
- the user may enter a narration, eg, the user may read a poem about the Mid-Autumn Festival, and the like.
- the application 110 may generate a corresponding video based on the background sound specified by the user operation.
- a user can be allowed to add more diverse sound files to a video, so as to generate richer video content.
- the edit page 610 may further include an option 632 for selecting to add stickers to the video.
- the stickers here can include text stickers as well as image stickers.
- Text stickers can include text, such as common phrases in various artistic fonts, etc.
- Image stickers can include icons, common expressions, and frames.
- the user may press option 632 to insert stickers into the video, e.g., the user may insert the text sticker "family reunion" and the image sticker "heart", etc. Further, the user can adjust the position, size and orientation of a sticker by touching, dragging, rotating, zooming and so on.
- the application 110 may generate a corresponding video based on the sticker specified by the user operation.
- a user may be allowed to add more personality elements to a video. In this way, the video may be more interesting and provide a richer media representation.
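The drag/rotate/zoom adjustments of a sticker amount to mapping the sticker's points into the video frame with a similarity transform; a minimal sketch, with an assumed representation:

```python
import math

def place_sticker(point, center, scale, angle_deg):
    """Map a point of a sticker (relative to the sticker's own origin)
    into the video frame after zooming, rotating and dragging."""
    a = math.radians(angle_deg)
    x, y = point[0] * scale, point[1] * scale   # zoom
    rx = x * math.cos(a) - y * math.sin(a)      # rotate
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + center[0], ry + center[1])     # drag to the chosen position
```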
- the editing page 610 may further include: an option for specifying the length of the video.
- the video may have a default length of, for example, 3 seconds (or other value).
- users can customize the length of the video.
- the user may be allowed to further set the matching relationship between the background sound (or video) and the length of the video.
- a sound (or video) segment that matches the length of the video can be cut from the background sound (or video). If the length specified by the user is greater than the length of the background sound (or video), the user can set loop playback.
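The cut-or-loop rule for matching the background sound to the video length can be sketched as follows; the segment representation is an assumption for illustration:

```python
def fit_background(sound_len, video_len, loop=True):
    """Return (start, end) playback segments of a background sound that
    together cover a video of video_len seconds."""
    if sound_len >= video_len:
        return [(0.0, float(video_len))]   # cut a segment matching the video
    if not loop:
        return [(0.0, float(sound_len))]   # play once, remainder is silent
    segments, covered = [], 0.0
    while covered < video_len:             # loop playback until covered
        chunk = min(float(sound_len), video_len - covered)
        segments.append((0.0, chunk))
        covered += chunk
    return segments
```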
- the length of the generated video may be set based on the length of the background sound (or video).
- a corresponding video may be generated based on the length specified by the user operation.
- the user is allowed to adjust more parameters of video generation, thereby facilitating the user to generate a satisfactory video work.
- the editing page 610 may further include an option for specifying an animation mode for at least any one of text input and video background.
- the animation mode here can include various display modes of text input and video background.
- an animation mode for text input can specify that the text input be displayed in a gradient (fading) manner, along a motion track, and so on.
- the animation mode for the video background may specify the manner in which the background is displayed.
- the animation mode can specify the display area, number, display method (stretching or tiled display), display track, etc. of the image.
- the animation mode can specify that a portion of the video is used as the background for the generated video, can specify the relationship between the video background and the resolution of the generated video, and so on.
- when the video background is an emoticon (or an emoticon animation), the number of emoticons included in the generated video, the display position of the emoticons, their motion trajectories, and so on can be specified.
- FIG. 7 schematically illustrates a block diagram of a user interface 700 for previewing a video, according to an exemplary implementation of the present disclosure.
- the text input will move in the direction indicated by arrow 720, reappear in the upper part of the display area after moving out of the lower part of the display area, and so on.
- the three images 510, 512 and 514 can each move in a randomly selected straight-line direction. For example, image 512 may move in direction 710 and change direction upon reaching a boundary of the display area.
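The boundary-bouncing motion described for image 512 can be sketched as a per-frame position update (illustrative only; the coordinate conventions and function name are assumptions):

```python
def step(pos, vel, bounds):
    """Advance an image one frame; reflect the velocity when the image
    reaches a boundary of the display area (pos/vel are (x, y) tuples,
    bounds is (width, height))."""
    x, y = pos[0] + vel[0], pos[1] + vel[1]
    vx, vy = vel
    if not 0 <= x <= bounds[0]:
        vx = -vx                       # bounce off a vertical boundary
        x = max(0, min(x, bounds[0]))  # clamp inside the display area
    if not 0 <= y <= bounds[1]:
        vy = -vy                       # bounce off a horizontal boundary
        y = max(0, min(y, bounds[1]))
    return (x, y), (vx, vy)
```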
- a predetermined default animation mode can be provided, in which the user does not have to select parameters related to animation display one by one, but can directly select a static background image to generate a dynamic video.
- a default animation mode for background images may specify that 3 images are displayed and that the images bounce through the video. In this case, when the user selects the moon cake pattern, the generated video will include a bouncing animation of 3 moon cake patterns.
- another default animation mode may specify that 1 image is displayed and that the image rotates in the video. In this case, the resulting video will include a spinning animation of the moon cake pattern.
- the default animation mode for text entry may specify that the text entry is displayed at the center of the video.
- dynamic video pictures can be generated based on static text input. In this way, a richer visual expression can be provided to the user, thereby satisfying the needs of different users.
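The predetermined default animation modes described above could be modeled as simple configuration presets (a sketch; the field names and the two presets are assumptions mirroring the examples above):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AnimationMode:
    """Display parameters for a background-image animation."""
    image_count: int  # how many copies of the image to display
    motion: str       # e.g. "bounce" or "rotate"

# Hypothetical presets mirroring the defaults described above:
# three bouncing copies, or a single rotating copy.
DEFAULT_MODES = {
    "bounce3": AnimationMode(image_count=3, motion="bounce"),
    "rotate1": AnimationMode(image_count=1, motion="rotate"),
}

def resolve_mode(user_choice: Optional[str]) -> AnimationMode:
    """Fall back to a predetermined default when the user picks nothing."""
    return DEFAULT_MODES[user_choice or "bounce3"]
```

With such presets, a user who selects only a static background image still obtains a dynamic video without setting each animation parameter individually.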
- the video is published in the information sharing application.
- the "next" button 640 may be pressed in order to generate a video.
- the video herein may be a video file in any of various formats supported by the application 110.
- text-mode videos can be generated and published in a single application. Compared with the prior art solution of switching between a video editing application and an information sharing application, the above-described method can generate and publish a video in a simpler and more efficient manner without switching applications.
- the code of the emoji may be stored in association with the video. It will be understood that emoji may be drawn differently on terminal devices that adopt different operating systems. Suppose the user inputs the emoji "smiley" and the code of that emoji is "001"; then the code "001" can be stored, and the emoji is rendered according to the operating system of the terminal device that plays the video, rather than adding a rendered emoji image directly to the video content.
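The code-based emoji storage described above can be sketched as follows (illustrative; the code table, platform names, and glyphs are assumptions, not values from the disclosure):

```python
# Hypothetical emoji-code table; the actual codes are defined by the
# information sharing application, not by this sketch.
EMOJI_CODES = {"001": "smiley"}

# Per-platform rendering of the same logical emoji (illustrative glyphs).
PLATFORM_GLYPHS = {
    "android": {"smiley": "\U0001F603"},
    "ios": {"smiley": "\U0001F604"},
}

def store_with_video(video: dict, emoji_code: str) -> None:
    """Store the emoji code alongside the video instead of a rendered image."""
    video.setdefault("emoji_codes", []).append(emoji_code)

def render_emoji(emoji_code: str, platform: str) -> str:
    """At playback time, resolve the code on the viewer's device."""
    return PLATFORM_GLYPHS[platform][EMOJI_CODES[emoji_code]]
```

Because only the code travels with the video, each terminal device can display the emoji in its own native style.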
- FIG. 8 schematically shows a block diagram of an apparatus 800 for generating a text-mode video according to an exemplary implementation of the present disclosure.
- the apparatus 800 includes: a receiving module 810, configured to receive a request for generating a video from a user of an information sharing application; a display module 820, configured to display, in the information sharing application, an initial page for generating the video, the initial page including a prompt for entering text; an obtaining module 830, configured to obtain text input from the user in response to detecting the user's touch in the area where the initial page is located; and a generating module 840, configured to generate, based on the text input, a video for publishing in the information sharing application.
- the initial page further includes an option for selecting a video background; and the generating module 840 is further configured to: in response to receiving the video background selected by the user, generate the video based on the video background, the video background including at least any one of the following: an image, a video, an emoji, and an emoji animation.
- the initial page further includes a read-aloud option for reading the text input aloud; and the generating module 840 is further configured to: in response to receiving the user's selection of the read-aloud option, generate the video based on audio of the text input being read aloud.
- the read-aloud options include at least any one of the following: the reader's gender, age, voice style, and speaking rate.
- the generating module 840 is further configured to generate the video based on the text input in response to receiving the user's de-selection of the read-aloud option.
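The read-aloud options and the de-selection behavior above could be modeled as a small configuration passed to a text-to-speech backend (a sketch; the field names and the stubbed synthesis call are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReadAloudOptions:
    """Voice parameters the user may choose, per the options above."""
    gender: str = "female"
    age: str = "adult"
    style: str = "neutral"
    rate: float = 1.0  # speaking-rate multiplier

def build_audio(text: str, options: Optional[ReadAloudOptions]):
    """Return synthesized audio for the text, or None when the read-aloud
    option has been de-selected (the video is then generated without it)."""
    if options is None:
        return None
    # Placeholder for a real text-to-speech call; returns a stub string.
    return f"tts({text!r}, voice={options.gender}/{options.style}, rate={options.rate})"
```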
- the generating module 840 includes: an editing page display module, configured to display, in the information sharing application, an editing page for generating the video in response to detecting that the user confirms the initial page; and the generating module 840 further includes: a video generation module, configured to generate the video based on the user's operation on the editing page.
- the editing page includes: an option for editing at least any one of the text input, the video background, and the read-aloud option; and the video generation module is further configured to: generate the video based on the edited options specified by the user operation.
- the editing page includes: an option for selecting to add background sound to the video; and the video generation module is further configured to: generate the video based on the background sound specified by the user operation.
- the editing page includes: an option for selecting to add a sticker to the video; and the video generation module is further configured to: generate the video based on the sticker specified by the user operation, the sticker including a text sticker and an image sticker.
- the editing page includes: an option for specifying a length of the video; and the video generating module is further configured to: generate the video based on the length specified by the user operation.
- the editing page includes: an option for specifying an animation mode of at least any one of the text input and the video background; and the video generation module is further configured to: generate the video based on the animation mode specified by the user operation or a predetermined animation mode.
- the text input includes an emoji, and the generating module 840 includes an emoji storage module configured to store a code of the emoji in association with the video, for displaying the emoji corresponding to the code according to the type of terminal device used to play the video.
- the apparatus 800 further includes: a publishing module configured to publish the video in the information sharing application in response to a request from the user for publishing the video.
- the units included in the apparatus 800 may be implemented in various manners, including software, hardware, firmware, or any combination thereof.
- one or more units may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium.
- some or all of the units in apparatus 800 may be implemented, at least in part, by one or more hardware logic components.
- exemplary types of hardware logic components include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
- FIG. 9 shows a block diagram of a computing device/server 900 in which one or more implementations of the present disclosure may be implemented. It should be understood that the computing device/server 900 shown in FIG. 9 is merely exemplary and should not constitute any limitation on the functionality and scope of the implementations described herein.
- computing device/server 900 is in the form of a general purpose computing device.
- Components of computing device/server 900 may include, but are not limited to, one or more processors or processing units 910, memory 920, storage devices 930, one or more communication units 940, one or more input devices 950, and one or more output devices 960.
- the processing unit 910 may be an actual or virtual processor and can perform various processes according to programs stored in the memory 920 . In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of the computing device/server 900 .
- Computing device/server 900 typically includes multiple computer storage media. Such media can be any available media that can be accessed by computing device/server 900, including but not limited to volatile and nonvolatile media, removable and non-removable media.
- Memory 920 may be volatile memory (e.g., registers, cache, random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory), or some combination thereof.
- Storage device 930 may be removable or non-removable media, and may include machine-readable media, such as flash drives, magnetic disks, or any other media that can store information and/or data (e.g., training data) and can be accessed within computing device/server 900.
- Computing device/server 900 may further include additional removable/non-removable, volatile/non-volatile storage media.
- a disk drive for reading from or writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk")
- a CD-ROM drive for reading from or writing to a CD-ROM
- each drive may be connected to a bus (not shown) by one or more data media interfaces.
- Memory 920 may include a computer program product 925 having one or more program modules configured to perform various methods or actions of various implementations of the present disclosure.
- the communication unit 940 enables communication with other computing devices through a communication medium. Additionally, the functions of the components of computing device/server 900 may be implemented in a single computing cluster or multiple computing machines capable of communicating over a communication connection. Thus, computing device/server 900 may operate in a networked environment using logical connections to one or more other servers, network personal computers (PCs), or another network node.
- Input device 950 may be one or more input devices, such as a mouse, keyboard, trackball, and the like.
- Output device 960 may be one or more output devices, such as a display, speakers, printer, and the like.
- the computing device/server 900 may also communicate, as needed through the communication unit 940, with one or more external devices (not shown), such as storage devices and display devices, with one or more devices that enable the user to interact with the computing device/server 900, or with any device (e.g., a network card, a modem, etc.) that enables the computing device/server 900 to communicate with one or more other computing devices. Such communication may be performed via an input/output (I/O) interface (not shown).
- a computer-readable storage medium having stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method described above.
- These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
- These computer-readable program instructions may also be stored in a computer-readable storage medium. These instructions cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
- Computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment so as to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or actions, or can be implemented in a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Computer Graphics (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims (16)
- A method for generating a text-mode video in an information sharing application, comprising: receiving, from a user of the information sharing application, a request for generating the video; displaying, in the information sharing application, an initial page for generating the video, the initial page including a prompt for entering text; in response to detecting a touch of the user in an area where the initial page is located, obtaining a text input from the user; and generating the video based on the text input, for publishing in the information sharing application.
- The method of claim 1, wherein the initial page further includes an option for selecting a video background; and generating the video further comprises: in response to receiving the video background selected by the user, generating the video based on the video background, the video background including at least any one of the following: an image, a video, an emoji, and an emoji animation.
- The method of claim 1, wherein the initial page further includes a read-aloud option for reading the text input aloud; and generating the video further comprises: in response to receiving the user's selection of the read-aloud option, generating the video based on audio of the text input being read aloud.
- The method of claim 3, wherein the read-aloud option includes at least any one of the following: the reader's gender, age, voice style, and speaking rate.
- The method of claim 3, wherein generating the video further comprises: in response to receiving the user's de-selection of the read-aloud option, generating the video based on the text input.
- The method of claim 1, wherein generating the video comprises: in response to detecting that the user confirms the initial page, displaying, in the information sharing application, an editing page for generating the video; and generating the video based on a user operation of the user on the editing page.
- The method of claim 6, wherein the editing page includes: an option for editing at least any one of the text input, the video background, and the read-aloud option; and generating the video based on the user operation comprises: generating the video based on the edited options specified by the user operation.
- The method of claim 6, wherein the editing page includes: an option for selecting to add a background sound to the video; and generating the video based on the user operation comprises: generating the video based on the background sound specified by the user operation.
- The method of claim 6, wherein the editing page includes: an option for selecting to add a sticker to the video; and generating the video based on the user operation comprises: generating the video based on the sticker specified by the user operation, the sticker including a text sticker and an image sticker.
- The method of claim 6, wherein the editing page includes: an option for specifying a length of the video; and generating the video based on the user operation comprises: generating the video based on the length specified by the user operation.
- The method of claim 6, wherein the editing page includes: an option for specifying an animation mode of at least any one of the text input and the video background; and generating the video based on the user operation comprises: generating the video based on the animation mode specified by the user operation or a predetermined animation mode.
- The method of claim 1, wherein the text input includes an emoji, and generating the video comprises: storing a code of the emoji in association with the video, for displaying the emoji corresponding to the code according to the type of terminal device used to play the video.
- The method of claim 1, further comprising: in response to a request from the user for publishing the video, publishing the video in the information sharing application.
- An apparatus for generating a text-mode video in an information sharing application, comprising: a receiving module, configured to receive, from a user of the information sharing application, a request for generating the video; a display module, configured to display, in the information sharing application, an initial page for generating the video, the initial page including a prompt for entering text; an obtaining module, configured to obtain a text input from the user in response to detecting the user's touch in an area where the initial page is located; and a generating module, configured to generate the video based on the text input, for publishing in the information sharing application.
- An electronic device, comprising: a memory and a processor, wherein the memory stores one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method of any one of claims 1 to 13.
- A computer-readable storage medium, having stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method of any one of claims 1 to 13.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237002186A KR102613143B1 (ko) | 2020-09-25 | 2021-09-18 | 텍스트-비디오 생성 방법, 장치, 설비 및 매체 |
EP21871463.2A EP4171047A4 (en) | 2020-09-25 | 2021-09-18 | METHOD AND APPARATUS FOR GENERATING TEXT VIDEO, DEVICE AND MEDIUM |
JP2023506273A JP7450112B2 (ja) | 2020-09-25 | 2021-09-18 | 文字モードでビデオを生成する方法、装置、機器、および媒体 |
US18/087,566 US11922975B2 (en) | 2020-09-25 | 2022-12-22 | Method, apparatus, device and medium for generating video in text mode |
US18/429,190 US20240170026A1 (en) | 2020-09-25 | 2024-01-31 | Method, apparatus, device and medium for generating video in text mode |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011027603.2A CN112153475B (zh) | 2020-09-25 | 2020-09-25 | 用于生成文字模式的视频的方法、装置、设备和介质 |
CN202011027603.2 | 2020-09-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/087,566 Continuation US11922975B2 (en) | 2020-09-25 | 2022-12-22 | Method, apparatus, device and medium for generating video in text mode |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022063093A1 true WO2022063093A1 (zh) | 2022-03-31 |
Family
ID=73897580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/119438 WO2022063093A1 (zh) | 2020-09-25 | 2021-09-18 | 用于生成文字模式的视频的方法、装置、设备和介质 |
Country Status (6)
Country | Link |
---|---|
US (2) | US11922975B2 (zh) |
EP (1) | EP4171047A4 (zh) |
JP (1) | JP7450112B2 (zh) |
KR (1) | KR102613143B1 (zh) |
CN (1) | CN112153475B (zh) |
WO (1) | WO2022063093A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112153475B (zh) | 2020-09-25 | 2022-08-05 | 北京字跳网络技术有限公司 | 用于生成文字模式的视频的方法、装置、设备和介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109215655A (zh) * | 2018-10-30 | 2019-01-15 | 维沃移动通信有限公司 | 视频中添加文本的方法和移动终端 |
CN110134920A (zh) * | 2018-02-02 | 2019-08-16 | 中兴通讯股份有限公司 | 绘文字兼容显示方法、装置、终端及计算机可读存储介质 |
WO2020150693A1 (en) * | 2019-01-18 | 2020-07-23 | Snap Inc. | Systems and methods for generating personalized videos with customized text messages |
CN112153475A (zh) * | 2020-09-25 | 2020-12-29 | 北京字跳网络技术有限公司 | 用于生成文字模式的视频的方法、装置、设备和介质 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8744239B2 (en) | 2010-08-06 | 2014-06-03 | Apple Inc. | Teleprompter tool for voice-over tool |
US8701020B1 (en) * | 2011-02-01 | 2014-04-15 | Google Inc. | Text chat overlay for video chat |
US20130294746A1 (en) * | 2012-05-01 | 2013-11-07 | Wochit, Inc. | System and method of generating multimedia content |
US20160173960A1 (en) | 2014-01-31 | 2016-06-16 | EyeGroove, Inc. | Methods and systems for generating audiovisual media items |
US10623747B2 (en) | 2014-06-20 | 2020-04-14 | Hfi Innovation Inc. | Method of palette predictor signaling for video coding |
US20160234494A1 (en) | 2015-02-10 | 2016-08-11 | Qualcomm Incorporated | Restriction on palette block size in video coding |
US20170098324A1 (en) * | 2015-10-05 | 2017-04-06 | Vitthal Srinivasan | Method and system for automatically converting input text into animated video |
WO2017218901A1 (en) * | 2016-06-17 | 2017-12-21 | Infields, Llc | Application for enhancing metadata tag uses for social interaction |
EP3538644B1 (en) * | 2016-11-10 | 2021-12-29 | Becton, Dickinson and Company | Timeline system for monitoring a culture media protocol |
GB2555838A (en) | 2016-11-11 | 2018-05-16 | Sony Corp | An apparatus, computer program and method |
CN107145564A (zh) * | 2017-05-03 | 2017-09-08 | 福建中金在线信息科技有限公司 | 一种信息发布方法及装置 |
KR20180125237A (ko) * | 2017-05-15 | 2018-11-23 | 한경훈 | 모바일 단말기의 이모티콘 입력방법, 그 방법을 위한 소프트웨어를 저장하는 소프트웨어 분배 서버 |
KR101950674B1 (ko) * | 2017-05-26 | 2019-05-17 | (주)거노코퍼레이션 | 동영상 편집 방법을 컴퓨터에서 수행하기 위한 앱이 기록된 컴퓨터 |
CN110062269A (zh) * | 2018-01-18 | 2019-07-26 | 腾讯科技(深圳)有限公司 | 附加对象显示方法、装置及计算机设备 |
CN109120866B (zh) * | 2018-09-27 | 2020-04-03 | 腾讯科技(深圳)有限公司 | 动态表情生成方法、装置、计算机可读存储介质和计算机设备 |
JP2020053026A (ja) | 2019-07-24 | 2020-04-02 | 株式会社ドワンゴ | サーバシステム、アプリケーションプログラム配信サーバ、閲覧用端末、コンテンツ閲覧方法、アプリケーションプログラム、配信方法、アプリケーションプログラム配信方法 |
-
2020
- 2020-09-25 CN CN202011027603.2A patent/CN112153475B/zh active Active
-
2021
- 2021-09-18 EP EP21871463.2A patent/EP4171047A4/en active Pending
- 2021-09-18 JP JP2023506273A patent/JP7450112B2/ja active Active
- 2021-09-18 KR KR1020237002186A patent/KR102613143B1/ko active IP Right Grant
- 2021-09-18 WO PCT/CN2021/119438 patent/WO2022063093A1/zh unknown
-
2022
- 2022-12-22 US US18/087,566 patent/US11922975B2/en active Active
-
2024
- 2024-01-31 US US18/429,190 patent/US20240170026A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110134920A (zh) * | 2018-02-02 | 2019-08-16 | 中兴通讯股份有限公司 | 绘文字兼容显示方法、装置、终端及计算机可读存储介质 |
CN109215655A (zh) * | 2018-10-30 | 2019-01-15 | 维沃移动通信有限公司 | 视频中添加文本的方法和移动终端 |
WO2020150693A1 (en) * | 2019-01-18 | 2020-07-23 | Snap Inc. | Systems and methods for generating personalized videos with customized text messages |
CN112153475A (zh) * | 2020-09-25 | 2020-12-29 | 北京字跳网络技术有限公司 | 用于生成文字模式的视频的方法、装置、设备和介质 |
Non-Patent Citations (5)
Title |
---|
ANONYMOUS: "How to Make A Text Video with Rolling Captions on TikTok", 9 December 2019 (2019-12-09), XP055914130, Retrieved from the Internet <URL:http://www.coozhi.com/youxishuma/shouji/123723.html> * |
ANONYMOUS: "How to Make A Text-to-Speech Rotating Video on ZiShuo App", BAIDU JINGYAN, 2 July 2019 (2019-07-02), XP055914128, Retrieved from the Internet <URL:https://jingyan.baidu.com/article/b87fe19ec678271218356881.html> * |
ANONYMOUS: "Learn How to Make A Text Video on TikTok in 1 Minute", ZHIHU, 11 August 2020 (2020-08-11), XP055914132, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/180383015> * |
See also references of EP4171047A4 * |
YUDAN JUNSHOU TECHNOLOGIES: "It's that simple! Generate Douyin explosion text video in 3 seconds", 31 July 2018 (2018-07-31), CN, pages 1 - 3, XP009535824, Retrieved from the Internet <URL:https://www.sohu.com/a/244381326_100067544> * |
Also Published As
Publication number | Publication date |
---|---|
US20230130806A1 (en) | 2023-04-27 |
JP2023534757A (ja) | 2023-08-10 |
JP7450112B2 (ja) | 2024-03-14 |
KR20230023804A (ko) | 2023-02-17 |
EP4171047A4 (en) | 2023-11-29 |
CN112153475B (zh) | 2022-08-05 |
CN112153475A (zh) | 2020-12-29 |
US20240170026A1 (en) | 2024-05-23 |
KR102613143B1 (ko) | 2023-12-13 |
EP4171047A1 (en) | 2023-04-26 |
US11922975B2 (en) | 2024-03-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102490421B1 (ko) | 터치 감응형 이차 디스플레이에서 사용자 인터페이스 제어부들을 동적으로 제공하기 위한 시스템들, 디바이스들, 및 방법들 | |
JP5752708B2 (ja) | 電子テキスト処理及び表示 | |
US8559732B2 (en) | Image foreground extraction using a presentation application | |
TWI653545B (zh) | 用於即時手寫辨識之方法、系統及非暫時性電腦可讀媒體 | |
US20240107127A1 (en) | Video display method and apparatus, video processing method, apparatus, and system, device, and medium | |
US20160358367A1 (en) | Animation based on Content Presentation Structures | |
US20060253783A1 (en) | Story template structures associated with story enhancing content and rules | |
TWI478043B (zh) | 行動裝置應用頁面樣版之產生系統、方法及其記錄媒體 | |
US20230129847A1 (en) | Method, apparatus and device for issuing and replying to multimedia content | |
US20220093132A1 (en) | Method for acquiring video and electronic device | |
US20160267700A1 (en) | Generating Motion Data Stories | |
US20140164371A1 (en) | Extraction of media portions in association with correlated input | |
US11178356B2 (en) | Media message creation with automatic titling | |
US20240170026A1 (en) | Method, apparatus, device and medium for generating video in text mode | |
CN112040142B (zh) | 用于移动终端上的视频创作的方法 | |
US20140163956A1 (en) | Message composition of media portions in association with correlated text | |
WO2024082981A1 (zh) | 用于特效交互的方法、装置、设备和存储介质 | |
US20230282240A1 (en) | Media Editing Using Storyboard Templates | |
KR20130027412A (ko) | 개인 비디오를 제작하는데 사용되는 편집시스템 | |
WO2023246467A1 (zh) | 用于视频推荐的方法、装置、设备和存储介质 | |
WO2023217122A1 (zh) | 视频剪辑模板搜索方法、装置、电子设备及存储介质 | |
CN115580749A (zh) | 展示方法、装置及可读存储介质 | |
TWM437485U (en) | Editing system for producing personalized audio/video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21871463 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20237002186 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021871463 Country of ref document: EP Effective date: 20230119 |
|
ENP | Entry into the national phase |
Ref document number: 2023506273 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |