WO2023005194A1 - Video generation method and electronic device - Google Patents
Video generation method and electronic device
- Publication number: WO2023005194A1 (application PCT/CN2022/076700)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- action
- storyboard
- preset
- script
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Definitions
- the present disclosure relates to the technical field of video processing, and in particular, to a video generation method and electronic equipment.
- in the related art, film and television production mainly includes the following steps: (1) finding a suitable script; (2) organizing a filming team, performing storyboard design on the script, and then shooting shot by shot; (3) performing post-production (including editing, special effects, etc.) on all shots.
- the disclosure provides a video generation method and electronic equipment.
- the disclosed technical scheme is as follows:
- a method for generating a video including:
- storyboard script information of a target script is acquired;
- target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material, are determined;
- a target video is generated based on the preset action material corresponding to the actor attribute information, the target standard action material corresponding to the action attribute information, and the target character material.
- a video generation device including:
- the storyboard script information acquisition module is configured to execute and acquire the storyboard script information of the target script
- the information determination module is configured to determine the target character material corresponding to the storyboard script information, and the actor attribute information and action attribute information corresponding to the target character material;
- the target video generating module is configured to generate a target video based on the preset action material corresponding to the actor attribute information, the target standard action material corresponding to the action attribute information, and the target character material.
- an electronic device including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps:
- storyboard script information of a target script is acquired;
- target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material, are determined;
- a target video is generated based on the preset action material corresponding to the actor attribute information, the target standard action material corresponding to the action attribute information, and the target character material.
- a non-volatile computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device can perform the following steps:
- a target video is generated based on the preset action material corresponding to the actor attribute information, the target standard action material corresponding to the action attribute information, and the target character material.
- a computer program product including a computer program, and when the computer program is executed by a processor, the following steps are implemented:
- a target video is generated based on the preset action material corresponding to the actor attribute information, the target standard action material corresponding to the action attribute information, and the target character material.
- the technical solutions provided by the embodiments of the present disclosure can generate target videos directly based on the target character material corresponding to the target script, the preset action material corresponding to the actor attribute information, and the target standard action material corresponding to the action attribute information, realizing the decoupling of the video production links of screenwriting and performance; the action materials can be reused, without the need for video shooting for each video production, which greatly improves video production efficiency and effectively reduces video production cost.
- Fig. 1 is a schematic diagram showing an application environment according to an exemplary embodiment
- Fig. 2 is a flowchart of a method for generating a video according to an exemplary embodiment
- Fig. 3 is a schematic diagram of a scene editing page according to an exemplary embodiment
- Fig. 4 is a schematic diagram of a storyboard editing page according to an exemplary embodiment
- Fig. 5 is a schematic diagram of a scene editing page according to an exemplary embodiment
- Fig. 6 is a schematic diagram of a scene editing page according to an exemplary embodiment
- Fig. 7 is a flowchart showing a pre-generated standard action material library according to an exemplary embodiment
- Fig. 8 is a flowchart showing a pre-generated standard action material library according to an exemplary embodiment
- Fig. 9 is a flow chart of generating a target video based on the preset action material corresponding to actor attribute information, the target standard action material corresponding to action attribute information, and target character material, according to an exemplary embodiment
- Fig. 10 is a flow chart of generating storyboard character material based on target standard action material and preset action material according to an exemplary embodiment
- Fig. 11 is a block diagram of a video generation device according to an exemplary embodiment
- Fig. 12 is a block diagram showing an electronic device for video generation according to an exemplary embodiment.
- the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for display, analyzed data, etc.) involved in this disclosure are information and data authorized by the user or fully authorized by all parties.
- FIG. 1 is a schematic diagram of an application environment according to an exemplary embodiment. As shown in FIG. 1, the application environment includes a first terminal 100, a second terminal 200, a third terminal 300, a fourth terminal 400 and a server 500.
- the first terminal 100 is a terminal corresponding to the script creator, and is used to provide script upload service for any user.
- the script creator sends the created script to the server 500 based on the first terminal 100 .
- the second terminal 200 is a terminal corresponding to a user (at least one first preset actor) who shoots a standard action video, and is used to provide any user with an upload service of a standard action video.
- at least one first preset actor sends the standard action video to the server 500 based on the second terminal 200 .
- the third terminal 300 is the terminal corresponding to the second preset actor, and provides action video (non-standard action video) upload service for any user.
- any second preset actor sends preset action videos to the server 500 based on the third terminal 300 .
- the fourth terminal 400 is a terminal corresponding to any director, and is used to provide script-based video creation services for any user.
- the server 500 is a background server of the first terminal 100 , the second terminal 200 , the third terminal 300 , and the fourth terminal 400 .
- at least one first preset actor refers to one first preset actor or more than one first preset actor.
- the above-mentioned first terminal 100, second terminal 200, third terminal 300 and fourth terminal 400 include but are not limited to smartphones, desktop computers, tablet computers, notebook computers, smart speakers, digital assistants, augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) devices, smart wearable devices and other types of electronic devices.
- the above-mentioned electronic devices run software, such as application programs, etc., and the first terminal 100, the second terminal 200, the third terminal 300, and the fourth terminal 400 can implement corresponding operations through the running software.
- the operating system running on the electronic device includes but is not limited to the Android system, the iOS system, Linux, Windows and so on.
- the above-mentioned server 500 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms.
- FIG. 1 is only an application environment provided by the present disclosure; in actual applications, other application environments are also included, for example, the first terminal 100, the second terminal 200, the third terminal 300, and the fourth terminal 400 correspond to different servers respectively.
- the servers corresponding to the first terminal 100, the second terminal 200, and the third terminal 300 send the created script, the standard action video, and the preset action video to the server corresponding to the fourth terminal 400.
- the first terminal 100, the second terminal 200, the third terminal 300, the fourth terminal 400, and the server 500 are directly or indirectly connected through wired or wireless communication, which is not limited in the present disclosure.
- Fig. 2 is a flow chart of a method for generating a video according to an exemplary embodiment. As shown in Fig. 2 , the method for generating a video is executed by a fourth terminal and includes the following steps.
- the target video is generated based on the preset action material corresponding to the actor attribute information, the target standard action material corresponding to the action attribute information, and the target character material.
- the target script is pre-provided by the first terminal, or the target script is provided by other devices, which is not limited in this embodiment of the present disclosure.
- the platform actively pushes the target script to the terminal corresponding to the user who needs to create a video.
- a user such as a director who needs to create a video selects a target script through an active search.
- the fourth terminal displays a storyboard editing page corresponding to the target script, and the storyboard editing page is used to edit and configure the storyboard corresponding to the target script, and then generate a target video corresponding to the target script.
- the storyboard editing page includes a storyboard display area, which is used to display the storyboard picture; the storyboard display area is a blank drawing board in the initial state, and during the process of making the video, the user fills the blank drawing board by adding storyboard materials in the storyboard display area, thereby forming a storyboard picture.
- the storyboard includes information corresponding to each frame in the storyboard, and the information corresponding to each frame includes images, music, lines, and the like.
- the target script corresponds to a plurality of storyboard script information.
- the above-mentioned storyboard editing page also includes a storyboard script display area, which is used to display multiple storyboard script information corresponding to the target script, and a storyboard script information corresponds to a storyboard screen.
- the target script corresponds to multiple storyboard pictures, and the storyboard picture displayed in the storyboard display area at a certain moment is the current storyboard picture.
- the current storyboard is a storyboard corresponding to any one of the storyboard script information among the plurality of storyboard script information.
- a certain piece of storyboard script information is selected by clicking on the area where the storyboard script information is located; correspondingly, the storyboard picture corresponding to the selected storyboard script information is the current storyboard picture.
- before displaying the storyboard editing page corresponding to the target script, the above-mentioned video generation method further includes: displaying a script presentation page, where the script presentation page shows summary information of at least one script; and in response to a script selection instruction triggered based on the script summary information, performing splitting processing on the target script to obtain multiple storyboard script information, where the target script is the script corresponding to the script selection instruction.
- at least one script refers to one script or more than one script.
- the script presentation page displays the script summary information of at least one script that can be selected, and the script summary information is the main information that can describe the script.
- the script summary information includes a script name, an introduction, and the like.
- the user (director) triggers the script selection instruction by clicking on the script summary information of a certain script based on the demand.
- the target script is subjected to splitting processing to obtain multiple storyboard script information, and then the page is redirected to the storyboard editing page.
- the script summary information of at least one selectable script is displayed to the user through the script presentation page, which makes it convenient for the user to intuitively select the target script on demand; after the target script is selected, the target script is split to obtain multiple storyboard script information, which makes it convenient for subsequent users to edit storyboard pictures in combination with each storyboard script information on the storyboard editing page.
- the acquisition of the storyboard script information of the target script includes: acquiring the script content information of the target script; performing semantic recognition on the script content information to obtain a semantic recognition result; and performing splitting processing on the target script based on the semantic recognition result to obtain the storyboard script information.
- semantic recognition is performed on the script content information of the target script, and based on the result of the semantic recognition, the target script is segmented.
- the semantic recognition result is a plurality of script content information corresponding to the recognized semantics, and correspondingly, the script content information corresponding to each semantic is used as the scene script information.
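The semantic splitting described above can be sketched as follows. This is a minimal rule-based stand-in for the semantic recognition step (the patent does not specify a concrete model); here it is assumed, purely for illustration, that a new storyboard segment begins at every line starting with "SCENE".

```python
import re

def split_script_semantically(script_text):
    """Split script content into storyboard script segments.

    A rule-based stand-in for semantic recognition: every scene
    heading (assumed to start with "SCENE") opens a new segment.
    A production system would use a trained semantic model instead.
    """
    segments = []
    current = []
    for line in script_text.splitlines():
        line = line.strip()
        if not line:
            continue
        # a scene heading closes the previous segment, if any
        if re.match(r"^SCENE\b", line) and current:
            segments.append(" ".join(current))
            current = []
        current.append(line)
    if current:
        segments.append(" ".join(current))
    return segments

script = "SCENE 1 A park.\nAnna runs.\nSCENE 2 A street.\nBob salutes."
print(split_script_semantically(script))
# → ['SCENE 1 A park. Anna runs.', 'SCENE 2 A street. Bob salutes.']
```

Each returned segment then serves as one piece of storyboard script information.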
- acquiring the storyboard script information of the target script includes: displaying the script content information of the target script on the script display page; and acquiring a plurality of storyboard script information in response to a storyboard instruction triggered based on the script content information.
- the storyboard splitting processing is performed on the script content information in combination with a preset separator.
- the user inserts the preset separator between two adjacent pieces of storyboard script information; after setting all the preset separators, the storyboard instruction is triggered through a control such as a preset button; correspondingly, after the storyboard instruction is triggered, a plurality of storyboard script information can be extracted based on the preset separators in the script content information.
- the preset separator can be preset in combination with actual applications.
- a storyboard command is triggered to obtain a plurality of storyboard script information.
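The separator-based extraction can be sketched as below; the concrete separator string ("###") is an assumption, since the patent states only that the separator is preset in combination with the actual application.

```python
def split_by_separator(script_content, separator="###"):
    """Extract storyboard script information from script content that a
    user has annotated with a preset separator. The "###" default is
    illustrative; any application-defined separator works the same way."""
    parts = [p.strip() for p in script_content.split(separator)]
    # drop empty fragments produced by leading/trailing separators
    return [p for p in parts if p]

content = "Anna walks into the park.###Anna waves to Bob.###Bob salutes."
print(split_by_separator(content))
# → three storyboard script information entries
```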
- the user determines the target character material and actor attribute information and action attribute information corresponding to the target character material based on the storyboard editing page provided by the fourth terminal.
- the above-mentioned determination of the target character material corresponding to the storyboard script information, and the actor attribute information and action attribute information corresponding to the target character material, includes: in response to a character material addition instruction, displaying, in the storyboard display area, the target character material corresponding to the character material addition instruction, and displaying, on the storyboard editing page, the configuration operation information of the character attribute corresponding to the target character material; and in response to an attribute configuration instruction triggered based on the configuration operation information, displaying the actor attribute information and action attribute information corresponding to the target character material on the storyboard editing page.
- the storyboard editing page further includes a storyboard material display area and a character material configuration area, and the storyboard material display area displays storyboard materials for constructing a storyboard picture.
- the storyboard material includes character material, and correspondingly, at least one character material is displayed in the storyboard material display area; the character material is an image of an object (a person, an animal, etc.) that carries the main storyline of the storyboard picture (the main storyline corresponding to the storyboard script information).
- at least one character material refers to one character material or more than one character material, and the character material is a wireframe outline of the above-mentioned object (role).
- the character materials include but are not limited to young women, young men, child women, child men, middle-aged women, middle-aged men, and the like.
- the character material configuration area is used to display the configuration operation information of the character attribute corresponding to the target character material.
- the configuration operation information is used to trigger the configuration of the character attributes corresponding to the target character material.
- Character attributes include actor attribute information and action attribute information.
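The relationship between a target character material and its character attributes can be sketched as a small data structure. The field names below are illustrative assumptions; the patent requires only that each target character material carry actor attribute information and action attribute information.

```python
from dataclasses import dataclass, field

@dataclass
class CharacterAttributes:
    """Character attributes attached to a target character material."""
    actor: dict = field(default_factory=dict)  # e.g. name, age, clothing
    action: str = ""                           # e.g. an action keyword

@dataclass
class TargetCharacterMaterial:
    role: str        # e.g. "young woman" (a wireframe outline category)
    position: tuple  # position in the storyboard display area
    attributes: CharacterAttributes = field(default_factory=CharacterAttributes)

m = TargetCharacterMaterial(role="young woman", position=(120, 80))
m.attributes.actor = {"name": "Actor A", "age": 25}
m.attributes.action = "salute while running"
print(m.attributes.action)
# → salute while running
```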
- the above-mentioned displaying, in response to the character material addition instruction, of the target character material corresponding to the character material addition instruction in the storyboard display area, and displaying the configuration operation information of the character attribute corresponding to the target character material on the storyboard editing page, includes: displaying the target character material in the storyboard display area in response to a character material addition instruction triggered based on any character material; and displaying the configuration operation information of the character attribute corresponding to the target character material in the character material configuration area.
- the triggered character material is determined as the target character material; the target character material is displayed in the storyboard display area; the configuration operation information of the character attribute corresponding to the target character material is displayed in the character material configuration area; and based on the configuration operation information, the actor attribute information and action attribute information are configured.
- the trigger operation on the character material in the storyboard material display area can trigger the character material addition instruction; the trigger operation is an operation of dragging the character material to the storyboard display area, an operation of clicking the character material, or another operation.
- the fourth terminal determines the target character involved in the storyboard script information based on the storyboard script information, and the user drags the character material corresponding to the target character in the storyboard material display area according to the target character, so that the fourth terminal determines the character material dragged to the storyboard display area as the target character material; or, the user clicks the character material corresponding to the target character in the storyboard material display area, so that the fourth terminal determines the character material clicked by the user as the target character material and displays the target character material in the storyboard display area.
- FIG. 3 is a schematic diagram of a scene editing page according to an exemplary embodiment.
- the area 301 is the display area of the storyboard screen
- the area 302 is the display area of the storyboard material
- the area 303 is the configuration area of the character material.
- Fig. 4 is a schematic diagram of a scene editing page according to an exemplary embodiment.
- the information 401 is configuration operation information of role attributes.
- one storyboard corresponds to one or more target character materials.
- the target character materials are added sequentially, and the process of adding each target character material is similar to the above-mentioned process of adding target character materials, which will not be repeated here.
- the position information of the character material in the storyboard display area is the position information of the character material in subsequent storyboards.
- at least one character material is displayed in the storyboard material display area, and the user can add character material by selecting among the displayed character materials, which can greatly improve the convenience of adding character materials and further improve the editing efficiency of the storyboard picture.
- character material can also be added in other ways.
- the above-mentioned character material addition command is triggered, and then the character material is added.
- the action attribute information is information characterizing the action of the character in the corresponding storyboard.
- the corresponding operation is performed on the configuration operation information to trigger the display of a text input box for the action attribute information on the storyboard editing page; correspondingly, the action attribute information, such as an action keyword, is entered in the text input box.
- FIG. 5 is a schematic diagram of a scene editing page provided according to an exemplary embodiment.
- the corresponding operation is performed on the configuration operation information to trigger the display of a filter box with multiple standard action options on the storyboard editing page.
- At least one standard action option is used to determine the action attribute information.
- at least one standard action option refers to one standard action option or more than one standard action option.
- the multiple standard action options may include but are not limited to action information such as "running", "salute", "look left" and "look right".
- the actor attribute information is the information of the actor playing the role, and the actor attribute information includes but not limited to the actor's name, age, clothing and other information.
- the basic information of selectable actors is displayed on the storyboard edit page for the user to select as needed.
- the basic information of the selectable actors is displayed on the storyboard editing page through a pop-up window.
- FIG. 6 is a schematic diagram of a scene editing page provided according to an exemplary embodiment.
- the preset action material is the action material extracted from the action video of the target actor corresponding to the actor attribute information;
- the target standard action material is the action material, among the standard action materials, that matches the action attribute information;
- the standard action material is the action material extracted from the standard action video of at least one first preset actor.
- the target video is a film and television work, or a short video with a certain storyline, etc.
- the above-mentioned video generation method further includes the step of pre-generating a standard action material library.
- the step of pre-generating a standard action material library includes:
- S701 acquire a standard action video shot by at least one first preset actor under a preset background.
- the skeleton sequence image corresponding to each first preset actor is extracted from the standard action video.
- the skeleton sequence image corresponding to each first preset actor is determined as the standard action material of each first preset actor.
- a standard action material library is generated based on the standard action material of at least one first preset actor.
- the first preset actor is an actor who shoots a standard action video. In some embodiments, when at least one first preset actor is a plurality of first preset actors, the plurality of first preset actors shoot standard action videos corresponding to different standard actions.
- the standard action material extracted from the standard action video is used to correct the non-standard action material (the action material extracted from the action video shot by an actor other than the first preset actor).
- the preset backgrounds include but are not limited to green screens, blue screens, red screens, or backgrounds of other colors suitable for video matting.
- the skeleton sequence image is a plurality of skeleton images corresponding to a certain standard action, and the skeleton image is an image including key parts and key points when the actor performs the corresponding action.
- the skeleton image extraction network is obtained by training a preset neural network based on sample frame images marked in advance with the position information of the key parts and key points of sample actors; the sample frame images are frame images extracted from the action videos shot by the sample actors under the preset background.
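The extraction of a skeleton sequence image from a video can be sketched as below. `estimate_keypoints` is a placeholder for the trained skeleton-image extraction network described above, which is assumed to map a frame to a list of (x, y) key points of key body parts; this is a sketch, not the patent's implementation.

```python
def extract_skeleton_sequence(frames, estimate_keypoints):
    """Extract a skeleton sequence (one keypoint list per frame) from a
    video. `frames` is an iterable of video frames; `estimate_keypoints`
    stands in for the trained skeleton-image extraction network.
    Frames where no actor is detected (empty result) are skipped."""
    sequence = []
    for frame in frames:
        keypoints = estimate_keypoints(frame)
        if keypoints:
            sequence.append(keypoints)
    return sequence

# toy stand-in "network": each "frame" already is its keypoint list
frames = [[(0, 0), (1, 2)], [], [(0, 1), (1, 3)]]
print(extract_skeleton_sequence(frames, lambda f: f))
# → [[(0, 0), (1, 2)], [(0, 1), (1, 3)]]
```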
- generating the standard action material library based on the standard action material of at least one first preset actor includes: establishing a first correspondence, the first correspondence including each standard action material and the corresponding action key text information; and constructing the standard action material library based on the standard action material of at least one first preset actor and the first correspondence.
- the standard action material library is generated by extracting the standard action materials from the standard action videos, so as to ensure that the standard action materials can be reused in the video production process and that the action materials of the selected actors can be corrected, thereby greatly improving video production quality and efficiency.
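The first correspondence (standard action material keyed by action key text) can be sketched as a minimal library structure; storing materials in a plain dictionary is an illustrative simplification.

```python
class StandardActionLibrary:
    """Standard action material library implementing the first
    correspondence: each action key text maps to a standard action
    material (here, a skeleton sequence). Deliberately minimal sketch."""

    def __init__(self):
        self._materials = {}

    def add(self, action_keyword, skeleton_sequence):
        self._materials[action_keyword] = skeleton_sequence

    def get(self, action_keyword):
        # returns None when no material matches the keyword
        return self._materials.get(action_keyword)

lib = StandardActionLibrary()
lib.add("running", [[(0, 0)], [(1, 0)]])
lib.add("salute", [[(0, 1)], [(0, 2)]])
print(lib.get("salute"))
# → [[(0, 1)], [(0, 2)]]
```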
- the above-mentioned video generation method further includes the step of pre-generating a preset action material library.
- the step of pre-generating a preset action material library includes:
- the skeleton sequence image corresponding to each second preset actor is extracted from the action video shot by each second preset actor under the preset background, and the skeleton sequence image corresponding to each second preset actor is determined as the action material of each second preset actor.
- a preset action material library is generated based on action materials of a plurality of second preset actors.
- the second preset actor is a user with acting capability. For the step of extracting the skeleton sequence image corresponding to each second preset actor from the action video of each second preset actor, refer to the above-mentioned step of extracting the skeleton sequence image corresponding to the first preset actor from the standard action video, which will not be repeated here.
- generating the preset action material library includes: establishing a second correspondence, where the second correspondence includes the action materials of the second preset actors and the corresponding actor attribute information; and constructing the above-mentioned preset action material library based on the action materials of the multiple second preset actors and the second correspondence.
- the preset action material library is generated by extracting the action materials from the action videos shot by multiple second preset actors under the preset background, so as to ensure the reuse of the action materials during the video production process, without the need to shoot video for each video production, which greatly improves video production efficiency and effectively reduces video production cost.
- generating the target video includes the following steps:
- the preset action material corresponding to the actor attribute information is determined from the preset action material library.
- the storyboard character material is generated based on the target standard action material and the preset action material.
- a target video is generated according to the storyboard character material and the target character material.
- the preset action material library includes the second correspondence between the action materials of the multiple second preset actors and the actor attribute information of the second preset actors; therefore, based on the second correspondence, the preset action material corresponding to the actor attribute information can be determined.
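The lookup through the second correspondence can be sketched as below. Matching on the actor name alone is an assumption made for illustration; any unique combination of actor attributes would serve as the key.

```python
def find_preset_action_materials(preset_library, actor_attributes):
    """Select preset action materials via the second correspondence:
    each library entry pairs an actor's attribute information with that
    actor's action materials. Name-only matching is illustrative."""
    return [
        entry["materials"]
        for entry in preset_library
        if entry["actor"]["name"] == actor_attributes["name"]
    ]

library = [
    {"actor": {"name": "Actor A", "age": 25}, "materials": ["run_seq", "walk_seq"]},
    {"actor": {"name": "Actor B", "age": 40}, "materials": ["salute_seq"]},
]
print(find_preset_action_materials(library, {"name": "Actor B"}))
# → [['salute_seq']]
```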
- the action attribute information corresponds to one or more standard action materials, and the target standard action material is generated based on the one or more standard action materials
- for example, the action attribute information "salute while running" corresponds to the standard action materials "running" and "salute", so the target standard action material includes the standard action material "running" and the standard action material "salute"
- as another example, the action attribute information "shaking the head left and right" corresponds to the standard action material "look left"; the standard action material "look right" is derived from it by a symmetry operation, and the standard action materials "look left" and "look right" are combined into the target standard action material corresponding to "shaking the head left and right"
- as another example, the action attribute information "brisk walking" corresponds to the standard action material "walking", and the target standard action material corresponding to "brisk walking" is generated by modifying the amplitude and position of the standard action material "walking"
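The three derivations above (concatenating two actions, mirroring by symmetry, and modifying amplitude) can be illustrated on a toy skeleton representation. Modeling a material as a list of frames, each frame a list of (x, y) skeleton points, is a simplifying assumption for this sketch.

```python
# Illustrative derivation of a target standard action material from
# existing standard action materials, as in the examples above.

def combine(*materials):
    """'salute while running': play the materials one after another."""
    frames = []
    for material in materials:
        frames.extend(material)
    return frames

def mirror(material):
    """'look right' derived from 'look left' by left-right symmetry."""
    return [[(-x, y) for (x, y) in frame] for frame in material]

def scale_amplitude(material, factor):
    """'brisk walking' from 'walking' by enlarging joint displacements."""
    return [[(x * factor, y * factor) for (x, y) in frame]
            for frame in material]

look_left = [[(1.0, 2.0)], [(2.0, 2.0)]]   # two frames, one point each
look_right = mirror(look_left)
shake_head = combine(look_left, look_right)
print(len(shake_head))  # 4
```

A real implementation would operate on full skeletal sequence images and mirror about the body's vertical axis rather than x = 0; the structure of the three operations is the point here.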
- each standard action material in the standard action material library corresponds to one piece of action key text information (a keyword)
- based on semantic recognition of the action attribute information, the standard action material corresponding to the action attribute information is determined from the standard action material library
- the preset action materials corresponding to one actor include materials corresponding to a large number of actions
- bone matching is performed on the target standard action material and the preset action material to obtain a bone matching result.
- the above-mentioned skeleton matching process includes, but is not limited to, extracting distance distribution maps of the skeleton points (keypoints of key body parts) in the standard action material and the preset action materials, and selecting the target action material according to the similarity between the distance distribution maps of the skeleton points (the skeleton matching result).
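The distance-distribution comparison can be sketched as follows: for each frame, build the vector of pairwise distances between skeleton points, then score candidate preset materials by how closely their distance maps track the standard material's. The mean-squared-difference score used here is an illustrative choice; the patent only requires some similarity measure between the distance maps.

```python
# Sketch of skeleton matching via pairwise-distance maps.
import math

def distance_map(frame):
    """Pairwise distances between the skeleton points of one frame."""
    return [math.dist(p, q) for i, p in enumerate(frame)
            for q in frame[i + 1:]]

def matching_score(material_a, material_b):
    """Lower is better: mean squared difference of per-frame distance maps."""
    total, count = 0.0, 0
    for fa, fb in zip(material_a, material_b):
        for da, db in zip(distance_map(fa), distance_map(fb)):
            total += (da - db) ** 2
            count += 1
    return total / max(count, 1)

def select_target_action(standard, candidates):
    """Pick the preset action material closest to the standard material."""
    return min(candidates, key=lambda c: matching_score(standard, c))

std = [[(0.0, 0.0), (3.0, 4.0)]]          # one frame, two skeleton points
good = [[(0.0, 0.0), (3.0, 4.0)]]
bad = [[(0.0, 0.0), (6.0, 8.0)]]
print(select_target_action(std, [bad, good]) is good)
```

Distance maps are attractive here because they are invariant to the actor's global position in the frame, so only the pose itself is compared.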
- the above-mentioned action calibration process includes, but is not limited to, performing mesh deformation on the skeleton points in the target action material in combination with a Laplacian algorithm, so as to map them onto the corresponding skeleton points of the standard action material and obtain the storyboard character material.
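As a much simpler stand-in for the Laplacian mesh deformation named above, the calibration idea can be illustrated with a per-frame, per-axis least-squares fit of a uniform scale plus offset that maps the target skeleton points toward the standard skeleton. This is not the patent's method, only a sketch of "map target points onto the standard points".

```python
# Simplified per-frame action calibration: fit scale + offset per axis
# so that the target frame's skeleton points land on the standard frame's.

def calibrate_frame(target_frame, standard_frame):
    """Least-squares scale+offset mapping target points onto standard points."""
    n = len(target_frame)
    per_axis = []
    for axis in (0, 1):  # x, then y
        t = [p[axis] for p in target_frame]
        s = [p[axis] for p in standard_frame]
        mt, ms = sum(t) / n, sum(s) / n
        var = sum((v - mt) ** 2 for v in t)
        scale = (sum((tv - mt) * (sv - ms) for tv, sv in zip(t, s)) / var
                 if var else 1.0)
        per_axis.append([ms + scale * (v - mt) for v in t])
    return list(zip(per_axis[0], per_axis[1]))

calibrated = calibrate_frame([(0, 0), (2, 0)], [(0, 0), (4, 0)])
print(calibrated)  # [(0.0, 0.0), (4.0, 0.0)]
```

A Laplacian deformation would instead preserve local bone-graph relations while moving constrained points, which matters for realistic limb shapes; the fit above only captures the global mapping.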
- by using the skeleton matching result between the target standard action material and the preset action materials, the target action material corresponding to the action attribute information is accurately selected from the large number of action materials of the target actor, and action calibration is performed on the target action material in combination with the target standard action material to obtain the storyboard character material for constructing the storyboard picture, which can effectively improve video production quality.
- the above-mentioned generating of the target video according to the storyboard character material includes: generating a corresponding storyboard picture based on each storyboard character material and the corresponding first display position information, and synthesizing the storyboard pictures according to the timing information corresponding to the multiple storyboard pictures to obtain the target video.
- the target script includes a plurality of pieces of storyboard script information; for each piece of storyboard script information, the storyboard picture corresponding to that storyboard script information is generated based on the storyboard character material and the first display position information
- for the plurality of pieces of storyboard script information, the corresponding plurality of storyboard pictures are generated in a similar manner, and the multiple storyboard pictures are then synthesized, in the order of the storyboard script information in the target script, to obtain the target video
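The final synthesis step above reduces to ordering the storyboard pictures by their position in the target script and concatenating their frames. Modeling a picture as a list of frame labels is an illustrative assumption.

```python
# Sketch of synthesizing the target video from storyboard pictures
# in the order their storyboard script information appears in the script.

def synthesize_target_video(pictures_with_order):
    """pictures_with_order: (script_order_index, frame_list) pairs."""
    ordered = sorted(pictures_with_order, key=lambda item: item[0])
    video = []
    for _, frames in ordered:
        video.extend(frames)
    return video

video = synthesize_target_video([(2, ["shot2_f1"]),
                                 (1, ["shot1_f1", "shot1_f2"])])
print(video)  # ['shot1_f1', 'shot1_f2', 'shot2_f1']
```

In practice the "timing information" would also carry per-picture durations and synchronized audio; only the ordering logic is shown here.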
- the first display position information is the position information of the target character material in the storyboard display area; since positions in the storyboard display area correspond to positions in the storyboard picture to be generated, the first display position information also indicates the position of the target character material in the storyboard picture to be generated.
- the preset action material of the target actor is obtained from the preset action material library in combination with the actor attribute information
- the target standard action material is obtained from the standard action material library in combination with the action attribute information, realizing the reuse of action materials.
- the storyboard materials further include scene materials, and correspondingly, at least one scene material is displayed in the storyboard material display area.
- at least one scene material refers to one scene material or more than one scene material.
- the scene material includes background scene material and foreground scene material. The background scene material may be the overall background image of the storyboard picture, and its size matches the size of the storyboard picture; the foreground scene material is the image or wireframe outline of a scene object required by the plot corresponding to the storyboard, such as an image of a table.
- the above video generation method further includes: determining the target scene material corresponding to the storyboard script information.
- generating the target video based on the storyboard character material includes: generating the target video based on the storyboard character material and the target scene material.
- determining the target scene material corresponding to the storyboard script information includes: in response to a scene material addition instruction triggered based on any scene material, displaying the target scene material corresponding to the scene material addition instruction in the storyboard display area.
- the scene material addition instruction is triggered by dragging the scene material into the storyboard display area, by clicking the scene material, or by another operation.
- one storyboard picture corresponds to one or more scene materials; in some embodiments, there is one background scene image and one or more foreground scene images.
- when multiple target scene materials are needed, the target scene materials are added sequentially.
- in some embodiments, only foreground scene material or only background scene material is added during the process of adding scene materials.
- the above-mentioned generating of the target video based on the storyboard character material and the target scene material includes: generating a corresponding storyboard picture based on each storyboard character material, the first display position information corresponding to each storyboard character material, the target scene material corresponding to each storyboard character material, and the second display position information corresponding to the target scene material; generating, for the multiple pieces of storyboard script information, the corresponding multiple storyboard pictures in a similar manner; and synthesizing the storyboard pictures according to the timing information corresponding to the multiple storyboard pictures to obtain the target video.
- the first display position information corresponding to the storyboard character material refers to the position of the target character material corresponding to that storyboard character material.
- the second display position information includes position information of the foreground scene material in the display area of the storyboard.
- the second display position information includes position information of the background scene material in the display area of the storyboard.
- the background scene material is distributed in the entire display area of the storyboard.
- the position information of the foreground scene material in the storyboard display area is the position information of the foreground scene material in the subsequently generated storyboard picture.
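The layering described above (background filling the canvas, foreground scene objects and the character placed at their display positions) can be illustrated with a tiny character-grid canvas. Representing materials as single glyphs at integer coordinates is purely an illustrative assumption.

```python
# Sketch of assembling one storyboard picture from its materials:
# background everywhere, then placed materials drawn on top.

def compose_storyboard_picture(width, height, background, placed_materials):
    """placed_materials: (glyph, x, y) tuples from display position info."""
    canvas = [[background] * width for _ in range(height)]
    for glyph, x, y in placed_materials:
        if 0 <= x < width and 0 <= y < height:
            canvas[y][x] = glyph
    return ["".join(row) for row in canvas]

picture = compose_storyboard_picture(
    4, 2, ".",
    [("T", 1, 0),   # foreground scene material, e.g. a table
     ("C", 2, 1)])  # storyboard character material
print(picture)  # ['.T..', '..C.']
```

With real image materials the same structure holds: paste the background at full size, then composite foreground and character images at the coordinates given by the second and first display position information.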
- in the above embodiment, scene materials are also added, which greatly enriches the storyboard pictures and further improves video quality.
- the above-mentioned storyboard editing page also includes other configuration areas corresponding to other materials.
- Other materials are materials other than character materials and scene materials in the information required for building a storyboard screen.
- Other materials include, but are not limited to, lines, background music, and the like; correspondingly, the other materials are configured based on configuration instructions triggered in the other configuration areas.
- generating the target video based on the storyboard character material and the target scene material includes: generating the target video based on the storyboard character material, the target scene material, and the other materials.
- generating the target video based on the storyboard character material, the target scene material, and the other materials includes: generating corresponding storyboard pictures based on each storyboard character material, the first display position information, the target scene material, the second display position information, and the other materials, and synthesizing the storyboard pictures according to the timing information corresponding to the multiple storyboard pictures to obtain the target video.
- generating the target video based on the preset action material corresponding to the actor attribute information, the target standard action material corresponding to the action attribute information, and the target character material includes: sending the actor attribute information, the action attribute information, and the target character material to the server, so that the server determines the preset action material from the preset action material library based on the actor attribute information; determines the target standard action material from the standard action material library based on the action attribute information; generates the storyboard character material based on the target standard action material and the preset action material; and generates the target video according to the storyboard character material.
- generating the target video further includes: sending the actor attribute information, the action attribute information, the target character material, the target scene material, and the other materials to the server, so that the server determines the preset action material from the preset action material library based on the actor attribute information; determines the target standard action material from the standard action material library based on the action attribute information; generates the storyboard character material based on the target standard action material and the preset action material; and generates the target video according to the storyboard character material, the target scene material, and the other materials.
- after the server generates the target video, it sends the target video to the fourth terminal so that the user can view it. In some embodiments, after the target video is generated, or after the user views the target video and confirms its release, the target video can be released to the corresponding display platform.
- the technical solutions provided by the embodiments of the present disclosure can generate the target video directly based on the target character material corresponding to the target script, the preset action material corresponding to the actor attribute information, and the target standard action material corresponding to the action attribute information, decoupling the three video production links of screenwriting, performance, and production
- moreover, the action materials can be reused, so no video shooting is needed for each video production, which greatly improves video production efficiency and effectively reduces video production cost.
- Fig. 11 is a block diagram of a video generating device according to an exemplary embodiment. Referring to Fig. 11, the device includes:
- the storyboard script information acquisition module 1110 is configured to acquire the storyboard script information of the target script
- the information determination module 1120 is configured to determine the target character material corresponding to the storyboard script information, and the actor attribute information and action attribute information corresponding to the target character material;
- the target video generation module 1130 is configured to generate the target video based on the preset action material corresponding to the actor attribute information, the target standard action material and the target role material corresponding to the action attribute information.
- the target video generation module 1130 includes:
- the preset action material determination unit is configured to determine the preset action material from the preset action material library based on actor attribute information
- the target standard action material determination unit is configured to determine the target standard action material from the standard action material library based on the action attribute information
- the storyboard character material generation unit is configured to generate storyboard character material based on target standard action materials and preset action materials;
- the target video generating unit is configured to generate the target video based on the storyboard character material and the target character material.
- the storyboard role material generation unit includes:
- the bone matching unit is configured to perform bone matching on the target standard action material and the preset action material to obtain a bone matching result
- a target action material determination unit configured to determine a target action material matching the target standard action material from preset action materials based on the skeleton matching result
- the action calibration unit is configured to perform action calibration on the target action material based on the target standard action material to obtain the split-shot character material.
- the target script includes a plurality of storyboard script information
- the target video generation unit is configured to:
- for each piece of storyboard script information, generate the storyboard picture corresponding to that storyboard script information based on the storyboard character material and the first display position information, where the first display position information indicates the position of the target character material in the storyboard picture to be generated;
- the multiple storyboard pictures corresponding to the plurality of storyboard script information are synthesized to obtain the target video.
- the above-mentioned device also includes:
- the target scene material determination unit is configured to determine the target scene material corresponding to the storyboard script information
- the target video generation unit is further configured to generate the target video based on the storyboard character material and the target scene material.
- the preset action material is the action material extracted from the action video of the target actor, and the target actor is an actor corresponding to the actor attribute information.
- the above-mentioned device also includes:
- An action video acquisition module configured to acquire action videos taken by a plurality of second preset actors under a preset background
- the first skeletal sequence image extraction module is configured to extract the skeletal sequence image corresponding to each second preset actor from the action video of each second preset actor;
- the action material determination module is configured to determine the skeletal sequence image corresponding to each second preset actor as the action material of each second preset actor;
- the preset action material library generation module is configured to generate a preset action material library based on the action materials of a plurality of second preset actors.
- the target standard action material is the action material matching the action attribute information among the standard action materials
- the standard action material is the action material extracted from the standard action video of at least one first preset actor
- the above-mentioned device also includes:
- a standard action video acquisition module configured to acquire a standard action video taken by at least one first preset actor in a preset background
- the second skeletal sequence image extraction module is configured to extract the skeletal sequence image corresponding to each first preset actor from the standard action video;
- the standard action material determination module is configured to determine the skeletal sequence image corresponding to each first preset actor as the standard action material of each first preset actor;
- the standard action material library generating module is configured to generate a standard action material library based on the standard action material of at least one first preset actor.
- the storyboard script information acquisition module 1110 includes:
- a script content information acquisition unit configured to acquire script content information of a target script
- the semantic identification unit is configured to perform semantic identification on script content information to obtain a semantic identification result
- the storyboard processing unit is configured to perform storyboard processing on the target script based on the semantic recognition result to obtain the storyboard script information.
- the storyboard script information acquisition module 1110 is also configured to:
- the storyboard editing page also includes a storyboard material display area, a character material configuration area, and a storyboard picture display area, and the information determination module 1120 is configured to:
- the actor attribute information and the action attribute information are configured.
- Fig. 12 is a block diagram of an electronic device for video generation according to an exemplary embodiment.
- the electronic device may be a terminal, and its internal structure may be as shown in Fig. 12 .
- the electronic device includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus. The processor of the electronic device provides computing and control capabilities.
- the memory of the electronic device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system and computer programs.
- the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
- the network interface of the electronic device is used to communicate with an external terminal through a network connection.
- the display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen
- the input device of the electronic device may be a touch layer covering the display screen, or a button, trackball, or touchpad provided on the housing of the electronic device, or an external keyboard, touchpad, or mouse.
- FIG. 12 is only a block diagram of part of the structure related to the disclosed solution and does not constitute a limitation on the electronic device to which the disclosed solution is applied.
- a specific electronic device may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
- an electronic device including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps:
- the target video is generated based on the preset action material corresponding to the actor attribute information, the target standard action material and the target character material corresponding to the action attribute information.
- the processor is configured to execute instructions to implement the following steps:
- a target video is generated.
- the processor is configured to execute instructions to implement the following steps:
- Skeleton matching is performed on the target standard action material and the preset action material to obtain the bone matching result
- the processor is configured to execute instructions to implement the following steps:
- for each piece of storyboard script information, generate the storyboard picture corresponding to that storyboard script information based on the storyboard character material and the first display position information, where the first display position information indicates the position of the target character material in the storyboard picture to be generated;
- the multiple storyboard pictures corresponding to the plurality of storyboard script information are synthesized to obtain the target video.
- the processor is configured to execute instructions to implement the following steps:
- the target video is generated.
- the preset action material is the action material extracted from the action video of the target actor, the target actor is an actor corresponding to actor attribute information, and the processor is configured to execute instructions to implement the following steps:
- a preset action material library is generated.
- the target standard action material is the action material matching the action attribute information in the standard action material
- the standard action material is the action material extracted from the standard action video of at least one first preset actor
- the processor is configured to execute instructions to implement the following steps:
- a standard action material library is generated based on the standard action material of at least one first preset actor.
- the processor is configured to execute instructions to implement the following steps:
- storyboard processing is performed on the target script to obtain the storyboard script information.
- the processor is configured to execute instructions to implement the following steps:
- the storyboard editing page also includes a storyboard material display area, a character material configuration area, and a storyboard picture display area, and the processor is configured to execute instructions to implement the following steps:
- actor attribute information and action attribute information are configured.
- a non-volatile computer-readable storage medium is also provided; when the instructions in the storage medium are executed by the processor of an electronic device, the electronic device is enabled to execute the video generation method in the embodiments of the present disclosure.
- a computer program product including a computer program, and when the computer program is executed by a processor, the video generation method in the embodiment of the present disclosure is implemented.
- Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM) or external cache memory.
- RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
A video generation method and an electronic device, belonging to the technical field of video processing. The method includes: acquiring storyboard script information of a target script (S201); determining target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material (S203); and generating a target video based on preset action material corresponding to the actor attribute information, target standard action material corresponding to the action attribute information, and the target character material (S205).
Description
The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202110862793.8 filed on July 29, 2021, the entire contents of which are incorporated herein by reference.
The present disclosure relates to the technical field of video processing, and in particular to a video generation method and an electronic device.
In the related art, film and television production mainly includes the following steps: (1) finding a suitable script; (2) organizing a shooting team, designing storyboards for the script, and then shooting shot by shot; (3) performing post-production (including editing, special effects, and the like) on all the shots.
Summary
The present disclosure provides a video generation method and an electronic device. The technical solutions of the present disclosure are as follows:
According to one aspect of the embodiments of the present disclosure, a video generation method is provided, including:
acquiring storyboard script information of a target script;
determining target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material; and
generating a target video based on preset action material corresponding to the actor attribute information, target standard action material corresponding to the action attribute information, and the target character material.
According to another aspect of the embodiments of the present disclosure, a video generation apparatus is provided, including:
a storyboard script information acquisition module configured to acquire storyboard script information of a target script;
an information determination module configured to determine target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material; and
a target video generation module configured to generate a target video based on preset action material corresponding to the actor attribute information, target standard action material corresponding to the action attribute information, and the target character material.
According to another aspect of the embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps:
acquiring storyboard script information of a target script;
determining target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material; and
generating a target video based on preset action material corresponding to the actor attribute information, target standard action material corresponding to the action attribute information, and the target character material.
According to another aspect of the embodiments of the present disclosure, a non-volatile computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following steps:
acquiring storyboard script information of a target script;
determining target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material; and
generating a target video based on preset action material corresponding to the actor attribute information, target standard action material corresponding to the action attribute information, and the target character material.
According to another aspect of the embodiments of the present disclosure, a computer program product is provided, including a computer program; when the computer program is executed by a processor, the following steps are implemented:
acquiring storyboard script information of a target script;
determining target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material; and
generating a target video based on preset action material corresponding to the actor attribute information, target standard action material corresponding to the action attribute information, and the target character material.
The technical solutions provided by the embodiments of the present disclosure can generate a target video directly based on the target character material corresponding to the target script, the preset action material corresponding to the actor attribute information, and the target standard action material corresponding to the action attribute information, decoupling the three video production links of screenwriting, performance, and production; moreover, the action materials can be reused, so no video shooting is needed for each video production, which greatly improves video production efficiency and effectively reduces video production cost.
Fig. 1 is a schematic diagram of an application environment according to an exemplary embodiment;
Fig. 2 is a flowchart of a video generation method according to an exemplary embodiment;
Fig. 3 is a schematic diagram of a storyboard editing page according to an exemplary embodiment;
Fig. 4 is a schematic diagram of a storyboard editing page according to an exemplary embodiment;
Fig. 5 is a schematic diagram of a storyboard editing page according to an exemplary embodiment;
Fig. 6 is a schematic diagram of a storyboard editing page according to an exemplary embodiment;
Fig. 7 is a flowchart of pre-generating a standard action material library according to an exemplary embodiment;
Fig. 8 is a flowchart of pre-generating a preset action material library according to an exemplary embodiment;
Fig. 9 is a flowchart of generating a target video based on preset action material corresponding to actor attribute information, target standard action material corresponding to action attribute information, and target character material, according to an exemplary embodiment;
Fig. 10 is a flowchart of generating storyboard character material based on target standard action material and preset action material, according to an exemplary embodiment;
Fig. 11 is a block diagram of a video generation apparatus according to an exemplary embodiment;
Fig. 12 is a block diagram of an electronic device for video generation according to an exemplary embodiment.
The user information (including but not limited to user device information and user personal information) and data (including but not limited to data used for display and data used for analysis) involved in the present disclosure are information and data authorized by the user or fully authorized by all parties.
Referring to Fig. 1, Fig. 1 is a schematic diagram of an application environment according to an exemplary embodiment. As shown in Fig. 1, the application environment includes a first terminal 100, a second terminal 200, a third terminal 300, a fourth terminal 400, and a server 500.
In some embodiments, the first terminal 100 is a terminal corresponding to a script creator and is used to provide a script upload service to any user; accordingly, the script creator sends a created script to the server 500 via the first terminal 100. The second terminal 200 is a terminal corresponding to users who shoot standard action videos (at least one first preset actor) and is used to provide a standard-action-video upload service to any user; accordingly, the at least one first preset actor sends standard action videos to the server 500 via the second terminal 200. The third terminal 300 is a terminal corresponding to a second preset actor and provides an upload service for action videos (non-standard action videos) to any user; accordingly, any second preset actor sends preset action videos to the server 500 via the third terminal 300. The fourth terminal 400 is a terminal corresponding to any director and is used to provide a script-based video creation service to any user. The server 500 is the backend server of the first terminal 100, the second terminal 200, the third terminal 300, and the fourth terminal 400. Here, at least one first preset actor refers to one first preset actor or more than one first preset actor.
In some embodiments, the first terminal 100, the second terminal 200, the third terminal 300, and the fourth terminal 400 include, but are not limited to, electronic devices such as smartphones, desktop computers, tablet computers, notebook computers, smart speakers, digital assistants, augmented reality (AR)/virtual reality (VR) devices, and smart wearable devices. In some embodiments, software such as applications runs on the above electronic devices, and the first terminal 100, the second terminal 200, the third terminal 300, and the fourth terminal 400 can perform the corresponding operations through the running software. In some embodiments, the operating system running on the electronic device includes, but is not limited to, Android, iOS, Linux, and Windows.
In some embodiments, the server 500 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms.
In addition, it should be noted that Fig. 1 shows only one application environment provided by the present disclosure; in practical applications, other application environments are also possible. For example, the first terminal 100, the second terminal 200, the third terminal 300, and the fourth terminal 400 may correspond to different servers; accordingly, the servers corresponding to the first terminal 100, the second terminal 200, and the third terminal 300 send the created script, the standard action videos, and the preset action videos to the server corresponding to the fourth terminal 400.
In the embodiments of the present disclosure, the first terminal 100, the second terminal 200, the third terminal 300, the fourth terminal 400, and the server 500 are connected directly or indirectly by wired or wireless communication, which is not limited in the present disclosure.
Fig. 2 is a flowchart of a video generation method according to an exemplary embodiment. As shown in Fig. 2, the video generation method is executed by the fourth terminal and includes the following steps.
In S201, storyboard script information of a target script is acquired.
In S203, target character material corresponding to the storyboard script information, and actor attribute information and action attribute information corresponding to the target character material, are determined.
In S205, a target video is generated based on preset action material corresponding to the actor attribute information, target standard action material corresponding to the action attribute information, and the target character material.
The target script is provided in advance by the first terminal, or provided by another device, which is not limited in the embodiments of the present disclosure.
In some embodiments, the platform actively pushes the target script to the terminal of a user who needs to create a video. Alternatively, a user who needs to create a video (for example, a director) selects the target script by actively searching.
In some embodiments, the fourth terminal displays a storyboard editing page corresponding to the target script. The storyboard editing page is used to edit and configure the storyboard pictures corresponding to the target script, so as to generate the target video corresponding to the target script.
In some embodiments, the storyboard editing page includes a storyboard picture display area used to display storyboard pictures. In the initial state, the storyboard picture display area is a blank canvas; during video production, storyboard materials are added to the storyboard picture display area to fill the blank canvas and form a storyboard picture. Accordingly, a storyboard picture includes information corresponding to each of its frames, and the information corresponding to each frame includes images, music, lines, and the like.
In some embodiments, the target script includes multiple pieces of storyboard script information. The storyboard editing page further includes a storyboard script display area used to display the multiple pieces of storyboard script information corresponding to the target script. One piece of storyboard script information corresponds to one storyboard picture; accordingly, since the target script corresponds to multiple storyboard pictures, the storyboard picture displayed in the storyboard picture display area at a given moment is the current storyboard picture. The current storyboard picture is the storyboard picture corresponding to any one of the multiple pieces of storyboard script information. In some embodiments, a piece of storyboard script information is selected by clicking the area where it is located or the like; accordingly, the storyboard picture corresponding to the selected storyboard script information is the current storyboard picture.
In some embodiments, before the storyboard editing page corresponding to the target script is displayed, the video generation method further includes: displaying a script display page on which script summary information of at least one script is displayed; and, in response to a script selection instruction triggered based on the script summary information of any script, performing storyboard processing on the target script to obtain multiple pieces of storyboard script information, the target script being the script corresponding to the script selection instruction. Here, at least one script refers to one script or more than one script.
In some embodiments, the script display page displays the script summary information of at least one selectable script, where the script summary information is the main information capable of describing a script. For example, the script summary information includes the script title, a synopsis, and the like. Accordingly, the user (director) triggers the script selection instruction as needed, by clicking the script summary information of a script or the like. In some embodiments, after the script selection instruction is triggered, storyboard processing is performed on the target script to obtain multiple pieces of storyboard script information, and the page then jumps to the storyboard editing page.
In the above embodiment, the script summary information of at least one selectable script is displayed to the user via the script display page, which makes it convenient for the user to intuitively select the target script as needed; after the target script is selected, storyboard processing is performed on it to obtain multiple pieces of storyboard script information, which facilitates subsequent storyboard editing on the storyboard editing page in combination with each piece of storyboard script information.
In some embodiments, acquiring the storyboard script information of the target script includes: acquiring script content information of the target script; performing semantic recognition on the script content information to obtain a semantic recognition result; and performing storyboard processing on the target script based on the semantic recognition result to obtain the storyboard script information.
In some embodiments, semantic recognition is performed on the script content information of the target script based on semantic recognition technology, and storyboard processing is performed on the target script based on the semantic recognition result. The semantic recognition result is the multiple pieces of script content information corresponding to the multiple recognized semantics; accordingly, the script content information corresponding to each semantic is taken as one piece of storyboard script information.
In the above embodiment, automatic storyboarding of the target script is achieved by performing semantic recognition on the target script, which greatly improves the efficiency of script storyboarding.
In some embodiments, acquiring the storyboard script information of the target script includes: displaying the script content information of the target script on the script display page; and acquiring multiple pieces of storyboard script information in response to a storyboard instruction triggered based on the script content information.
In some embodiments, storyboard processing is performed on the script content information in combination with preset separators. As needed, a preset separator is inserted between every two adjacent pieces of storyboard script information; after all the preset separators are set, the storyboard instruction is triggered via a preset button or other control. Accordingly, when the storyboard instruction is triggered, the multiple pieces of storyboard script information can be extracted based on the preset separators in the script content information. The preset separators can be set in advance according to the actual application.
In other embodiments, the storyboard instruction is triggered by selecting each piece of storyboard script information in turn, thereby obtaining the multiple pieces of storyboard script information.
In the above embodiment, the script content information is displayed on the script display page, and a solution is provided in which the user triggers the storyboard instruction as needed to perform storyboard processing, which can greatly improve the accuracy and reasonableness of the storyboard processing.
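The separator-based storyboard split described above can be sketched directly: cut the script content at each preset separator and keep the non-empty pieces as storyboard script information. The separator token "###" is an illustrative assumption; the patent leaves the preset separator to the actual application.

```python
# Sketch of delimiter-based storyboard processing of script content.

def split_script(script_content, separator="###"):
    """Cut the script at each preset separator into storyboard pieces."""
    pieces = [piece.strip() for piece in script_content.split(separator)]
    return [piece for piece in pieces if piece]

script = "Scene: a park. A salutes while running.###B looks left and right."
print(split_script(script))
# ['Scene: a park. A salutes while running.', 'B looks left and right.']
```

The semantic-recognition variant would replace the fixed separator with boundaries inferred from the recognized semantics of the script content.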
In some embodiments, the user determines the target character material, and the actor attribute information and action attribute information corresponding to the target character material, based on the storyboard editing page provided by the fourth terminal.
Accordingly, determining the target character material corresponding to the storyboard script information, and the actor attribute information and action attribute information corresponding to the target character material, includes: in response to a character material addition instruction, displaying the target character material corresponding to the character material addition instruction in the storyboard picture display area; displaying, on the storyboard editing page, configuration operation information of the character attributes corresponding to the target character material; and, in response to an attribute configuration instruction triggered based on the configuration operation information, displaying the actor attribute information and the action attribute information corresponding to the target character material on the storyboard editing page.
In some embodiments, the storyboard editing page further includes a storyboard material display area and a character material configuration area, where the storyboard material display area displays storyboard materials used to construct storyboard pictures. In some embodiments, the storyboard materials include character materials; accordingly, the storyboard material display area displays at least one character material. A character material is the image of an object, such as a person or an animal, that carries the main plot of the storyboard picture (the storyboard script information corresponds to the main plot). Here, at least one character material refers to one character material or more than one character material, and a character material may be the wireframe outline of the above object (character). In some embodiments, taking human characters as an example, the character materials include, but are not limited to, a young woman, a young man, a girl, a boy, a middle-aged woman, a middle-aged man, and the like.
In some embodiments, the character material configuration area is used to display the configuration operation information of the character attributes corresponding to the target character material. The configuration operation information is used to trigger the configuration of the character attributes corresponding to the target character material. The character attributes include the actor attribute information and the action attribute information.
In some embodiments, in the case where the storyboard editing page further displays the storyboard materials used to construct storyboard pictures, displaying the target character material in the storyboard picture display area in response to the character material addition instruction, and displaying the configuration operation information of the character attributes corresponding to the target character material on the storyboard editing page, includes: in response to a character material addition instruction triggered based on any character material, displaying the target character material in the storyboard picture display area, and displaying the configuration operation information of the character attributes corresponding to the target character material in the character material configuration area.
That is, in response to a trigger operation on a character material in the storyboard material display area, the triggered character material is determined as the target character material; the target character material is displayed in the storyboard picture display area; the configuration operation information of the character attributes corresponding to the target character material is displayed in the character material configuration area; and the actor attribute information and the action attribute information are configured based on the configuration operation information. The trigger operation on a character material in the storyboard material display area can trigger the character material addition instruction; the trigger operation is an operation of dragging the character material into the storyboard picture display area, an operation of clicking the character material, or another operation.
In some embodiments, the fourth terminal determines, based on the storyboard script information, the target character involved in the storyboard script information; the user drags the character material corresponding to the target character from the storyboard material display area, so that the fourth terminal determines the character material dragged into the storyboard picture display area as the target character material; or the user clicks the character material corresponding to the target character in the storyboard material display area, so that the fourth terminal determines the clicked character material as the target character material and displays it in the storyboard picture display area.
In some embodiments, as shown in Fig. 3, Fig. 3 is a schematic diagram of a storyboard editing page according to an exemplary embodiment. In Fig. 3, area 301 is the storyboard picture display area, area 302 is the storyboard material display area, and area 303 is the character material configuration area.
Further, after the character material addition instruction is triggered, the target character material is displayed in the storyboard picture display area, and the configuration operation information of the character attributes corresponding to the target character material is displayed in the character material configuration area, as shown in Fig. 4, which is a schematic diagram of a storyboard editing page according to an exemplary embodiment. In some embodiments, information 401 is the configuration operation information of the character attributes.
In some embodiments, one storyboard picture corresponds to one or more target character materials. In the case where one storyboard picture corresponds to multiple target character materials, the target character materials are added in turn; the process of adding each target character material is similar to the process of adding the target character material described above and is not repeated here.
In some embodiments, after a character material is added to the storyboard picture display area, the position information of the character material in the storyboard picture display area is the position information of the character material in the subsequent storyboard picture.
In the above embodiment, at least one character material is displayed via the storyboard material display area, and the user adds character materials by selecting them, which can greatly improve the convenience of the character material addition operation and thus the editing efficiency of storyboard pictures.
In addition, it should be noted that the above embodiment of adding character materials is merely an example; in practical applications, character materials can also be added in other ways. For example, the storyboard editing page may have no storyboard material display area but instead an area displaying drawing tools for storyboard materials; accordingly, the character material addition instruction is triggered by operations such as drawing a character material in the storyboard picture display area, and the character material is thereby added.
In some embodiments, the action attribute information is information characterizing the action of a character in the corresponding storyboard picture. In some embodiments, after the character material addition instruction is triggered, a corresponding operation is performed on the configuration operation information to trigger the display, on the storyboard editing page, of a text input box for the action attribute information; accordingly, the action attribute information, such as action keywords, is entered in the text input box. In some embodiments, this is shown in Fig. 5, which is a schematic diagram of a storyboard editing page according to an exemplary embodiment.
In some embodiments, after the character material addition instruction is triggered, a corresponding operation is performed on the configuration operation information to trigger the display, on the storyboard editing page, of a selection box provided with multiple standard action options; accordingly, the action attribute information can be determined by selecting at least one standard action option. Here, at least one standard action option refers to one standard action option or more than one standard action option; in some embodiments, the multiple standard action options may include, but are not limited to, action information such as "running", "salute", "look left", and "look right".
In some embodiments, the actor attribute information is information about the actor playing the character, including but not limited to the actor's name, age, clothing, and the like. In some embodiments, for the selection of an actor, the basic information of selectable actors is displayed on the storyboard editing page for the user to select as needed. In some embodiments, the basic information of selectable actors is displayed on the storyboard editing page in a pop-up window. In some embodiments, this is shown in Fig. 6, which is a schematic diagram of a storyboard editing page according to an exemplary embodiment.
In some embodiments, the preset action material is action material extracted from the action video of the target actor corresponding to the actor attribute information; the target standard action material is the action material, among the standard action materials, that matches the action attribute information; and the standard action materials are action materials extracted from the standard action videos of at least one first preset actor.
In some embodiments, the target video is a film or television work, or a short video with a certain storyline, or the like.
In some embodiments, the video generation method further includes the step of pre-generating a standard action material library. As shown in Fig. 7, the step of pre-generating the standard action material library includes:
In S701, standard action videos shot by at least one first preset actor against a preset background are acquired.
In S703, the skeletal sequence image corresponding to each first preset actor is extracted from the standard action videos.
In S705, the skeletal sequence image corresponding to each first preset actor is determined as the standard action material of that first preset actor.
In S707, a standard action material library is generated based on the standard action materials of the at least one first preset actor.
In some embodiments, a first preset actor is an actor who shoots standard action videos. In some embodiments, in the case where the at least one first preset actor is multiple first preset actors, the multiple first preset actors shoot standard action videos corresponding to different standard actions. The standard action materials extracted from the standard action videos are used to calibrate non-standard action materials (action materials extracted from videos other than the standard action videos shot by the first preset actors). The preset background includes, but is not limited to, a green screen, blue screen, red screen, or a background of another color suitable for video matting. A skeletal sequence image is the multiple skeleton images corresponding to a standard action, where a skeleton image is an image including the keypoints of the actor's key body parts while performing the corresponding action. Each frame of the standard action video is extracted; in combination with a pre-trained skeleton image extraction network, the position information of the keypoints of the key body parts of the corresponding first preset actor is extracted from each frame, and the skeleton image is extracted from the corresponding frame based on that position information, thereby obtaining the above skeletal sequence image. The skeleton image extraction network is obtained by pre-training a preset neural network on sample frames annotated with the position information of the keypoints of the key body parts of sample actors, where the sample frames are extracted from action videos shot by the sample actors against the preset background.
In some embodiments, generating the standard action material library based on the standard action materials of the at least one first preset actor includes: establishing a first correspondence, the first correspondence including each standard action material and its corresponding action key text information; and constructing the standard action material library based on the standard action materials of the at least one first preset actor and the first correspondence.
In the above embodiment, the standard action material library is generated from the standard action materials extracted from the standard action videos, ensuring that during video production the standard action materials are reused to calibrate the action materials of the selected actor, thereby greatly improving video production quality and efficiency.
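The per-frame extraction loop described above (run a pre-trained keypoint network on every frame, then assemble the results into a skeletal sequence image) can be sketched as follows. The `detect_keypoints` function is a hypothetical stand-in for the trained skeleton image extraction network's inference call; here it just reads pre-computed keypoints from the frame record.

```python
# Sketch of assembling a skeletal sequence image from per-frame keypoints.

def detect_keypoints(frame):
    """Hypothetical detector: returns (x, y) keypoints for one video frame.
    In a real pipeline this would be the keypoint network's inference."""
    return frame["keypoints"]

def extract_skeleton_sequence(action_video_frames):
    """One skeleton image (list of keypoints) per frame, in video order."""
    return [detect_keypoints(frame) for frame in action_video_frames]

frames = [{"keypoints": [(10, 20), (12, 40)]},
          {"keypoints": [(11, 20), (13, 41)]}]
print(extract_skeleton_sequence(frames))
```

The frame dictionaries and keypoint counts are illustrative; a production detector would emit a fixed set of body-part keypoints per frame.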
在一些实施例中,上述视频生成方法还包括预先生成预设动作素材库的步骤,如图8所示,预先生成预设动作素材库的步骤包括:
在S801中,获取多个第二预设演员各自在预设背景下拍摄的动作视频。
在S803中,从每个第二预设演员的动作视频中,提取每个第二预设演员对应的骨骼序列图像。
在S805中,将每个第二预设演员对应的骨骼序列图像,确定为每个第二预设演员的动作素材。
在S807中,基于多个第二预设演员的动作素材,生成预设动作素材库。
在一些实施例中,第二预设演员为具有表演能力的用户。从每个第二预设演员的动作视频中,提取每个第二预设演员对应的骨骼序列图像的步骤,参见上述从标准动作视频中,提取第一预设演员对应的骨骼序列图像的相关细化,在此不再赘述。
在一些实施例中,基于多个第二预设演员的动作素材,生成预设动作素材库包括:建立第二对应关系,该第二对应关系为第二预设演员的动作素材和对应的演员属性信息间的对应关系;基于多个第二预设演员的动作素材和第二对应关系,构建上述预设动作素材库。
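上述"第二对应关系"的组织方式可以用一个简单的映射结构来示意。以下为假设性示意代码:以字典模拟素材库,其中的演员名与素材内容均为示例数据,并非真实骨骼序列。

```python
# 以字典模拟"第二对应关系":演员属性信息 -> 该演员的动作素材(骨骼序列)
preset_action_library = {}


def add_actor_materials(actor_attr, materials):
    """将某一第二预设演员的动作素材及其对应关系写入预设动作素材库。"""
    preset_action_library[actor_attr] = materials


def lookup_preset_materials(actor_attr):
    """基于第二对应关系,由演员属性信息确定预设动作素材;未命中时返回 None。"""
    return preset_action_library.get(actor_attr)


# 入库两名第二预设演员的动作素材(示意数据)
add_actor_materials("演员A", {"跑步": ["骨骼帧1", "骨骼帧2"]})
add_actor_materials("演员B", {"走路": ["骨骼帧1"]})
```

视频生成时即可凭演员属性信息一次查询取回该演员的全部预设动作素材,实现素材复用。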
上述实施例中,通过从多个第二预设演员在预设背景下拍摄的动作视频中提取的动作素材,来生成预设动作素材库,保证在视频制作过程中,实现动作素材的复用,无需每次视频制作都进行视频拍摄,大大提升了视频制作效率,同时也有效降低了视频制作成本。
在一些实施例中,如图9所示,基于演员属性信息对应的预设动作素材、动作属性信息对应的目标标准动作素材和目标角色素材,生成目标视频包括以下步骤:
在S901中,基于演员属性信息,从预设动作素材库中确定预设动作素材。
在S903中,基于动作属性信息,从标准动作素材库中确定目标标准动作素材。
在S905中,基于目标标准动作素材和预设动作素材,生成分镜角色素材。
在S907中,根据分镜角色素材和目标角色素材,生成目标视频。
在一些实施例中,预设动作素材库包括多个第二预设演员的动作素材和该第二预设演员的演员属性信息间的第二对应关系;因此,能够基于第二对应关系,确定演员属性信息对应的预设动作素材。
在一些实施例中,动作属性信息对应一个或多个标准动作素材,基于一个或多个标准动作素材生成目标标准动作素材。例如:动作属性信息“跑步过程中敬礼”对应标准动作素材“跑步”和“敬礼”,则目标标准动作素材包括标准动作素材“跑步”和标准动作素材“敬礼”;例如动作属性信息“左右摇头”对应标准动作素材“向左看”,并根据对称操作衍生出标准动作素材“向右看”,并由标准动作素材“向左看”和标准动作素材“向右看”,组合成动作属性信息“左右摇头”对应的目标标准动作素材。例如动作属性信息“快走”对应标准动作素材“走路”,相应的,通过对标准动作素材“走路”的幅度、位置等进行修改,生成“快走”对应的目标标准动作素材。
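上文提到可通过对称操作由标准动作素材"向左看"衍生出"向右看"。这一镜像衍生可以用如下示意代码勾勒,其中骨骼帧以 `{关键点名: (x, y)}` 字典表示,为本示例引入的假设性数据结构。

```python
def mirror_skeleton(skeleton, width):
    """对单帧骨骼图像做左右对称操作:x 坐标关于画面宽度翻转。

    skeleton: {关键点名: (x, y)} 形式的单帧骨骼图像
    width: 画面宽度。逐帧镜像"向左看"的骨骼序列,
           即可衍生出"向右看"对应的标准动作素材。
    """
    return {name: (width - x, y) for name, (x, y) in skeleton.items()}


# "向左看"的两帧示意骨骼序列,画面宽度取 100
look_left = [{"nose": (30, 40)}, {"nose": (20, 40)}]
look_right = [mirror_skeleton(frame, width=100) for frame in look_left]
```

将 `look_left` 与 `look_right` 按时序拼接,即得到动作属性信息"左右摇头"对应的目标标准动作素材。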
在一些实施例中,标准动作素材库中每一标准动作素材对应一个动作关键文本信息(关键词),在一些实施例中,基于对动作属性信息的语义识别,从标准动作素材库中确定动作属性信息对应的标准动作素材。
在一些实施例中,一个演员对应的预设动作素材中包括大量动作对应的素材,相应的,如图10所示,上述基于目标标准动作素材和预设动作素材,生成分镜角色素材包括以下步骤:
在S1001中,对目标标准动作素材与预设动作素材进行骨骼匹配,得到骨骼匹配结果。
在S1003中,基于骨骼匹配结果,从预设动作素材中确定与目标标准动作素材匹配的目标动作素材。
在S1005中,基于目标标准动作素材对目标动作素材进行动作校准,得到分镜角色素材。
在一些实施例中,上述骨骼匹配过程包括但不限于:提取标准动作素材和预设动作素材中骨骼点(关键部位的关键点)的距离分布图,结合骨骼点的距离分布图间的相似度(即骨骼匹配结果)来选取目标动作素材。
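上述"距离分布相似度"式的骨骼匹配,可以用如下示意代码勾勒:先由每帧骨骼点两两距离构成距离特征,再以余弦相似度作为匹配结果来选取目标动作素材。这仅是对该思路的一种假设性简化(对单帧骨骼比较),并非本公开限定的匹配实现。

```python
import math


def distance_profile(skeleton):
    """计算骨骼点两两欧氏距离,作为该帧骨骼的距离分布特征。"""
    names = sorted(skeleton)
    return [math.dist(skeleton[a], skeleton[b])
            for i, a in enumerate(names) for b in names[i + 1:]]


def cosine_similarity(p, q):
    """两个距离分布向量间的余弦相似度,作为骨骼匹配结果。"""
    dot = sum(x * y for x, y in zip(p, q))
    norm = math.sqrt(sum(x * x for x in p)) * math.sqrt(sum(x * x for x in q))
    return dot / norm if norm else 0.0


def match_action(standard, candidates):
    """从候选预设动作素材中,选取与目标标准动作素材最相似的目标动作素材。"""
    profile = distance_profile(standard)
    return max(candidates, key=lambda name: cosine_similarity(
        profile, distance_profile(candidates[name])))


# 示意数据:"跑步"与标准骨骼仅差一个整体缩放,距离分布方向一致
standard = {"head": (0, 0), "hand": (3, 4), "foot": (0, 8)}
candidates = {
    "跑步": {"head": (0, 0), "hand": (6, 8), "foot": (0, 16)},
    "敬礼": {"head": (0, 0), "hand": (8, 0), "foot": (1, 1)},
}
best = match_action(standard, candidates)
```

距离分布只依赖骨骼点间的相对几何关系,因此对演员身材的整体缩放不敏感,这正是以分布相似度做匹配的动机。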
在一些实施例中,上述动作校准过程包括但不限于:结合拉普拉斯算法等对目标动作素材中的骨骼点进行网格变换,以映射到标准动作素材对应的骨骼点,得到分镜角色素材。
上述实施例中,结合目标标准动作素材与预设动作素材间的骨骼匹配结果,从目标演员的大量动作素材中,精准地筛选出与动作属性信息对应的目标动作素材,并结合目标标准动作素材对目标动作素材进行动作校准,得到构建分镜画面的分镜角色素材,能够有效提升视频的制作质量。
在一些实施例中,上述根据分镜角色素材,生成目标视频包括:基于每一分镜角色素材和对应的第一展示位置信息生成相应的分镜画面,并按照多个分镜画面对应的时序信息,将分镜画面进行合成,得到目标视频。也即是,在目标剧本包括多个分镜剧本信息的情况下,对于每个分镜剧本信息,基于分镜角色素材和第一展示位置信息,生成该分镜剧本信息对应的分镜画面;对于多个分镜剧本信息,采用类似的方式生成该多个分镜剧本信息对应的多个分镜画面,然后按照多个分镜剧本信息在目标剧本中的顺序,将多个分镜剧本信息对应的多个分镜画面进行合成,得到目标视频。
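上述"按分镜剧本顺序逐一生成分镜画面并合成"的流程,可以用如下示意代码勾勒。其中 `render_frame` 为假设的渲染函数,分镜剧本信息的字段名均为示例,并非本公开限定的数据格式。

```python
def generate_target_video(storyboard_scripts, render_frame):
    """按分镜剧本信息在目标剧本中的顺序合成目标视频(示意)。

    storyboard_scripts: 按剧本顺序排列的分镜剧本信息列表,
        每项含分镜角色素材及其第一展示位置信息
    render_frame: 假设的渲染函数,由素材与位置生成一幅分镜画面
    """
    frames = [render_frame(s["character_material"], s["position"])
              for s in storyboard_scripts]          # 逐一生成分镜画面
    return {"type": "target_video", "frames": frames}  # 按时序合成


# 两条分镜剧本信息的示意数据;渲染函数以元组代指生成的分镜画面
scripts = [
    {"character_material": "角色A-跑步", "position": (10, 20)},
    {"character_material": "角色A-敬礼", "position": (50, 20)},
]
video = generate_target_video(scripts, lambda m, p: (m, p))
```

合成结果中各分镜画面的先后顺序即多个分镜剧本信息在目标剧本中的顺序。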
在一些实施例中,第一展示位置信息为目标角色素材在分镜画面展示区域中的位置信息,由于分镜画面展示区域中的位置与待生成的分镜画面中的位置是相对应的,因此该第一展示位置信息也表示目标角色素材在待生成的分镜画面中的位置。
上述实施例中,结合演员属性信息从预设动作素材库中获取目标演员的预设动作素材,以及结合动作属性信息从标准动作素材库中获取目标标准动作素材,实现了动作素材的复用,无需每次视频制作都进行视频拍摄,大大提升了视频制作效率,同时也有效降低了视频制作成本。
在一些实施例中,为了丰富分镜画面,分镜素材还包括场景素材,相应的,分镜素材展示区域还展示有至少一个场景素材。其中,至少一个场景素材是指一个场景素材或者一个以上场景素材,场景素材包括背景场景素材和前景场景素材,背景场景素材可以为分镜画面对应的整体背景图像,背景场景素材的尺寸大小与分镜画面对应的尺寸大小一致,前景场景素材为分镜剧本对应剧情所需的场景对象的图像或线框轮廓,例如一张桌子的图像。
在一些实施例中,上述视频生成方法还包括:确定分镜剧本信息对应的目标场景素材。相应的,上述根据分镜角色素材,生成目标视频包括:基于分镜角色素材和目标场景素材,生成目标视频。
在一些实施例中,上述确定分镜剧本信息对应的目标场景素材包括:响应于基于任一场景素材触发的场景素材添加指令,在分镜画面展示区域展示场景素材添加指令对应的目标场景素材。
在一些实施例中,通过将场景素材拖拽到分镜画面展示区域的操作,来触发场景素材添加指令,或者通过点击场景素材等操作,来触发场景素材添加指令。
在一些实施例中,一个分镜画面对应一个或多个场景素材,在一些实施例中,背景场景素材为一个,前景场景素材为一个或多个。
在一些实施例中,在一个分镜画面对应多个目标场景素材的情况下,依次进行目标场景素材的添加。
在一些实施例中,在进行场景素材添加的过程中,仅添加前景场景素材或背景场景素材。
上述根据分镜角色素材和目标场景素材,生成目标视频包括:基于每一分镜角色素材、每一分镜角色素材对应的第一展示位置信息、每一分镜角色素材对应的目标场景素材,以及目标场景素材对应的第二展示位置信息,生成相应的分镜画面,对于多个分镜剧本信息,采用类似的方式生成该多个分镜剧本信息对应的多个分镜画面,并按照多个分镜画面对应的时序信息,将多个分镜画面进行合成,得到目标视频。其中,分镜角色素材对应的第一展示位置是指对应于该分镜角色素材的目标角色素材的位置。
在一些实施例中,在目标场景素材包括前景场景素材的情况下,第二展示位置信息包括前景场景素材在分镜画面展示区域的位置信息。在目标场景素材包括背景场景素材的情况下,第二展示位置信息包括背景场景素材在分镜画面展示区域的位置信息,一般的,背景场景素材分布在整个分镜画面展示区域。
在一些实施例中,在将场景素材中的前景场景素材添加到分镜画面展示区域后,前景场景素材在分镜画面展示区域中的位置信息为该前景场景素材在后续的分镜画面中的位置信息。
上述实施例中,在生成目标视频的过程中,还添加了场景素材,大大丰富了分镜画面,进而提升视频质量。
在一些实施例中,上述分镜编辑页面还包括其他素材对应的其他配置区域,其他素材为用于构建分镜画面所需信息中除角色素材、场景素材以外的其他素材,其他素材包括但不限于台词、背景音乐等,相应的,基于其他配置区域触发的配置指令,进行其他素材的配置。
相应的,上述根据分镜角色素材和目标场景素材,生成目标视频包括:根据分镜角色素材、目标场景素材和其他素材,生成目标视频;根据分镜角色素材、目标场景素材和其他素材,生成目标视频包括:基于每一分镜角色素材、第一展示位置信息、目标场景素材、第二展示位置信息和其他素材生成相应的分镜画面,并按照多个分镜画面对应的时序信息,将分镜画面进行合成,得到目标视频。
在另一些实施例中,上述基于演员属性信息对应的预设动作素材、动作属性信息对应的目标标准动作素材和目标角色素材,生成目标视频包括:将演员属性信息、动作属性信息和目标角色素材发送给服务器,以便服务器基于演员属性信息,从预设动作素材库中确定预设动作素材;以及基于动作属性信息,从标准动作素材库中确定目标标准动作素材;以及基于目标标准动作素材和预设动作素材,生成分镜角色素材;以及根据分镜角色素材,生成目标视频。
在一些实施例中,在还需要结合目标场景素材、其他素材来构建分镜画面的情况下,上述基于演员属性信息对应的预设动作素材、动作属性信息对应的目标标准动作素材和目标角色素材,生成目标视频还包括:将演员属性信息、动作属性信息、目标角色素材、目标场景素材、其他素材发送给服务器,以便服务器基于演员属性信息,从预设动作素材库中确定预设动作素材;以及基于动作属性信息,从标准动作素材库中确定目标标准动作素材;以及基于目标标准动作素材和预设动作素材,生成分镜角色素材;以及根据分镜角色素材、目标场景素材和其他素材,生成目标视频。
在一些实施例中,服务器在生成目标视频之后,将其发送给第四终端,以便用户进行目标视频的查看。在一些实施例中,在目标视频生成之后,或者在用户查看目标视频并确认进行目标视频的发布之后,可以将目标视频发布到相应的展示平台。
本公开实施例提供的技术方案,能够直接基于目标剧本对应的目标角色素材,以及演员属性信息对应的预设动作素材和动作属性信息对应的目标标准动作素材,生成目标视频,实现了编剧、表演、制作这三个视频制作环节的解耦,且动作素材能够复用,无需每次视频制作都进行视频拍摄,大大提升了视频制作效率,同时也有效降低了视频制作成本。
图11是根据一示例性实施例示出的一种视频生成装置框图。参照图11,该装置包括:
分镜剧本信息获取模块1110,被配置为获取目标剧本的分镜剧本信息;
信息确定模块1120,被配置为确定分镜剧本信息对应的目标角色素材,以及目标角色素材对应的演员属性信息和动作属性信息;
目标视频生成模块1130,被配置为基于演员属性信息对应的预设动作素材、动作属性信息对应的目标标准动作素材和目标角色素材,生成目标视频。
在一些实施例中,目标视频生成模块1130包括:
预设动作素材确定单元,被配置为基于演员属性信息,从预设动作素材库中确定预设动作素材;
目标标准动作素材确定单元,被配置为基于动作属性信息,从标准动作素材库中确定目标标准动作素材;
分镜角色素材生成单元,被配置为基于目标标准动作素材和预设动作素材,生成分镜角色素材;
目标视频生成单元,被配置为基于分镜角色素材和目标角色素材,生成目标视频。
在一些实施例中,分镜角色素材生成单元包括:
骨骼匹配单元,被配置为对目标标准动作素材与预设动作素材进行骨骼匹配,得到骨骼匹配结果;
目标动作素材确定单元,被配置为基于骨骼匹配结果,从预设动作素材中确定与目标标准动作素材匹配的目标动作素材;
动作校准单元,被配置为基于目标标准动作素材对目标动作素材进行动作校准,得到分镜角色素材。
在一些实施例中,目标剧本包括多个分镜剧本信息,目标视频生成单元,被配置为:
对于每个分镜剧本信息,基于分镜角色素材和第一展示位置信息,生成分镜剧本信息对应的分镜画面,第一展示位置信息表示目标角色素材在待生成的分镜画面中的位置;
按照多个分镜剧本信息在目标剧本中的顺序,将多个分镜剧本信息对应的多个分镜画面进行合成,得到目标视频。
在一些实施例中,上述装置还包括:
目标场景素材确定单元,被配置为确定分镜剧本信息对应的目标场景素材;
目标视频生成单元,还被配置为基于分镜角色素材和目标场景素材,生成目标视频。
在一些实施例中,预设动作素材为从目标演员的动作视频中提取的动作素材,目标演员为演员属性信息对应的演员,上述装置还包括:
动作视频获取模块,被配置为获取多个第二预设演员分别在预设背景下拍摄的动作视频;
第一骨骼序列图像提取模块,被配置为从每个第二预设演员的动作视频中,提取每个第二预设演员对应的骨骼序列图像;
动作素材确定模块,被配置为将每个第二预设演员对应的骨骼序列图像,确定为每个第二预设演员的动作素材;
预设动作素材库生成模块,被配置为基于多个第二预设演员的动作素材,生成预设动作素材库。
在一些实施例中,目标标准动作素材为标准动作素材中与动作属性信息匹配的动作素材,标准动作素材为从至少一个第一预设演员的标准动作视频中提取的动作素材,上述装置还包括:
标准动作视频获取模块,被配置为获取至少一个第一预设演员在预设背景下拍摄的标准动作视频;
第二骨骼序列图像提取模块,被配置为从标准动作视频中,提取每个第一预设演员对应的骨骼序列图像;
标准动作素材确定模块,被配置为将每个第一预设演员对应的骨骼序列图像,确定为每个第一预设演员的标准动作素材;
标准动作素材库生成模块,被配置为基于至少一个第一预设演员的标准动作素材,生成标准动作素材库。
在一些实施例中,分镜剧本信息获取模块1110包括:
剧本内容信息获取单元,被配置为获取目标剧本的剧本内容信息;
语义识别单元,被配置为对剧本内容信息进行语义识别,得到语义识别结果;
分镜处理单元,被配置为基于语义识别结果对目标剧本进行分镜处理,得到分镜剧本信息。
在一些实施例中,分镜剧本信息获取模块1110,还被配置为:
显示目标剧本对应的分镜编辑页面,分镜编辑页面包括分镜剧本展示区域;
在分镜剧本展示区域展示分镜剧本信息。
在一些实施例中,分镜编辑页面还包括分镜素材展示区域、角色素材配置区域和分镜画面展示区域,信息确定模块1120,被配置为:
响应于对分镜素材展示区域中的角色素材的触发操作,将所触发的角色素材,确定为目标角色素材;
在分镜画面展示区域展示目标角色素材;
在角色素材配置区域展示目标角色素材对应的角色属性的配置操作信息;
基于配置操作信息,配置演员属性信息和动作属性信息。
图12是根据一示例性实施例示出的一种用于视频生成的电子设备的框图,该电子设备可以是终端,其内部结构图可以如图12所示。该电子设备包括通过系统总线连接的处理器、存储器、网络接口、显示屏和输入装置。其中,该电子设备的处理器用于提供计算和控制能力。该电子设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机程序。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该电子设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时实现一种视频生成方法。该电子设备的显示屏可以是液晶显示屏或者电子墨水显示屏,该电子设备的输入装置可以是显示屏上覆盖的触摸层,也可以是电子设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图12中示出的结构,仅仅是与本公开方案相关的部分结构的框图,并不构成对本公开方案所应用于其上的电子设备的限定,具体的电子设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在示例性实施例中,还提供了一种电子设备,包括:处理器;用于存储该处理器可执行指令的存储器;其中,该处理器被配置为执行该指令,以实现如下步骤:
获取目标剧本的分镜剧本信息;
确定分镜剧本信息对应的目标角色素材,以及目标角色素材对应的演员属性信息和动作属性信息;
基于演员属性信息对应的预设动作素材、动作属性信息对应的目标标准动作素材和目标角色素材,生成目标视频。
在一些实施例中,处理器被配置为执行指令,以实现如下步骤:
基于演员属性信息,从预设动作素材库中确定预设动作素材;
基于动作属性信息,从标准动作素材库中确定目标标准动作素材;
基于目标标准动作素材和预设动作素材,生成分镜角色素材;
基于分镜角色素材和目标角色素材,生成目标视频。
在一些实施例中,处理器被配置为执行指令,以实现如下步骤:
对目标标准动作素材与预设动作素材进行骨骼匹配,得到骨骼匹配结果;
基于骨骼匹配结果,从预设动作素材中确定与目标标准动作素材匹配的目标动作素材;
基于目标标准动作素材,对目标动作素材进行动作校准,得到分镜角色素材。
在一些实施例中,处理器被配置为执行指令,以实现如下步骤:
对于每个分镜剧本信息,基于分镜角色素材和第一展示位置信息,生成分镜剧本信息对应的分镜画面,第一展示位置信息表示目标角色素材在待生成的分镜画面中的位置;
按照多个分镜剧本信息在目标剧本中的顺序,将多个分镜剧本信息对应的多个分镜画面进行合成,得到目标视频。
在一些实施例中,处理器被配置为执行指令,以实现如下步骤:
确定分镜剧本信息对应的目标场景素材;
基于分镜角色素材和目标场景素材,生成目标视频。
在一些实施例中,预设动作素材为从目标演员的动作视频中提取的动作素材,目标演员为演员属性信息对应的演员,处理器被配置为执行指令,以实现如下步骤:
获取多个第二预设演员分别在预设背景下拍摄的动作视频,多个第二预设演员包括目标演员;
从每个第二预设演员的动作视频中,提取每个第二预设演员对应的骨骼序列图像;
将每个第二预设演员对应的骨骼序列图像,确定为每个第二预设演员的动作素材;
基于多个第二预设演员的动作素材,生成预设动作素材库。
在一些实施例中,目标标准动作素材为标准动作素材中与动作属性信息匹配的动作素材,标准动作素材为从至少一个第一预设演员的标准动作视频中提取的动作素材,处理器被配置为执行指令,以实现如下步骤:
获取至少一个第一预设演员在预设背景下拍摄的标准动作视频;
从标准动作视频中,提取每个第一预设演员对应的骨骼序列图像;
将每个第一预设演员对应的骨骼序列图像,确定为每个第一预设演员的标准动作素材;
基于至少一个第一预设演员的标准动作素材,生成标准动作素材库。
在一些实施例中,处理器被配置为执行指令,以实现如下步骤:
获取目标剧本的剧本内容信息;
对剧本内容信息进行语义识别,得到语义识别结果;
基于语义识别结果对目标剧本进行分镜处理,得到分镜剧本信息。
在一些实施例中,处理器被配置为执行指令,以实现如下步骤:
显示目标剧本对应的分镜编辑页面,分镜编辑页面包括分镜剧本展示区域;
在分镜剧本展示区域展示分镜剧本信息。
在一些实施例中,分镜编辑页面还包括分镜素材展示区域、角色素材配置区域和分镜画面展示区域,处理器被配置为执行指令,以实现如下步骤:
响应于对分镜素材展示区域中的角色素材的触发操作,将所触发的角色素材,确定为目标角色素材;
在分镜画面展示区域展示目标角色素材;
在角色素材配置区域展示目标角色素材对应的角色属性的配置操作信息;
基于配置操作信息,配置演员属性信息和动作属性信息。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,当该存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行本公开实施例中的视频生成方法。
在示例性实施例中,还提供了一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现本公开实施例中的视频生成方法。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,该计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围。
Claims (32)
- 一种视频生成方法,包括:获取目标剧本的分镜剧本信息;确定所述分镜剧本信息对应的目标角色素材,以及所述目标角色素材对应的演员属性信息和动作属性信息;基于所述演员属性信息对应的预设动作素材、所述动作属性信息对应的目标标准动作素材和所述目标角色素材,生成目标视频。
- 根据权利要求1所述的方法,其中,所述基于所述演员属性信息对应的预设动作素材、所述动作属性信息对应的目标标准动作素材和所述目标角色素材,生成目标视频包括:基于所述演员属性信息,从预设动作素材库中确定所述预设动作素材;基于所述动作属性信息,从标准动作素材库中确定所述目标标准动作素材;基于所述目标标准动作素材和所述预设动作素材,生成分镜角色素材;基于所述分镜角色素材和所述目标角色素材,生成所述目标视频。
- 根据权利要求2所述的方法,其中,所述基于所述目标标准动作素材和所述预设动作素材,生成分镜角色素材包括:对所述目标标准动作素材与所述预设动作素材进行骨骼匹配,得到骨骼匹配结果;基于所述骨骼匹配结果,从所述预设动作素材中确定与所述目标标准动作素材匹配的目标动作素材;基于所述目标标准动作素材,对所述目标动作素材进行动作校准,得到所述分镜角色素材。
- 根据权利要求2所述的方法，其中，所述目标剧本包括多个分镜剧本信息，所述根据所述分镜角色素材和所述目标角色素材，生成所述目标视频包括：对于每个所述分镜剧本信息，基于所述分镜角色素材和第一展示位置信息，生成所述分镜剧本信息对应的分镜画面，所述第一展示位置信息表示所述目标角色素材在待生成的分镜画面中的位置；按照所述多个分镜剧本信息在所述目标剧本中的顺序，将所述多个分镜剧本信息对应的多个分镜画面进行合成，得到所述目标视频。
- 根据权利要求2所述的方法,其中,所述方法还包括:确定所述分镜剧本信息对应的目标场景素材;所述根据所述分镜角色素材,生成所述目标视频包括:基于所述分镜角色素材和所述目标场景素材,生成所述目标视频。
- 根据权利要求2至5任一所述的方法,其中,所述预设动作素材为从目标演员的动作视频中提取的动作素材,所述目标演员为所述演员属性信息对应的演员,所述方法还包括:获取多个第二预设演员分别在预设背景下拍摄的动作视频,所述多个第二预设演员包括所述目标演员;从每个第二预设演员的动作视频中,提取所述每个第二预设演员对应的骨骼序列图像;将所述每个第二预设演员对应的骨骼序列图像,确定为所述每个第二预设演员的动作素材;基于所述多个第二预设演员的动作素材,生成所述预设动作素材库。
- 根据权利要求2至5任一所述的方法,其中,所述目标标准动作素材为标准动作素材中与所述动作属性信息匹配的动作素材,所述标准动作素材为从至少一个第一预设演员的标准动作视频中提取的动作素材,所述方法还包括:获取所述至少一个第一预设演员在预设背景下拍摄的所述标准动作视频;从所述标准动作视频中,提取每个第一预设演员对应的骨骼序列图像;将所述每个第一预设演员对应的骨骼序列图像,确定为所述每个第一预设演员的标准动作素材;基于所述至少一个第一预设演员的标准动作素材,生成所述标准动作素材库。
- 根据权利要求1至5任一所述的方法,其中,所述获取目标剧本的分镜剧本信息包括:获取所述目标剧本的剧本内容信息;对所述剧本内容信息进行语义识别,得到语义识别结果;基于所述语义识别结果对所述目标剧本进行分镜处理,得到所述分镜剧本信息。
- 根据权利要求1至5任一所述的方法,其中,所述方法还包括:显示所述目标剧本对应的分镜编辑页面,所述分镜编辑页面包括分镜剧本展示区域;在所述分镜剧本展示区域展示所述分镜剧本信息。
- 根据权利要求9所述的方法,其中,所述分镜编辑页面还包括分镜素材展示区域、角色素材配置区域和分镜画面展示区域,所述确定所述分镜剧本信息对应的目标角色素材,以及所述目标角色素材对应的演员属性信息和动作属性信息,包括:响应于对所述分镜素材展示区域中的角色素材的触发操作,将所触发的角色素材,确定为所述目标角色素材;在所述分镜画面展示区域展示所述目标角色素材;在所述角色素材配置区域展示所述目标角色素材对应的角色属性的配置操作信息;基于所述配置操作信息,配置所述演员属性信息和所述动作属性信息。
- 一种视频生成装置,包括:分镜剧本信息获取模块,被配置为获取目标剧本的分镜剧本信息;信息确定模块,被配置为确定所述分镜剧本信息对应的目标角色素材,以及所述目标角色素材对应的演员属性信息和动作属性信息;目标视频生成模块,被配置为基于所述演员属性信息对应的预设动作素材、所述动作属性信息对应的目标标准动作素材和所述目标角色素材,生成目标视频。
- 根据权利要求11所述的装置,其中,所述目标视频生成模块包括:预设动作素材确定单元,被配置为基于所述演员属性信息,从预设动作素材库中确定所述预设动作素材;目标标准动作素材确定单元,被配置为基于所述动作属性信息,从标准动作素材库中确定所述目标标准动作素材;分镜角色素材生成单元,被配置为基于所述目标标准动作素材和所述预设动作素材,生成分镜角色素材;目标视频生成单元,被配置为基于所述分镜角色素材和所述目标角色素材,生成所述目标视频。
- 根据权利要求12所述的装置，其中，所述分镜角色素材生成单元包括：骨骼匹配单元，被配置为对所述目标标准动作素材与所述预设动作素材进行骨骼匹配，得到骨骼匹配结果；目标动作素材确定单元，被配置为基于所述骨骼匹配结果，从所述预设动作素材中确定与所述目标标准动作素材匹配的目标动作素材；动作校准单元，被配置为基于所述目标标准动作素材，对所述目标动作素材进行动作校准，得到所述分镜角色素材。
- 根据权利要求12所述的装置，其中，所述目标剧本包括多个分镜剧本信息，目标视频生成单元，被配置为：对于每个所述分镜剧本信息，基于所述分镜角色素材和第一展示位置信息，生成所述分镜剧本信息对应的分镜画面，所述第一展示位置信息表示所述目标角色素材在待生成的分镜画面中的位置；按照所述多个分镜剧本信息在所述目标剧本中的顺序，将所述多个分镜剧本信息对应的多个分镜画面进行合成，得到所述目标视频。
- 根据权利要求12所述的装置,其中,所述装置还包括:目标场景素材确定单元,被配置为确定所述分镜剧本信息对应的目标场景素材;所述目标视频生成单元,还被配置为基于所述分镜角色素材和所述目标场景素材,生成所述目标视频。
- 根据权利要求12至15任一所述的装置,其中,所述预设动作素材为从目标演员的动作视频中提取的动作素材,所述目标演员为所述演员属性信息对应的演员,所述装置还包括:动作视频获取模块,被配置为获取多个第二预设演员分别在预设背景下拍摄的动作视频;第一骨骼序列图像提取模块,被配置为从每个第二预设演员的动作视频中,提取所述每个第二预设演员对应的骨骼序列图像;动作素材确定模块,被配置为将所述每个第二预设演员对应的骨骼序列图像,确定为所述每个第二预设演员的动作素材;预设动作素材库生成模块,被配置为基于所述多个第二预设演员的动作素材,生成所述预设动作素材库。
- 根据权利要求12至15任一所述的装置，其中，所述目标标准动作素材为标准动作素材中与所述动作属性信息匹配的动作素材，所述标准动作素材为从至少一个第一预设演员的标准动作视频中提取的动作素材，所述装置还包括：标准动作视频获取模块，被配置为获取所述至少一个第一预设演员在预设背景下拍摄的所述标准动作视频；第二骨骼序列图像提取模块，被配置为从所述标准动作视频中，提取每个第一预设演员对应的骨骼序列图像；标准动作素材确定模块，被配置为将所述每个第一预设演员对应的骨骼序列图像，确定为所述每个第一预设演员的标准动作素材；标准动作素材库生成模块，被配置为基于所述至少一个第一预设演员的标准动作素材，生成所述标准动作素材库。
- 根据权利要求11至15任一所述的装置,其中,所述分镜剧本信息获取模块包括:剧本内容信息获取单元,被配置为获取所述目标剧本的剧本内容信息;语义识别单元,被配置为对所述剧本内容信息进行语义识别,得到语义识别结果;分镜处理单元,被配置为基于所述语义识别结果对所述目标剧本进行分镜处理,得到所述分镜剧本信息。
- 根据权利要求11至15任一所述的装置,其中,所述分镜剧本信息获取模块,还被配置为:显示所述目标剧本对应的分镜编辑页面,所述分镜编辑页面包括分镜剧本展示区域;在所述分镜剧本展示区域展示所述分镜剧本信息。
- 根据权利要求19所述的装置,其中,所述分镜编辑页面还包括分镜素材展示区域、角色素材配置区域和分镜画面展示区域,所述信息确定模块,被配置为:响应于对所述分镜素材展示区域中的角色素材的触发操作,将所触发的角色素材,确定为所述目标角色素材;在所述分镜画面展示区域展示所述目标角色素材;在所述角色素材配置区域展示所述目标角色素材对应的角色属性的配置操作信息;基于所述配置操作信息,配置所述演员属性信息和所述动作属性信息。
- 一种电子设备,包括:处理器;用于存储所述处理器可执行指令的存储器;其中,所述处理器被配置为执行所述指令,以实现如下步骤:获取目标剧本的分镜剧本信息;确定所述分镜剧本信息对应的目标角色素材,以及所述目标角色素材对应的演员属性信息和动作属性信息;基于所述演员属性信息对应的预设动作素材、所述动作属性信息对应的目标标准动作素材和所述目标角色素材,生成目标视频。
- 根据权利要求21所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:基于所述演员属性信息,从预设动作素材库中确定所述预设动作素材;基于所述动作属性信息,从标准动作素材库中确定所述目标标准动作素材;基于所述目标标准动作素材和所述预设动作素材,生成分镜角色素材;基于所述分镜角色素材和所述目标角色素材,生成所述目标视频。
- 根据权利要求22所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:对所述目标标准动作素材与所述预设动作素材进行骨骼匹配,得到骨骼匹配结果;基于所述骨骼匹配结果,从所述预设动作素材中确定与所述目标标准动作素材匹配的目标动作素材;基于所述目标标准动作素材,对所述目标动作素材进行动作校准,得到所述分镜角色素材。
- 根据权利要求22所述的电子设备，其中，所述处理器被配置为执行所述指令，以实现如下步骤：对于每个所述分镜剧本信息，基于所述分镜角色素材和第一展示位置信息，生成所述分镜剧本信息对应的分镜画面，所述第一展示位置信息表示所述目标角色素材在待生成的分镜画面中的位置；按照所述多个分镜剧本信息在所述目标剧本中的顺序，将所述多个分镜剧本信息对应的多个分镜画面进行合成，得到所述目标视频。
- 根据权利要求22所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:确定所述分镜剧本信息对应的目标场景素材;基于所述分镜角色素材和所述目标场景素材,生成所述目标视频。
- 根据权利要求22至25任一所述的电子设备,其中,所述预设动作素材为从目标演员的动作视频中提取的动作素材,所述目标演员为所述演员属性信息对应的演员,所述处理器被配置为执行所述指令,以实现如下步骤:获取多个第二预设演员分别在预设背景下拍摄的动作视频,所述多个第二预设演员包括所述目标演员;从每个第二预设演员的动作视频中,提取所述每个第二预设演员对应的骨骼序列图像;将所述每个第二预设演员对应的骨骼序列图像,确定为所述每个第二预设演员的动作素材;基于所述多个第二预设演员的动作素材,生成所述预设动作素材库。
- 根据权利要求22至25任一所述的电子设备,其中,所述目标标准动作素材为标准动作素材中与所述动作属性信息匹配的动作素材,所述标准动作素材为从至少一个第一预设演员的标准动作视频中提取的动作素材,所述处理器被配置为执行所述指令,以实现如下步骤:获取所述至少一个第一预设演员在预设背景下拍摄的所述标准动作视频;从所述标准动作视频中,提取每个第一预设演员对应的骨骼序列图像;将所述每个第一预设演员对应的骨骼序列图像,确定为所述每个第一预设演员的标准动作素材;基于所述至少一个第一预设演员的标准动作素材,生成所述标准动作素材库。
- 根据权利要求21至25任一所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:获取所述目标剧本的剧本内容信息;对所述剧本内容信息进行语义识别,得到语义识别结果;基于所述语义识别结果对所述目标剧本进行分镜处理,得到所述分镜剧本信息。
- 根据权利要求21至25任一所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:显示所述目标剧本对应的分镜编辑页面,所述分镜编辑页面包括分镜剧本展示区域;在所述分镜剧本展示区域展示所述分镜剧本信息。
- 根据权利要求29所述的电子设备,其中,所述分镜编辑页面还包括分镜素材展示区域、角色素材配置区域和分镜画面展示区域,所述处理器被配置为执行所述指令,以实现如下步骤:响应于对所述分镜素材展示区域中的角色素材的触发操作,将所触发的角色素材,确定为所述目标角色素材;在所述分镜画面展示区域展示所述目标角色素材;在所述角色素材配置区域展示所述目标角色素材对应的角色属性的配置操作信息;基于所述配置操作信息,配置所述演员属性信息和所述动作属性信息。
- 一种非易失性计算机可读存储介质,其中,当所述存储介质中的指令由电子设备的处理器执行时,使得所述电子设备能够执行如下步骤:获取目标剧本的分镜剧本信息;确定所述分镜剧本信息对应的目标角色素材,以及所述目标角色素材对应的演员属性信息和动作属性信息;基于所述演员属性信息对应的预设动作素材、所述动作属性信息对应的目标标准动作素材和所述目标角色素材,生成目标视频。
- 一种计算机程序产品,包括计算机程序,其中,所述计算机程序被处理器执行时实现如下步骤:获取目标剧本的分镜剧本信息;确定所述分镜剧本信息对应的目标角色素材,以及所述目标角色素材对应的演员属性信息和动作属性信息;基于所述演员属性信息对应的预设动作素材、所述动作属性信息对应的目标标准动作素材和所述目标角色素材,生成目标视频。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110862793.8 | 2021-07-29 | ||
CN202110862793.8A CN113727039B (zh) | 2021-07-29 | 2021-07-29 | 视频生成方法、装置、电子设备及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023005194A1 true WO2023005194A1 (zh) | 2023-02-02 |
Family
ID=78674340
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/076700 WO2023005194A1 (zh) | 2021-07-29 | 2022-02-17 | 视频生成方法及电子设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113727039B (zh) |
WO (1) | WO2023005194A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113727039B (zh) * | 2021-07-29 | 2022-12-27 | 北京达佳互联信息技术有限公司 | 视频生成方法、装置、电子设备及存储介质 |
CN114567819B (zh) * | 2022-02-23 | 2023-08-18 | 中国平安人寿保险股份有限公司 | 视频生成方法、装置、电子设备及存储介质 |
CN114862991A (zh) * | 2022-04-24 | 2022-08-05 | 北京达佳互联信息技术有限公司 | 动画生成方法、装置及电子设备 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120311448A1 (en) * | 2011-06-03 | 2012-12-06 | Maha Achour | System and methods for collaborative online multimedia production |
US20130151970A1 (en) * | 2011-06-03 | 2013-06-13 | Maha Achour | System and Methods for Distributed Multimedia Production |
US8988611B1 (en) * | 2012-12-20 | 2015-03-24 | Kevin Terry | Private movie production system and method |
CN107067450A (zh) * | 2017-04-21 | 2017-08-18 | 福建中金在线信息科技有限公司 | 一种视频的制作方法和装置 |
CN108549655A (zh) * | 2018-03-09 | 2018-09-18 | 阿里巴巴集团控股有限公司 | 一种影视作品的制作方法、装置及设备 |
CN113014832A (zh) * | 2019-12-19 | 2021-06-22 | 志贺司 | 影像编辑系统及影像编辑方法 |
CN113727039A (zh) * | 2021-07-29 | 2021-11-30 | 北京达佳互联信息技术有限公司 | 视频生成方法、装置、电子设备及存储介质 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102184200A (zh) * | 2010-12-13 | 2011-09-14 | 中国人民解放军国防科学技术大学 | 一种计算机辅助的动画图文分镜半自动生成方法 |
WO2013019638A1 (en) * | 2011-07-29 | 2013-02-07 | Cisco Technology, Inc. | Method, computer- readable storage medium, and apparatus for modifying the layout used by a video composing unit to generate a composite video signal |
CN108124187A (zh) * | 2017-11-24 | 2018-06-05 | 互影科技(北京)有限公司 | 交互视频的生成方法及装置 |
US10789755B2 (en) * | 2018-04-03 | 2020-09-29 | Sri International | Artificial intelligence in interactive storytelling |
CN108989705B (zh) * | 2018-08-31 | 2020-05-22 | 百度在线网络技术(北京)有限公司 | 一种虚拟形象的视频制作方法、装置和终端 |
US11288880B2 (en) * | 2019-01-18 | 2022-03-29 | Snap Inc. | Template-based generation of personalized videos |
CN110708596A (zh) * | 2019-09-29 | 2020-01-17 | 北京达佳互联信息技术有限公司 | 生成视频的方法、装置、电子设备及可读存储介质 |
CN112734883A (zh) * | 2021-01-25 | 2021-04-30 | 腾讯科技(深圳)有限公司 | 一种数据处理方法、装置、电子设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN113727039A (zh) | 2021-11-30 |
CN113727039B (zh) | 2022-12-27 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22847814; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.05.2024)