WO2014091484A1 - A system and method for creating a video - Google Patents


Info

Publication number
WO2014091484A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
performer
recording
script
recording system
Prior art date
Application number
PCT/IL2013/051018
Other languages
French (fr)
Inventor
Eran POLACK
Amit FARBMAN
Original Assignee
Scooltv, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Scooltv, Inc. filed Critical Scooltv, Inc.
Publication of WO2014091484A1 publication Critical patent/WO2014091484A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/79 Processing of colour television signals in connection with recording
    • H04N 9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N 9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N 9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H04N 9/8211 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being a sound signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera

Definitions

  • the subject matter relates generally to a method of recording of a video using a video recording system.
  • Directing and recording a video can be done in a person's home or some other location using a video camera. Editing the video can be done on a home computer or laptop.
  • the person recording the video can choose one of two approaches to record a video. The first is to record the entire video from beginning to end in one take. The second is to record multiple takes of different scenes of the video and then edit the various scenes together to create the video. Editing the video is done after recording of the multiple takes is complete: an editor chooses frames from the various recorded takes, which are combined to make the final video. The editor may also choose music and noise effects to be added to the video, and may choose frames from various cameras that capture the frames simultaneously from different angles.
  • the editor or director of the video may also determine the arrangement, position and location of the camera or various cameras. The director records as many takes from as many camera views as required for editing, and then splices the selected scenes from the various takes and camera views.
  • a video recording system comprising a textual script input configured to receive a textual script and a recording collector configured to collect video data received from one or more video cameras.
  • the video recording system further comprises a script unit for determining a predefined part of the textual script to be shown on a display device to a performer of a video when recording the video based on the textual script; and wherein, when recording the video based on the textual script, said display device further displays the performer inside the virtual environment.
  • the video recording system further comprising an editing unit configured to edit a video content containing one or more video-takes collected from the one or more video cameras.
  • the video recording system further comprising a background audio sample database.
  • the background audio sample database comprises audio samples to be added as background audio samples into the video.
  • the video recording system further comprises a motion receiver configured to detect a location of the performer while collecting video data received from one or more video cameras enabling the video recording system to display a correction instruction to the performer when the performer is not in a required location.
  • the video recording system further comprising a video storage for storing the video and relevant media connected to the video.
  • the virtual environment database comprises one or more virtual environments.
  • the display device displays a virtual reenactment of the video.
  • the script unit recognizes that the textual script is in a language that is not English; the script unit is enabled to translate the textual script to one or more languages; the display device displays the textual script in the one or more languages.
  • the method further comprises designating a background audio sample from a background audio sample database. In some cases, the method further comprises determining scene characteristics. In some cases, the method further comprises performing one or more live reviews of the virtual enactment.
  • the method further comprising automatically searching for relevant media to insert into a video; and inserting relevant media into the video.
  • the method further comprises displaying a demonstrative virtual enactment of the textual script.
  • the method further comprises editing the video from one or more video-takes.
  • the image lighting is digitally enhanced during the recording of the one or more video-takes.
  • the method further comprises recognizing a performer's facial feature; analyzing the performer's facial feature; determining a standard for a performer's facial feature; automatically determining that the facial feature is out of a predefined standard; automatically determining a need for make-up; displaying a message to the performer to improve the performer's facial feature.
  • the facial features comprise sweating, discoloration of skin, nervous tics, and a combination thereof.
  • the method further comprises designating an avatar that appears in the video instead of the performer.
  • the method further comprises inserting a guest performer into the video during the recording of the video.
  • Figure 1 shows a video recording system, according to some exemplary embodiments of the subject matter
  • Figures 2A-2B show a method for recording a video, according to some exemplary embodiments of the subject matter
  • Figure 3 shows a recording environment, according to some exemplary embodiments of the subject matter
  • Figure 4A shows a performer of a video on a display screen without a virtual environment, according to some exemplary embodiments of the subject matter; and
  • Figure 4B shows a performer of a video on a display screen in a virtual environment, according to some exemplary embodiments of the subject matter.
  • FIG. 1 shows a video recording system for recording a video, according to some exemplary embodiments of the subject matter.
  • the video recording system 100 may comprise a textual script input 125 to receive a textual script, for example an OCR unit to convert an image of the textual script into text data.
  • the textual script is transmitted from the textual script input 125 to a script unit 105, which transmits the textual script to be displayed on a display device 130, such as a computer screen, a smartphone screen, or the like.
  • the script unit 105 may receive the textual script as a file, for example as a text file, a Microsoft Word document, or the like.
  • the display device 130 displays the textual script so a performer may read the words the performer has to say while recording the video.
  • the textual script is displayed on the bottom of the display device 130.
  • the script unit 105 may determine the language of the textual script and be enabled to translate the textual script into one or more languages.
  • the textual script is in Russian but the performer wishes the textual script to appear in Spanish on the display device 130.
  • the script unit 105 then translates the textual script into Spanish and the textual script is displayed in Spanish on the display device 130.
  • the video recording system 100 may also comprise a recording collector 160, which receives video data, for example from a video camera or a smartphone camera.
  • the recording collector 160 receives the video data from one or more cameras that record video-takes.
  • the video recording system 100 comprises a video processor 145, which designates scene characteristics of the video.
  • the scene characteristics may comprise scene tempo, camera cues, lighting, and the like.
  • the video recording system 100 may further comprise a motion receiver 155, which receives movements collected by a motion sensor, such as a motion sensor suit, an Xbox Kinect, or the like.
  • the video recording system 100 comprises a virtual environment database 110, which stores one or more virtual environments that the performer can elect from to be used as a virtual environment of the video.
  • the performer may elect to use a newsroom virtual environment in which the performer performs as a news anchor.
  • the performer inputs the choice to the video recording system 100.
  • the video recording system 100 receives the performer's inputted virtual environment and obtains the virtual environment and transfers the virtual environment to the display device 130.
  • the video recording system 100 may also comprise an editing unit 165, which edits the video.
  • the performer may record one or more video-takes of the textual script.
  • the editing unit 165 uses the one or more video-takes to designate preferred video-takes or scenes from the one or more video-takes that may be spliced together to form the video.
  • the editing unit 165 may also comprise a plurality of filters, which enable enhancing the video, for example a color filter, which may change the color of the video and designate certain scenes to appear only in blue and white, while other scenes are in full color.
  • the video recording system 100 may communicate with an editor input 135, such as a keyboard, which enables the performer to make manual editing changes to the video.
  • the video recording system 100 may also comprise a relevant media input unit 170, which searches for relevant media to be added into the video.
  • the relevant media input unit 170 searches for relevant media that may be related to the content of the video, for example searching for internet videos that discuss the content of the video.
  • the relevant media input unit 170 obtains the relevant media and transfers the relevant media to the editing unit 165, which adds the relevant media at a designated time mark of the video.
  • the time mark at which the relevant media is added may be designated by the performer using the editor input 135.
  • the editing unit 165 may designate the time marker according to key words in the textual script.
  • the editing unit 165 may add the relevant media as a content layer in the video, which enables a viewer of the video to watch the video with or without the relevant media.
  • the video and the relevant media are stored in a video storage unit 185.
  • the relevant media may be stored with time marker data, which designates the time marker at which the relevant media is displayed in the video.
  • the video recording system 100 comprises a video player 180, which plays the video on the display device 130 to enable the performer to watch the video.
  • the video processor 145 may analyze the recordings obtained by the recording collector 160 to determine if some video characteristics, such as the lighting, are sufficient or if additional lighting is required. The video processor 145 then transfers a message to the display device 130 to inform the performer that lighting is insufficient for recording. In some cases, the video processor 145 may comprise lighting filters which may increase or decrease lighting or may be able to alter the contrast of the recorded data. The video processor 145 may analyze the recordings obtained by the recording collector 160 to determine if a performer's facial features are of a predefined standard.
  • the video processor 145 displays a message on the display device 130 to inform the performer that the performer's facial features are not of the predefined standard.
  • the video processor 145 may determine whether the performer is wearing sufficient makeup according to a predefined rule and display a message on the display device 130 to inform the performer that the performer requires more makeup.
  • the video processor 145 recognizes that the performer's facial features are not of the predefined standard or that the performer is not wearing enough makeup and corrects the image by digitally enhancing the image to improve the performer's appearance.
  • the video processor 145 recognizes that the performer is wearing clothing which conflicts with the virtual environment used to record the video and displays a message on the display device 130 to inform the performer that the clothes conflict with the virtual environment.
  • the performer is wearing a green shirt that is very similar to the virtual environment of a jungle. The color of the green shirt blends into the virtual environment and the performer cannot be distinguished from the background.
  • the video processor 145 recognizes that the performer is indistinguishable from the background and the display device 130 displays a background conflict message to the performer to inform the performer that the performer's clothing is inappropriate for the virtual background.
  • the video processor 145 may display a recommendation of proper clothing that may be worn for recording the video in the virtual environment, for example, displaying a message recommending an orange shirt to the performer in the jungle virtual environment.
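The clothing-conflict check described above can be sketched as a simple colour-distance test. This is an illustrative sketch only: the patent does not specify how the conflict is detected, and the function names, RGB inputs, and threshold below are assumptions.

```python
def color_conflict(shirt_rgb, background_rgb, threshold=60):
    """Flag clothing whose colour is too close to the dominant colour of
    the virtual environment (Euclidean distance in RGB space).

    The threshold is an illustrative value, not taken from the patent."""
    dist = sum((a - b) ** 2 for a, b in zip(shirt_rgb, background_rgb)) ** 0.5
    return dist < threshold

def clothing_message(shirt_rgb, background_rgb):
    # Mirrors the patent's example: warn on a jungle-green shirt,
    # recommend a contrasting colour instead.
    if color_conflict(shirt_rgb, background_rgb):
        return "Clothing blends into the virtual background - try a contrasting colour"
    return None
```

Under this sketch, a green shirt against a jungle-green background triggers the warning, while an orange shirt does not.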
  • Step 200 discloses receiving a textual script.
  • the textual script includes the dialogue of the recorded video.
  • the textual script comprises words that the performer says during the recording of the video.
  • the textual script is received through the textual script input 125 of figure 1, for example, as an OCR reading of an Adobe Acrobat file.
  • the textual script is transferred from the textual script input 125 to the script unit 105 of figure 1.
  • the script unit 105 analyzes the textual script and determines the time in which each part of the text is to be displayed to the performer. For example, the script unit 105 may determine that 5 words are displayed per second. The timing may change from one take to another.
  • the script unit 105 transmits text parts from the textual script to the display device 130 of figure 1, such that the text part of the textual script is displayed on the display device 130 of figure 1 while the video is recorded.
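The pacing logic described above (e.g. five words displayed per second) can be sketched as a chunker that tags each text part with a display start time. The function and parameter names are illustrative; the patent does not specify an implementation.

```python
def schedule_script(script: str, words_per_second: float = 5.0, chunk_words: int = 10):
    """Split a textual script into display chunks, each tagged with the
    second at which the script unit should send it to the display device."""
    words = script.split()
    chunks = []
    for i in range(0, len(words), chunk_words):
        start_time = i / words_per_second
        chunks.append((start_time, " ".join(words[i:i + chunk_words])))
    return chunks
```

Changing `words_per_second` between takes models the note that the timing may change from one take to another.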
  • the textual script may be written in a language other than English.
  • the script unit 105 may recognize which language the textual script is written in and be enabled to translate the textual script into a different language, for example, translating the script from Russian to Spanish.
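The language-handling step can be sketched as below. The translation itself is delegated to a caller-supplied function, since the patent names no particular translation service; all names here are illustrative.

```python
def prepare_script_for_display(script_text, detected_lang, display_lang, translate):
    """Translate the script only when the detected language differs from
    the language the performer chose for the display device.

    `translate` is a caller-supplied callable (text, src, dst) -> text."""
    if detected_lang != display_lang:
        return translate(script_text, detected_lang, display_lang)
    return script_text
```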
  • Step 205 discloses designating a virtual environment used for recording the video.
  • the performer elects a virtual environment to use as a background for recording the video.
  • the virtual environment is elected out of one or more virtual environments stored in the virtual environment database 110 of Figure 1.
  • the video recording system 100 receives the performer's input of the virtual environment elected from the one or more virtual environments. Once the virtual environment is elected the video recording system 100 designates the virtual environment to be the background of the recorded video. In some cases, the video recording system 100 may designate more than one virtual environment to be used on the same recorded video according to the textual script and video that is recorded by the performer. For example, the performer is recording a survival video comparing survival tips in different geographical locations.
  • the survival video shows the performer giving survival tips in a jungle, where the virtual environment is a jungle, and then the performer gives survival tips in a desert, where the virtual environment is a desert.
  • the virtual environment may be obtained from an online virtual environment database, for example, downloading the virtual environment from a website.
  • Step 208 discloses designating an avatar.
  • the performer may elect to have the avatar appear in the video instead of the performer.
  • the performer is recorded performing the role in the video, but the avatar is displayed in the video instead of the performer, performing all of the performer's actions.
  • the performer may have the option to see the performer in the video displayed on the display device 130, and the avatar is only shown after the recording of the video is complete and the video is edited.
  • the performer may elect to have the avatar shown instead of the performer on the display device 130 during the recording of the video.
  • the performer is making a video about life on Mars.
  • the performer elects a Martian avatar that is displayed in the video instead of the performer.
  • the Martian avatar performs the movements of the performer according to the performer's moves collected by the motion receiver 155 of figure 1.
  • Step 210 discloses designating a background audio sample for the video.
  • the performer designates background audio samples that will be used in the video.
  • the background audio samples are designated from the background audio sample database 115 of Figure 1.
  • the performer may import the background audio samples from an external database not located in the video recording system 100, for example designating the background audio samples from an internet website, FTP server, external hard drive or the like.
  • the background audio samples may comprise music or sound effects, which are cued to sound at designated time markers during the recording.
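Cueing background samples at designated time markers can be sketched as a lookup of which samples should be sounding at a given moment. The data layout is an assumption; the patent only says samples are cued to sound at designated time markers.

```python
def active_samples(cues, current_time):
    """Return the names of background audio samples whose cue window
    covers current_time; `cues` maps sample name -> (start, end) seconds."""
    return [name for name, (start, end) in cues.items()
            if start <= current_time < end]
```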
  • Step 215 discloses determining scene characteristics of the recorded video.
  • the scene characteristics are determined according to the virtual environment the performer elects.
  • the scene characteristics may comprise the cues for camera changes, cuing the audio samples to appear at the desired time marks of the video, the location in the virtual environment the performer should be in during recording of the video, and the like.
  • the performer may be enabled by the video recording system 100 to alter the scene characteristics by inputting modified scene characteristics.
  • the performer may designate one or more additional characters into the video.
  • the video processor 145 may designate the dialogue and actions of the one or more additional characters. In some cases the dialogue of the one or more additional characters is determined by the textual script.
  • the video processor 145 designates a voice to an additional character of the one or more additional characters, which is the voice in which the additional character's dialogue is pronounced.
  • Step 220 discloses displaying the textual script on the display screen.
  • the textual script is transferred from the script unit 105 of Figure 1 to the display device 130 of Figure 1, which displays text of the textual script on a predetermined area of the display device 130 of Figure 1. For example, the text is displayed on the bottom of the display device 130.
  • Step 225 discloses displaying a demonstrative virtual enactment of the textual script.
  • the performer may elect to watch a virtual enactment of the textual script on the display device 130 prior to performing one or more live reviews or recording several video-takes.
  • the video processor 145 may enable the performer to view a demonstrative virtual reenactment of the textual script.
  • the virtual reenactment is displayed on the display device 130 for the performer to view an estimation of what the video should be.
  • the virtual reenactment enables the performer to determine if anything in the textual script needs to be modified prior to recording the video, and so the performer can see the performance requirements.
  • Step 230 discloses performing one or more live reviews of the video.
  • the video recording system 100 enables the performer to do one or more live reviews of the textual script before recording the video-takes.
  • the performer may perform the textual script during a live review of the one or more live reviews, and may also determine whether any revisions are necessary in the textual script.
  • the text of the textual script may need to be modified to make the words easier for the performer to say.
  • the live review may enable the performer to determine that the scene characteristics need to be modified to improve the quality of the video. For example, the performer determines from the live review that the scene tempo needs to change to enable a viewer of the video to more easily follow video content when the viewer watches the video.
  • the video recording system 100 may receive a number of live reviews the performer wishes to perform prior to recording one or more video takes.
  • the one or more live reviews are recorded by the video processor 145 to be used during editing, in case some scenes were recorded more successfully during the one or more live reviews than in the video-takes.
  • Step 235 discloses recording one or more live video-takes of the video.
  • the video recording system 100 may automatically begin recording the one or more video-takes, or wait to receive a command from the performer to begin recording the one or more video-takes.
  • the video processor 145 operates the recording collector 160 of figure 1 to receive video-takes from the one or more cameras.
  • the display device 130 shows the recorded video to the performer while a video-take is being recorded.
  • the display device 130 shows the performer the text of the textual script while recording the video takes.
  • the video may comprise receiving a guest performer appearance in the video.
  • the guest performer has a guest performance system which may transfer a guest performance video feed to the video recording system 100.
  • a guest performance is displayed in the video to show the guest performer in the video.
  • once the video recording system 100 receives the guest performance, the video recording system 100 determines where in the video the guest performance is inserted.
  • the motion receiver 155 of figure 1 detects whether the performer is in a required location, for example in front of the one or more cameras. When the motion receiver 155 detects that the performer moves out of the required location, a message to return to the required location is displayed on the display device 130. The message may be in a format of arrows or a correction instruction indicating where the required location is relative to the current location of the performer as received from the motion receiver 155.
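The arrow-style correction instruction can be sketched as a comparison of the performer's detected position with the required one. The coordinate system, units, and tolerance here are assumptions for illustration.

```python
def correction_instruction(current, required, tolerance=0.5):
    """Return a movement instruction for the performer, or None when the
    performer is within tolerance of the required location (units: metres)."""
    dx = required[0] - current[0]
    dy = required[1] - current[1]
    parts = []
    if abs(dx) > tolerance:
        parts.append("move right" if dx > 0 else "move left")
    if abs(dy) > tolerance:
        parts.append("move forward" if dy > 0 else "move back")
    return ", ".join(parts) if parts else None
```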
  • a camera of the one or more video cameras is designated to record particular frames or scenes according to the scene characteristics. For example, the performer is performing a news show, where the performer tells the news. During each news story, the performer looks into a differently located camera. During a first news story, the performer stares into a camera in front of the performer, and during a second news story the performer stares into a camera that is on a performer's left side.
  • the video processor 145 determines whether the lighting is sufficient. In cases where the lighting is insufficient, the video recording system 100 may transmit a message to the display device 130 informing the performer, or another person assisting the performer to create the video, to turn on more lights. In some exemplary embodiments, the video processor 145 may be enabled to adjust brightness and contrast while recording to improve the lighting and clarity of a video-take of the one or more video-takes. In some exemplary embodiments of the subject matter, the video processor 145 may recognize unwanted facial features on the performer, for example, sweat, facial discoloration, or nervous tics. The video recording system 100 transmits a message to the display device 130 to inform the performer of the unwanted facial features.
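The lighting check can be sketched as a mean-brightness test on a greyscale frame. The patent leaves the actual criterion unspecified, so the threshold and message text below are illustrative.

```python
def lighting_ok(frame, min_brightness=80):
    """frame: rows of greyscale pixel values (0-255).
    True when the mean brightness meets the illustrative minimum."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels) >= min_brightness

def lighting_message(frame):
    # The message the system would push to the display device.
    if lighting_ok(frame):
        return None
    return "Lighting is insufficient for recording - turn on more lights"
```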
  • Step 240 discloses editing the video which was recorded during the one or more video-takes.
  • the video recording system 100 obtains the recorded video-takes.
  • the editing unit 165 receives inputs from the editor input 135 regarding which scenes to keep from the one or more video-takes.
  • the editing unit 165 enables splicing a scene from the video-take of the one or more video-takes into a different video-take of the one or more video-takes, until the video is a final video.
  • editing the video is performed while recording the one or more video-takes. After a scene is recorded, the scene may then be edited according to previous video-takes of the scene.
  • the editing unit 165 automatically determines a video-take of the one or more video-takes and inserts the video-take into the video. Determination of the video take to be inserted into the video may be performed according to the quality of speech, light, performer location and the like.
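Automatic take selection "according to the quality of speech, light, performer location and the like" can be sketched as a weighted score over per-take quality metrics. The weights and field names are illustrative assumptions, not from the patent.

```python
def pick_best_take(takes):
    """takes: dicts with 0-1 quality scores for speech, light and location.
    Returns the take with the highest weighted score (weights illustrative)."""
    def score(take):
        return 0.5 * take["speech"] + 0.3 * take["light"] + 0.2 * take["location"]
    return max(takes, key=score)
```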
  • Step 245 discloses searching for relevant media to be added to the video.
  • the video recording system 100 transfers the video to the relevant media input unit 170 of Figure 1.
  • the relevant media input unit 170 detects relevant media, for example, from the internet to input into the video.
  • Step 250 discloses adding relevant media into the video. Once relevant media has been obtained by the relevant media input unit 170, the relevant media input unit 170 adds the obtained relevant media into the video.
  • the relevant media input unit 170 adds the relevant media by placing time marks where the relevant media begins being displayed in the video. The time mark discloses the location the video is paused and the relevant media is displayed when the video is played by the video player 180 of figure 1.
  • the relevant media and the time marks are stored in the video storage unit 185 of figure 1 with the video.
  • a particular related media may be added into two or more different videos, while the data of the relevant media and the time markers for each video of the two or more videos is stored with the relevant media in the video storage unit 185.
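Storing one piece of relevant media with per-video time markers, as described above, suggests a layout along these lines. The field names and example values are illustrative; the patent specifies no schema.

```python
# One relevant-media record reused by two videos, each association
# carrying its own time marker (seconds into the video).
media_store = {
    "clip-7": {
        "uri": "https://example.com/related-clip.mp4",
        "placements": [
            {"video_id": "vid-1", "time_marker": 95.0},
            {"video_id": "vid-2", "time_marker": 12.5},
        ],
    }
}

def time_marker_for(media_id, video_id, store=media_store):
    """Return the time marker at which the media is displayed in the
    given video, or None if the media is not placed in that video."""
    for placement in store[media_id]["placements"]:
        if placement["video_id"] == video_id:
            return placement["time_marker"]
    return None
```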
  • the performer wishes to improve or correct a live review of a video take while recording the one or more live reviews of step 230 or recording of the one or more video-takes of step 235.
  • the performer wishes to rewind and rerecord a particular part a live review of the one or more live reviews or a video-take of the one or more video-takes.
  • the video recording system 100 performs a method for rewinding and rerecording a video as disclosed in figure 2B, according to exemplary embodiments of the subject matter.
  • Step 260 discloses receiving a command from the performer or a person assisting the performer while recording a live review or a video take.
  • the command may include stopping the recording of the video-take.
  • the command may be a speech command detected by an audio input, such as a microphone connected to the video recording system 100.
  • the video recording system 100 receives the command.
  • Step 265 discloses stopping the video-take.
  • the video recording system 100 stops recording the video-take at the time marker at which the video processor 145 was last recording.
  • Step 270 discloses receiving a command to rewind the video-take.
  • the command may be in a format of audio speech, or inputted into a keyboard or an interface of the display device 130.
  • the performer may input a command to the video recording system 100 to rewind the video-take to a designated rewind time marker to which the performer wants to go back.
  • the video recording system 100 receives the performer's command to rewind the video-take to the designated rewind time marker. For example, the system receives a command to rewind from a current time mark of 5:23 to the previous time mark of 4:19.
  • Step 275 discloses rewinding the video-take to the designated rewind time marker.
  • the video processor 145 rewinds the video-take to the designated rewind time marker and stops once the system reaches the designated rewind time marker.
  • steps 270 and 275 may be done simultaneously, where the video processor 145 receives a continuous rewind command and rewinds until no more commands are received.
  • the video processor 145 receives the rewind command from time marker 5:23 and continues receiving the rewind command until time marker 4:19, at which point the rewind commands are no longer received by the video recording system 100 and the video recording system 100 stops rewinding the video-take.
  • Step 280 discloses resuming recording the video-take from the designated rewind time marker. Once the video recording system 100 reaches the designated rewind time marker, the video recording system 100 may resume recording the video-take from the designated rewind time marker and display the text associated with the previous time marker on the display device.
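The stop/rewind/resume flow of Figure 2B can be sketched as a small state machine. Positions are in seconds, the one-second rewind step is an assumption, and a real system would drive these methods from speech or keyboard commands.

```python
class TakeRecorder:
    """Minimal sketch of the rewind-and-rerecord flow: stop the take,
    step backwards while rewind commands keep arriving, then resume."""

    def __init__(self):
        self.position = 0.0   # current time marker, in seconds
        self.recording = False

    def stop(self):
        # Step 265: stop recording at the current time marker.
        self.recording = False

    def rewind_step(self, step=1.0):
        # One call per received rewind command; rewinding ends
        # when commands stop arriving (steps 270-275).
        self.position = max(0.0, self.position - step)

    def resume(self):
        # Step 280: resume recording from the designated rewind marker.
        self.recording = True
```

For the example in the text, rewinding from time marker 5:23 (323 s) to 4:19 (259 s) corresponds to 64 one-second rewind commands, after which recording resumes.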
  • FIG. 3 shows a video recording environment, according to some exemplary embodiments of the subject matter.
  • the recording environment comprises a video recording system 300, such as a PC or a laptop.
  • the video recording system 300 communicates with one or more cameras arranged at different locations to obtain different video takes.
  • a direct camera 310 may be located in front of the performer to obtain a front video-take, while a left camera 312 and a right camera 314 obtain side view video-takes.
  • a camera of the one or more cameras may be placed in a different location for recording a different view of a scene.
  • the video recording system 300 further communicates with a display device 350, which displays a video take on a display device screen 355.
  • the video recording system 300 may display text on a teleprompter 325 to enable a performer to see a predefined part of the textual script.
  • the video recording system 300 further communicates with an audio input 340, such as a microphone, for collecting audio data.
  • the audio data may comprise a performer's spoken words.
  • the video recording system 300 further communicates with one or more motion sensors connected to the body of a performer 301.
  • the performer 301 wears the one or more motion sensors on a performer's head 360, on a performer's left hand 368, a performer's right arm 362, a performer's left leg 366 and a performer's right leg 364.
  • the one or more motion sensors transfer the movements the performer performs to the video recording system 300, for example to the motion receiver 155 of figure 1.
  • the one or more sensors may be used to detect that the performer 301 located at the right place, for example in front of the one or more cameras.
  • the motion receiver 155 of figure 1 receives a performer's location from the one or more motion sensors and transfers a message to the display device 350 to inform the performer to return to the designated location.
  • the performer 301 may move out of the scope the camera.
  • the video recording system 300 recognizes that the performer 301 is not in the correct location based on data collected by the motion sensing input.
  • the video recording system 300 transmits a message on the display device screen 355 informing the performer 301 to move to the correct location.
  • the video processor 145 displays on the display device 130 a preferred position for the performer to be in, for example the performer should sit in a chair or stand slightly off camera.
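The location check described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the coordinate convention, the rectangular required region, and the function name are all assumptions made for the example.

```python
def location_correction(performer_xy, region):
    """Return a correction instruction for the display device if the
    performer is outside the required region, or None when the
    performer is correctly placed.

    performer_xy: (x, y) position reported by the motion sensors
                  (hypothetical normalized coordinates, camera view).
    region:       (x_min, x_max, y_min, y_max) required location.
    """
    x, y = performer_xy
    x_min, x_max, y_min, y_max = region
    moves = []
    # Horizontal correction relative to the camera's view.
    if x < x_min:
        moves.append("move right")
    elif x > x_max:
        moves.append("move left")
    # Depth correction (closer to / further from the camera).
    if y < y_min:
        moves.append("move forward")
    elif y > y_max:
        moves.append("move back")
    return " and ".join(moves) or None
```

A message such as `"move right"` would then be shown on the display device screen, in the spirit of the arrows or correction instruction the description mentions.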
Figure 4A shows a performer of a video on a display screen without a virtual environment, according to some exemplary embodiments of the subject matter. The display device 400 comprises a display screen 420, which shows a performer 405 in front of a camera (not shown). The display screen 420 is located so the performer 405 may view what is being recorded during the recording of the video-take. The display screen 420 shows the performer 405 prior to a virtual environment being inserted into a video-take. For example, the performer 405 may be instructing how to play basketball and has designated a space in a room where the performer explains the rules of basketball. The bottom of the display device 400 comprises a teleprompter 410 for displaying text for the performer 405 while recording the video-take.
Figure 4B shows a performer of a video on a display screen in a virtual environment, according to some exemplary embodiments of the subject matter. The display device 400 shows the performer in a virtual environment 430. The virtual environment comprises a virtual basket 450, which the performer 405 may use to demonstrate how to throw a basketball 445. The video may comprise an additional character 460, for example an opposing basketball player. The video processor 145 of figure 1 designates the actions and dialogue of the additional character 460; the dialogue and the actions of the additional character 460 may be predetermined by the video processor 145 according to the designated virtual environment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The subject matter discloses a video recording system, comprising a textual script input configured to receive a textual script and a recording collector configured to collect video data received from one or more video cameras. The system further comprises a script unit for determining a predefined part of the textual script to be shown on a display device to a performer of a video when recording the video based on the textual script; when recording the video based on the textual script, said display device further displays the performer inside the virtual environment.

Description

A SYSTEM AND METHOD FOR CREATING A VIDEO
FIELD OF THE INVENTION
The subject matter relates generally to a method of recording a video using a video recording system.
BACKGROUND OF THE INVENTION
Online videos have become increasingly popular and the quality of videos posted online has improved. Uploaders are constantly pursuing better quality production while trying to maintain low costs in recording and editing videos that can later be posted online or shared with others, for example via e-mail. Uploaders have the ability to create videos at home and use low cost equipment to produce high quality videos that are easy to upload and enjoyable to watch.
Directing and recording a video can be done in a person's home or some other location using a video camera. Editing the video can be done on a home computer or laptop. The person recording the video can choose one of two approaches to record a video. The first is to record the entire video from beginning to end in one take. The second approach is to record multiple takes of different scenes of the video and then edit the various scenes together to create the video. Editing the video is done after recording of the multiple takes is complete and an editor chooses frames from various takes recorded, which are combined to make the final video. The editor may also choose music and noise effects to be added to the video. The editor may choose frames from various cameras that capture the frames simultaneously from different angles. The editor or director of the video may also determine the arrangement, position and location of the camera or various cameras. The director records as many takes from as many camera views as required for editing and then splices the selected scenes from the various takes and camera views.
SUMMARY
It is an object of the subject matter to disclose a video recording system, comprising a textual script input configured to receive a textual script and a recording collector configured to collect video data received from one or more video cameras. The video recording system further comprises a script unit for determining a predefined part of the textual script to be shown on a display device to a performer of a video when recording the video based on the textual script; and wherein when recording the video based on the textual script, said display device further displays the performer inside the virtual environment.
In some cases, the video recording system further comprises an editing unit configured to edit a video content containing one or more video-takes collected from the one or more video cameras.
In some cases, the video recording system further comprises a background audio sample database, the background audio sample database comprising audio samples to be added as background audio samples into the video.
In some cases, the video recording system further comprises a motion receiver configured to detect a location of the performer while collecting video data received from one or more video cameras enabling the video recording system to display a correction instruction to the performer when the performer is not in a required location.
In some cases, the video recording system further comprises a video storage for storing the video and relevant media connected to the video.
In some cases, the virtual environment database comprises one or more virtual environments. In some cases, the display device displays a virtual reenactment of the video.
In some cases, the script unit recognizes that the textual script is in a language other than English; the script unit is enabled to translate the textual script into one or more languages, and the display device displays the textual script in the one or more languages.
It is another object of the subject matter to disclose a method, comprising receiving a textual script; designating a virtual environment from a virtual environment database; recording one or more video-takes of the textual script, wherein a performer is viewing the performer in the virtual environment during the recording; displaying the textual script on a teleprompter while one or more video-takes are recorded.
In some cases, the method further comprises designating a background audio sample from a background audio sample database. In some cases, the method further comprises determining scene characteristics. In some cases, the method further comprises performing one or more live reviews of the virtual enactment.
In some cases, the method further comprises automatically searching for relevant media to insert into a video; and inserting relevant media into the video. In some cases, the method further comprises displaying a demonstrative virtual enactment of the textual script. In some cases, the method further comprises editing the video from one or more video-takes. In some cases, the image lighting is digitally enhanced during the recording of the one or more video-takes.
In some cases, the method further comprises recognizing a performer's facial feature; analyzing the performer's facial feature; determining a standard for a performer's facial feature; automatically determining that the facial feature is out of a predefined standard; automatically determining a need for make-up; displaying a message to the performer to improve the performer's facial feature.
In some cases, the facial features comprise sweating, discoloration of skin, nervous tics, or a combination thereof. In some cases, the method further comprises designating an avatar that appears in the video instead of the performer. In some cases, the method further comprises inserting a guest performer into the video during the recording of the video.
It is another object of the subject matter to disclose a method, comprising receiving a command to stop recording a video; stopping a video-take used for recording the video; receiving a command to rewind the video; rewinding the take to a designated time mark; resuming recording the take of the one or more takes from the designated time mark.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary non-limited embodiments of the disclosed subject matter will be described, with reference to the following description of the embodiments, in conjunction with the figures. The figures are generally not shown to scale and any sizes are only meant to be exemplary and not necessarily limiting. Corresponding or like elements are optionally designated by the same numerals or letters.
Figure 1 shows a video recording system, according to some exemplary embodiments of the subject matter;
Figures 2A-2B show a method for recording a video, according to some exemplary embodiments of the subject matter;
Figure 3 shows a recording environment, according to some exemplary embodiments of the subject matter;
Figure 4A shows a performer of a video on a display screen without a virtual environment, according to some exemplary embodiments of the subject matter, and;
Figure 4B shows a performer of a video on a display screen in a virtual environment, according to some exemplary embodiments of the subject matter.
DETAILED DESCRIPTION
The subject matter discloses a video recording system and method for recording a video, according to some exemplary embodiments. Figure 1 shows a video recording system for recording a video, according to some exemplary embodiments of the subject matter. The video recording system 100 may comprise a textual script input 125 to receive a textual script, for example an OCR unit to convert an image of the textual script into text data. The textual script is transmitted from the textual script input 125 to a script unit 105, which transmits the textual script to be displayed on a display device 130, such as a computer screen, a smartphone screen, or the like. The script unit 105 may receive the textual script as a file, for example as a text file, a Microsoft Word document, or the like. The display device 130 displays the textual script so a performer may read the words the performer has to say while recording the video. For example, the textual script is displayed on the bottom of the display device 130. In some exemplary embodiments, the script unit 105 may determine the language of the textual script and be enabled to translate the textual script into one or more languages. For example, the textual script is in Russian but the performer wishes the textual script to appear in Spanish on the display device 130. The script unit 105 then translates the textual script into Spanish and the textual script is displayed in Spanish on the display device 130.
The video recording system 100 may also comprise a recording collector 160, which receives video data, for example from a video camera or a smartphone camera. The recording collector 160 receives the video data from one or more cameras that record video-takes. The video recording system 100 comprises a video processor 145, which designates scene characteristics of the video. The scene characteristics may comprise scene tempo, camera cues, lighting, and the like. The video recording system 100 may further comprise a motion receiver 155, which receives movements collected by a motion sensor, such as a motion sensor suit, an Xbox Kinect, or the like.
The video recording system 100 comprises a virtual environment database 110, which stores one or more virtual environments that the performer can elect from to be used as a virtual environment of the video. For example, the performer may elect to use a newsroom virtual environment in which the performer performs as a news anchor. Once the virtual environment is designated, the performer inputs the choice to the video recording system 100. The video recording system 100 receives the performer's inputted virtual environment, obtains the virtual environment and transfers the virtual environment to the display device 130. The video recording system 100 may also comprise an editing unit 165, which edits the video. The performer may record one or more video-takes of the textual script. The editing unit 165 uses the one or more video-takes to designate preferred video-takes or scenes from the one or more video-takes that may be spliced together to form the video. The editing unit 165 may also comprise a plurality of filters, which enable enhancing the video, for example a color filter, which may change the color of the video and designate certain scenes to appear only in blue and white, while other scenes are in full color. In some cases, the video recording system 100 may communicate with an editor input 135, such as a keyboard, which enables the performer to make manual editing changes to the video. The video recording system 100 may also comprise a relevant media input unit 170, which searches for relevant media to be added into the video. The relevant media input unit 170 searches for relevant media that may be related to the content of the video, for example searching for internet videos that discuss the content of the video. The relevant media input unit 170 obtains the relevant media and transfers the relevant media to the editing unit 165, which adds the relevant media at a designated time mark of the video.
The time mark at which the relevant media is added may be designated by the performer using the editor input 135. In other cases, the editing unit 165 may designate the time marker according to key words in the textual script. The editing unit 165 may add the relevant media as a content layer in the video, which enables a viewer of the video to watch the video with or without the relevant media. The video and the relevant media are stored in a video storage unit 185. The relevant media may be stored with time marker data, which designates the time marker at which the relevant media is displayed in the video. The video recording system 100 comprises a video player 180, which plays the video on the display device 130 to enable the performer to watch the video.
During the recording of the video, the video processor 145 may analyze the recordings obtained by the recording collector 160 to determine if some video characteristics, such as the lighting, are sufficient or if additional lighting is required. The video processor 145 then transfers a message to the display device 130 to inform the performer that lighting is insufficient for recording. In some cases, the video processor 145 may comprise lighting filters which may increase or decrease lighting or may be able to alter the contrast of the recorded data. The video processor 145 may analyze the recordings obtained by the recording collector 160 to determine if a performer's facial features are of a predefined standard. If the performer's facial features are not of the predefined standard, for example due to sweating, discoloration, flushing, nervous tics and the like, the video processor 145 displays a message on the display device 130 to inform the performer that the performer's facial features are not of the predefined standard. The video processor 145 may determine whether the performer is wearing sufficient makeup according to a predefined rule and display a message on the display device 130 to inform the performer that the performer requires more makeup. In some cases, the video processor 145 recognizes that the performer's facial features are not of the predefined standard or that the performer is not wearing enough makeup and corrects the image by digitally enhancing the image to improve the performer's appearance. In some exemplary embodiments of the subject matter, the video processor 145 recognizes that the performer is wearing clothing which conflicts with the virtual environment used to record the video and displays a message on the display device 130 to inform the performer that the clothes conflict with the virtual environment. For example, the performer is wearing a green shirt that is very similar to the virtual environment of a jungle.
The color of the green shirt blends into the virtual environment and the performer cannot be distinguished from the background. The video processor 145 recognizes that the performer is indistinguishable from the background and the display device 130 displays a background conflict message to the performer to inform the performer that the performer's clothing is inappropriate for the virtual background. In some cases the video processor 145 may display a recommendation of proper clothing that may be worn for recording the video in the virtual environment, for example, displaying a message recommending an orange shirt to the performer in the jungle virtual environment.
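The clothing-conflict check described above can be sketched as a simple color-overlap test. This is an illustrative assumption, not the patent's implementation: the hue-bin representation, the choice of the three dominant background hues, and the 0.5 threshold are all made up for the example.

```python
from collections import Counter

def clothing_conflicts(performer_hues, background_hues, threshold=0.5):
    """Rough check for a performer blending into the virtual background.

    performer_hues / background_hues: lists of coarse hue labels (e.g.
    "green", "brown") sampled from the performer's clothing region and
    from the virtual environment image (hypothetical preprocessing).
    Returns True when most of the clothing falls into the background's
    dominant hues, e.g. a green shirt against a jungle.
    """
    clothing = Counter(performer_hues)
    # The background's three most common hues stand in for "the jungle is green".
    dominant = {h for h, _ in Counter(background_hues).most_common(3)}
    overlap = sum(c for h, c in clothing.items() if h in dominant)
    return overlap / max(sum(clothing.values()), 1) >= threshold
```

When the function returns True, the system would show the background conflict message and, as the description suggests, could recommend a contrasting color such as orange.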
Figures 2A-2B show a method for recording a video, according to some exemplary embodiments of the subject matter. Step 200 discloses receiving a textual script. The textual script includes the dialogue of the recorded video. The textual script comprises words that the performer says during the recording of the video. The textual script is received through the textual script input 125 of figure 1, for example, as an OCR reading of an Adobe Acrobat file. The textual script is transferred from the textual script input 125 to the script unit 105 of figure 1. In step 202, the script unit 105 analyzes the textual script and determines the time in which each part of the text is to be displayed to the performer. For example, the script unit 105 may determine that 5 words are displayed per second. The timing may change from one take to another. In step 203, the script unit 105 transmits text parts from the textual script to the display device 130 of figure 1, such that the text part of the textual script is displayed on the display device 130 of figure 1 while the video is recorded. In some cases, the textual script may be written in a language other than English. In such cases, the script unit 105 may recognize which language the textual script is written in and be enabled to translate the textual script into a different language, for example, translating the script from Russian to Spanish.
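The per-part display timing of step 202 can be sketched as follows. The function name, the ten-word chunk size, and the fixed five-words-per-second rate are assumptions for illustration; the description notes the timing may change from take to take.

```python
def schedule_script(script, words_per_second=5.0, chunk_words=10):
    """Split a textual script into teleprompter chunks and compute the
    time (seconds from the start of the take) at which each chunk is
    displayed, using a constant reading rate as in step 202's example."""
    words = script.split()
    schedule = []
    t = 0.0
    for i in range(0, len(words), chunk_words):
        chunk = words[i:i + chunk_words]
        schedule.append((round(t, 2), " ".join(chunk)))
        # Advance by the time the performer needs to speak this chunk.
        t += len(chunk) / words_per_second
    return schedule
```

The resulting list of (time, text) pairs is what a script unit could hand to the display device so each text part appears when the performer should speak it.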
Step 205 discloses designating a virtual environment used for recording the video. The performer elects a virtual environment to use as a background for recording the video. The virtual environment is elected out of one or more virtual environments stored in the virtual environment database 110 of Figure 1. The video recording system 100 receives the performer's input of the virtual environment elected from the one or more virtual environments. Once the virtual environment is elected, the video recording system 100 designates the virtual environment to be the background of the recorded video. In some cases, the video recording system 100 may designate more than one virtual environment to be used in the same recorded video according to the textual script and video that is recorded by the performer. For example, the performer is recording a survival video comparing survival tips in different geographical locations. The survival video shows the performer giving survival tips in a jungle, where the virtual environment is a jungle, and then the performer gives survival tips in a desert, where the virtual environment is a desert. In some exemplary embodiments of the subject matter, the virtual environment may be obtained from an online virtual environment database, for example, downloading the virtual environment from a website. Step 208 discloses designating an avatar. The performer may elect to have the avatar appear in the video instead of the performer. The performer is recorded performing the role in the video but the avatar is displayed in the video instead of the performer, performing all of the performer's actions. During the recording of the video the performer may have the option to see the performer in the video displayed on the display device 130, and the avatar is only shown after the recording of the video is complete and the video is edited.
The performer may elect to have the avatar shown instead of the performer on the display device 130 during the recording of the video. For example, the performer is making a video about life on Mars. The performer elects a martian avatar that is displayed in the video instead of the performer. The martian avatar performs the movements of the performer according to the performer's moves collected by the motion receiver 155 of figure 1.
Step 210 discloses designating a background audio sample for the video. The performer designates background audio samples that will be used in the video. In some cases the background audio samples are designated from the background audio sample database 115 of Figure 1. In some cases, the performer may import the background audio samples from an external database not located in the video recording system 100, for example designating the background audio samples from an internet website, FTP server, external hard drive or the like. The background audio samples may comprise music or sound effects, which are cued to sound at designated time markers during the recording.
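The cueing of background audio samples at designated time markers can be sketched as a small scheduler. This is a hypothetical model, not the disclosed system: the class name, the polling interface, and the fire-once behavior are assumptions.

```python
class AudioCueScheduler:
    """Fires each background audio sample once when the recording
    reaches its designated time marker (seconds from take start)."""

    def __init__(self, cues):
        # cues: iterable of (time_marker_seconds, sample_name), any order.
        self.pending = sorted(cues)

    def poll(self, current_time):
        """Return the sample names due at or before current_time.
        Each cue fires exactly once; call this as playback advances."""
        fired = []
        while self.pending and self.pending[0][0] <= current_time:
            fired.append(self.pending.pop(0)[1])
        return fired
```

A recorder loop could call `poll` on every tick and start playback of any names returned, matching the description's "cued to sound at designated time markers".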
Step 215 discloses determining scene characteristics of the recorded video. The scene characteristics are determined according to the virtual environment the performer elects. The scene characteristics may comprise the cues for camera changes, cuing the audio samples to sound at the desired time marks of the video, the virtual environment, the location the performer should be in during recording of the video, and the like. In some cases, the performer may be enabled by the video recording system 100 to alter the scene characteristics by inputting modified scene characteristics. In some exemplary embodiments of the subject matter, the performer may designate one or more additional characters into the video. The video processor 145 may designate the dialogue and actions of the one or more additional characters. In some cases the dialogue of the one or more additional characters is determined by the textual script. The video processor 145 designates a voice to an additional character of the one or more additional characters, which is the voice in which the additional character's dialogue is pronounced.
Step 220 discloses displaying the textual script on the display screen. The textual script is transferred from the script unit 105 of Figure 1 to the display device 130 of Figure 1, which displays text of the textual script on a predetermined area of the display device 130 of Figure 1. For example, the text is displayed on the bottom of the display device 130.
Step 225 discloses displaying a demonstrative virtual enactment of the textual script.
The performer may elect to watch a virtual enactment of the textual script on the display device 130 prior to performing one or more live reviews or recording several video-takes. After the video processor 145 receives the scene characteristics, the virtual environment and the background audio samples, the video processor 145 may enable the performer to view a demonstrative virtual reenactment of the textual script. The virtual reenactment is displayed on the display device 130 for the performer to view an estimation of what the video should be. The virtual reenactment enables the performer to determine if anything in the textual script needs to be modified prior to recording the video, and lets the performer see the performance requirements.
Step 230 discloses performing one or more live reviews of the video. The video recording system 100 enables the performer to do one or more live reviews of the textual script before recording the video-takes. The performer may perform the textual script during a live review of the one or more live reviews, and may also determine whether any revisions are necessary in the textual script. For example, the text of the textual script needs to be modified to make the words easier for the performer to say. The live review may enable the performer to determine that the scene characteristics need to be modified to improve the quality of the video. For example, the performer determines from the live review that the scene tempo needs to change to enable a viewer of the video to more easily follow the video content when the viewer watches the video. The video recording system 100 may receive a number of live reviews the performer wishes to perform prior to recording one or more video-takes. In some exemplary embodiments, the one or more live reviews are recorded by the video processor 145 to be used during editing, in case some scenes were recorded more successfully during the one or more live reviews than in the video-takes.
Step 235 discloses recording one or more live video-takes of the video. Once the performer completes the one or more live reviews, the video recording system 100 may automatically begin recording the one or more video-takes, or wait to receive a command from the performer to begin recording the one or more video-takes. The video processor 145 operates the recording collector 160 of figure 1 to receive video-takes from the one or more cameras. When recording the video, the display device 130 shows the recorded video to the performer while a video-take is being recorded. The display device 130 shows the performer the text of the textual script while recording the video-takes. In some exemplary embodiments of the subject matter, the video may comprise a guest performer appearance. The guest performer has a guest performance system which may transfer a guest performance video feed to the video recording system 100. A guest performance is displayed in the video to show the guest performer in the video. After the video recording system 100 receives the guest performance, the video recording system 100 determines where in the video the guest performance is inserted.
When recording a video-take of the one or more video-takes, the motion receiver 155 of figure 1 detects whether the performer is in a required location, for example in front of the one or more cameras. When the motion receiver 155 detects that the performer moves out of the required location, a message to return to the required location is displayed on the display device 130. The message may be in a format of arrows or a correction instruction indicating where the required location is relative to the current location of the performer as received from the motion receiver 155. A camera of the one or more video cameras is designated to record particular frames or scenes according to the scene characteristics. For example, the performer is performing a news show, where the performer presents the news. During each news story, the performer looks into a differently located camera. During a first news story, the performer stares into a camera in front of the performer, and during a second news story the performer stares into a camera that is on the performer's left side.
In some exemplary embodiments of the subject matter, the video processor 145 determines whether the lighting is sufficient. In cases where the lighting is insufficient, the video recording system 100 may transmit a message to the display device 130 informing the performer or another person assisting the performer to create the video to turn on more lights. In some exemplary embodiments, the video processor 145 may be enabled to adjust brightness and contrast while recording to improve the lighting and clarity of a video-take of the one or more video-takes. In some exemplary embodiments of the subject matter, the video processor 145 may recognize unwanted facial features on the performer, for example, sweat, facial discoloration, or nervous tics. The video recording system 100 transmits a message to the display device 130 to inform the performer of the unwanted facial features.
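A minimal sketch of the lighting-sufficiency check, under the assumption that a frame is reduced to 8-bit luminance values and that fixed mean thresholds separate "too dark" from "blown out". The thresholds and message strings are illustrative, not from the disclosure.

```python
def lighting_ok(frame, min_mean=60, max_mean=200):
    """Check a frame's average brightness and return (ok, message).

    frame: flat list of 8-bit luminance samples (0-255), a stand-in
    for the pixel data the recording collector would provide.
    """
    mean = sum(frame) / len(frame)
    if mean < min_mean:
        return False, "insufficient lighting - turn on more lights"
    if mean > max_mean:
        return False, "image overexposed - reduce lighting"
    return True, None
```

On a False result the system would display the message on the display device, or alternatively apply the brightness and contrast adjustment the description mentions.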
Step 240 discloses editing the video which was recorded during the one or more video-takes. The video recording system 100 obtains the recorded video-takes. The editing unit 165 receives inputs from the editor input 135 regarding which scenes to keep from the one or more video-takes. The editing unit 165 enables splicing a scene from the video-take of the one or more video-takes into a different video-take of the one or more video-takes, until the video is a final video. In some exemplary embodiments of the subject matter, editing the video is performed while recording the one or more video-takes. After a scene is recorded, the scene may then be edited according to previous video-takes of the scene. The editing unit 165 automatically determines a video-take of the one or more video-takes and inserts the video-take into the video. Determination of the video take to be inserted into the video may be performed according to the quality of speech, light, performer location and the like.
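The automatic take selection at the end of step 240 ("according to the quality of speech, light, performer location and the like") can be sketched as a weighted score. The metric names, 0-1 scales, and weights are assumptions for illustration only.

```python
def pick_best_take(takes, weights=None):
    """Return the name of the highest-scoring video-take.

    takes: dict mapping take name -> dict of quality metrics, each
    scored 0-1 (hypothetical outputs of upstream analysis).
    """
    weights = weights or {"speech": 0.5, "light": 0.3, "position": 0.2}

    def score(metrics):
        # Weighted sum of the per-take quality metrics.
        return sum(weights[k] * metrics[k] for k in weights)

    return max(takes, key=lambda name: score(takes[name]))
```

An editing unit could call this per scene and splice the winning take into the final video.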
Step 245 discloses searching for relevant media to be added to the video. After the video is edited by the editing unit 165, the video recording system 100 transfers the video to the relevant media input unit 170 of Figure 1. The relevant media input unit 170 detects relevant media, for example, from the internet to input into the video. Step 250 discloses adding relevant media into the video. Once relevant media has been obtained by the relevant media input unit 170, the relevant media input unit 170 adds the obtained relevant media into the video. The relevant media input unit 170 adds the relevant media by placing time marks where the relevant media begins being displayed in the video. The time mark designates the point at which the video is paused and the relevant media is displayed when the video is played by the video player 180 of figure 1. The relevant media and the time marks are stored in the video storage unit 185 of figure 1 with the video. In some cases a particular piece of relevant media may be added into two or more different videos, while the time markers for each video of the two or more videos are stored with the relevant media in the video storage unit 185.
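The storage scheme of steps 245-250, where one piece of relevant media carries a separate time marker per video, can be sketched with a small data structure. The class, field names, and the sample file name are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RelevantMedia:
    """One piece of relevant media, reusable across videos; each video
    stores its own time marker at which playback pauses to show it."""
    media_id: str
    source: str                      # e.g. a downloaded clip file name
    time_marks: dict = field(default_factory=dict)  # video_id -> seconds

    def mark_for(self, video_id, seconds):
        """Designate where this media is shown in a given video."""
        self.time_marks[video_id] = seconds

    def pause_point(self, video_id):
        """Time at which the player pauses the video for this media,
        or None if the media is not used in that video."""
        return self.time_marks.get(video_id)
```

A video player would query `pause_point` for the current video and, at that time, pause playback and display the media, as the description of the content layer suggests.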
In some cases, the performer wishes to improve or correct a live review or a video-take while recording the one or more live reviews of step 230 or recording the one or more video-takes of step 235. The performer wishes to rewind and rerecord a particular part of a live review of the one or more live reviews or of a video-take of the one or more video-takes. In such cases the video recording system 100 performs a method for rewinding and rerecording a video as disclosed in figure 2B, according to exemplary embodiments of the subject matter. Step 260 discloses receiving a command from the performer or a person assisting the performer while recording a live review or a video-take. The command may include stopping the recording of the video-take. The command may be a speech command detected by an audio input, such as a microphone connected to the video recording system 100. The video recording system 100 receives the command.
Step 265 discloses stopping the video-take. The video recording system 100 stops recording the video-take at the time marker at which the video processor 145 was last recording. Step 270 discloses receiving a command to rewind the video-take. The command may be in a format of audio speech, or inputted via a keyboard or an interface of the display device 130. The performer may input a command to the video recording system 100 to rewind the video-take to a designated rewind time marker to which the performer wants to go back. The video recording system 100 receives the performer's command to rewind the video-take to the designated rewind time marker. For example, the system receives a command to rewind from a current time mark of 5:23 to the previous time mark of 4:19.
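Time marks such as 5:23 and 4:19 in the example above can be compared by converting them to seconds. A minimal sketch follows; the m:ss format is taken from the example and the function name is illustrative:

```python
def mark_to_seconds(mark: str) -> int:
    """Convert an 'm:ss' time mark (e.g. '5:23') to a count of seconds."""
    minutes, seconds = mark.split(":")
    return int(minutes) * 60 + int(seconds)

mark_to_seconds("5:23")  # -> 323
mark_to_seconds("4:19")  # -> 259
# Rewinding from 5:23 to 4:19 therefore moves back 64 seconds of the take.
```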
Step 275 discloses rewinding the video-take to the designated rewind time marker. The video processor 145 rewinds the video-take and stops once the system reaches the designated rewind time marker. In some cases, steps 270 and 275 may be performed simultaneously, where the video processor 145 receives a continuous rewind command and rewinds until no more commands are received. For example, the video processor 145 receives the rewind command at time marker 5:23 and continues receiving the rewind command until time marker 4:19, at which point the rewind commands are no longer received by the video recording system 100 and the video recording system 100 stops rewinding the video-take.
Step 280 discloses resuming recording the video-take from the designated rewind time marker. Once the video recording system 100 reaches the designated rewind time marker, the video recording system 100 may resume recording the video-take from the designated rewind time marker and display the text associated with the previous time marker on the display device.
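The stop, rewind, and resume sequence of steps 260 through 280 can be modeled as a small recorder state machine. This is a sketch under assumed names and units (seconds); the patent does not prescribe an implementation:

```python
class TakeRecorder:
    """Minimal model of the stop/rewind/resume flow (steps 260-280).
    Positions are in seconds; all names are illustrative, not from the patent."""

    def __init__(self):
        self.position = 0
        self.recording = False

    def record(self, seconds):
        """Recording the take advances the current time marker (step 235)."""
        self.recording = True
        self.position += seconds

    def stop(self):
        """A stop command halts the take at the last marker (steps 260-265)."""
        self.recording = False

    def rewind_to(self, marker):
        """Rewind to the designated rewind time marker (steps 270-275)."""
        if marker > self.position:
            raise ValueError("cannot rewind forward past the current marker")
        self.position = marker

    def resume(self):
        """Resume recording from the rewind marker (step 280)."""
        self.recording = True

rec = TakeRecorder()
rec.record(323)      # the take runs to 5:23
rec.stop()           # performer issues a stop command
rec.rewind_to(259)   # rewind to 4:19
rec.resume()         # rerecord from 4:19; rec.position == 259
```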
Figure 3 shows a video recording environment, according to some exemplary embodiments of the subject matter. The recording environment comprises a video recording system 300, such as a personal computer or a laptop. The video recording system 300 communicates with one or more cameras arranged at different locations to obtain different video-takes. For example, a direct camera 310 may be located in front of the performer to obtain a front video-take, while a left camera 312 and a right camera 314 obtain side-view video-takes. A camera of the one or more cameras may be placed in a different location for recording a different view of a scene.
The video recording system 300 further communicates with a display device 350, which displays a video-take on a display device screen 355. The video recording system 300 may display text on a teleprompter 325 to enable a performer to see a predefined part of the textual script. The video recording system 300 further communicates with an audio input 340, such as a microphone, for collecting audio data. The audio data may comprise a performer's spoken words.
The video recording system 300 further communicates with one or more motion sensors connected to the body of a performer 301. For example, the performer 301 wears the one or more motion sensors on a performer's head 360, on a performer's left hand 368, on a performer's right arm 362, on a performer's left leg 366 and on a performer's right leg 364. The one or more motion sensors transmit the movements the performer performs to the video recording system 300, for example to the motion receiver 155 of figure 1. The one or more sensors may be used to detect that the performer 301 is located at the right place, for example in front of the one or more cameras. When the performer 301 moves away from the designated location, the motion receiver 155 of figure 1 receives the performer's location from the one or more motion sensors and transfers a message to the display device 350 to inform the performer to return to the designated location. In some cases, the performer 301 may move out of the scope of the camera. The video recording system 300 recognizes that the performer 301 is not in the correct location based on data collected by the motion sensing input, and transmits a message on the display device screen 355 informing the performer 301 to move to the correct location. In some cases, the video processor 145 displays on the display device 130 a preferred position for the performer to be in, for example that the performer should sit in a chair or stand slightly off camera.

Figure 4A shows a performer of a video on a display device without a virtual environment, according to some exemplary embodiments of the subject matter. The display device 400 comprises a display device 420, which shows a performer 405 in front of a camera (not shown). The display device 420 is located so the performer 405 may view what is being recorded during the recording of the video-take.
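The location check performed with the motion-sensor data, described above, can be sketched as a bounds test on the reported coordinates. The normalized frame geometry and the message text are assumptions for illustration only:

```python
def location_message(x, y, frame=(0.0, 1.0, 0.0, 1.0)):
    """Return a correction message if the performer's (x, y) position,
    as reported by the motion sensors, falls outside the camera frame.
    The normalized frame bounds (left, right, bottom, top) are an
    assumption for illustration; the patent does not define coordinates."""
    left, right, bottom, top = frame
    if left <= x <= right and bottom <= y <= top:
        return None  # performer is in the designated location; no message
    return "Please return to the designated location in front of the camera."

location_message(0.5, 0.5)  # -> None: nothing is displayed
location_message(1.4, 0.5)  # -> correction message shown on screen 355
```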
The display device 420 shows the performer 405 prior to a virtual environment being inserted into a video-take. For example, the performer 405 may be instructing viewers on how to play basketball and has designated the space in a room where the performer explains the rules of basketball. The bottom of the display device 400 comprises a teleprompter 410 for displaying text to the performer 405 while recording the video-take.

Figure 4B shows a performer of a video on a display screen in a virtual environment, according to some exemplary embodiments of the subject matter. The display device 400 shows the performer in a virtual environment 430. The virtual environment comprises a virtual basket 450, which the performer 405 may use to demonstrate how to throw a basketball 445. In some cases the video may comprise an additional character 460, for example an opponent basketball player. The video processor 145 of figure 1 designates the actions and dialogue of the additional character 460. The dialogue and the actions of the additional character 460 may be predetermined by the video processor 145 according to the designated virtual environment.
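The patent does not specify how the performer 405 is composited into the virtual environment 430; chroma keying is one common approach, sketched here on flat lists of RGB tuples purely for illustration:

```python
def composite(frame, background, key=(0, 255, 0)):
    """Replace key-colored pixels in `frame` with the corresponding
    `background` pixels -- a simple chroma-key sketch. Chroma keying is an
    assumed technique shown for illustration; the patent does not state
    how the performer is placed inside the virtual environment."""
    return [bg if px == key else px
            for px, bg in zip(frame, background)]

frame = [(0, 255, 0), (200, 180, 170), (0, 255, 0)]   # green, skin, green
court = [(90, 60, 30), (90, 60, 30), (90, 60, 30)]    # virtual-court pixels
composite(frame, court)
# -> [(90, 60, 30), (200, 180, 170), (90, 60, 30)]
```

A real implementation would run this per frame over a 2-D image with color tolerance; the one-line list version above only shows the substitution idea.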
While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the subject matter. In addition, many modifications may be made to adapt a particular situation or material to the teachings without departing from the essential scope thereof. Therefore, it is intended that the disclosed subject matter not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this subject matter, but only by the claims that follow.

Claims

1. A video recording system, comprising:
a textual script input configured to receive a textual script;
a recording collector configured to collect video data received from one or more video cameras;
a script unit for determining a predefined part of the textual script to be shown on a display device to a performer of a video, when recording the video based on the textual script; and
wherein, when recording the video based on the textual script, said display device further displays the performer inside the virtual environment.
2. The video recording system of claim 1, further comprising an editing unit configured to edit video content containing one or more video-takes collected from the one or more video cameras.
3. The video recording system of claim 1, further comprising a background audio sample database, wherein the background audio sample database comprises audio samples to be added as background audio into the video.
4. The video recording system of claim 1, further comprising a motion receiver configured to detect a location of the performer while collecting video data received from the one or more video cameras, enabling the video recording system to display a correction instruction to the performer when the performer is not in a required location.
5. The video recording system of claim 1, further comprising a video storage for storing the video and relevant media connected to the video.
6. The video recording system of claim 1, wherein the virtual environment database comprises one or more virtual environments.
7. The video recording system of claim 1, wherein the display device displays a virtual reenactment of the video.
8. The video recording system of claim 1, wherein:
the script unit recognizes that the textual script is in a language that is not English;
the script unit is enabled to translate the textual script to one or more languages; and
the display device displays the textual script in the one or more languages.
9. A method, comprising:
receiving a textual script;
designating a virtual environment from a virtual environment database;
recording one or more video-takes of the textual script, wherein a performer is viewing the performer in the virtual environment during the recording; and
displaying the textual script on a teleprompter while the one or more video-takes are recorded.
10. The method according to claim 9, further comprising designating a background audio sample from a background audio sample database.
11. The method of claim 9, further comprising determining scene characteristics.
12. The method according to claim 9, further comprising performing one or more live reviews of the virtual enactment.
13. The method according to claim 9, further comprising automatically searching for relevant media to insert into a video; and inserting relevant media into the video.
14. The method according to claim 9, further comprising displaying a demonstrative virtual enactment of the textual script.
15. The method according to claim 9, further comprising editing the video from one or more video-takes.
16. The method according to claim 9, wherein image lighting is digitally enhanced during the recording of the one or more video-takes.
17. The method according to claim 9, further comprising:
recognizing a performer's facial feature;
analyzing the performer's facial feature;
determining a standard for a performer's facial feature;
determining that the facial feature is out of a predefined standard;
determining a need for make-up; and
displaying a message to the performer to improve the performer's facial feature.
18. The method according to claim 9, wherein the facial feature comprises sweating, discoloration of skin, nervous tics, or a combination thereof.
19. The method according to claim 9, further comprising designating an avatar that appears in the video instead of the performer.
20. The method according to claim 9, further comprising inserting a guest performer into the video during the recording of the video.
21. A method, comprising:
receiving a command to stop recording a video;
stopping a video-take used for recording the video;
receiving a command to rewind the video;
rewinding the video-take to a designated time mark; and
resuming recording the video-take from the designated time mark.
PCT/IL2013/051018 2012-12-11 2013-12-11 A system and method for creating a video WO2014091484A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261735567P 2012-12-11 2012-12-11
US61/735,567 2012-12-11

Publications (1)

Publication Number Publication Date
WO2014091484A1 true WO2014091484A1 (en) 2014-06-19

Family

ID=50933837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2013/051018 WO2014091484A1 (en) 2012-12-11 2013-12-11 A system and method for creating a video

Country Status (1)

Country Link
WO (1) WO2014091484A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022142871A1 (en) * 2020-12-28 2022-07-07 北京达佳互联信息技术有限公司 Video recording method and apparatus

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020090123A1 (en) * 2000-12-21 2002-07-11 Roland Bazin Methods for enabling evaluation of typological characteristics of external body portion, and related devices
US20020186233A1 (en) * 1998-12-18 2002-12-12 Alex Holtz Real time video production system and method
US7184047B1 (en) * 1996-12-24 2007-02-27 Stephen James Crampton Method and apparatus for the generation of computer graphic representations of individuals
US20080075436A1 (en) * 2006-04-05 2008-03-27 Ryckman Lawrence G Studio booth configured to produce illusion that customer is photographed in different locale
US20080106614A1 (en) * 2004-04-16 2008-05-08 Nobukatsu Okuda Imaging Device and Imaging System
US20080239104A1 (en) * 2007-04-02 2008-10-02 Samsung Techwin Co., Ltd. Method and apparatus for providing composition information in digital image processing device
US20100031149A1 (en) * 2008-07-01 2010-02-04 Yoostar Entertainment Group, Inc. Content preparation systems and methods for interactive video systems




Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 13862711; country of ref document: EP; kind code of ref document: A1)
NENP Non-entry into the national phase (ref country code: DE)
122 Ep: pct application non-entry in european phase (ref document number: 13862711; country of ref document: EP; kind code of ref document: A1)