CN101563698A - Personalizing a video - Google Patents

Personalizing a video

Info

Publication number
CN101563698A
Authority
CN
China
Prior art keywords
video
new
image
actor
processing method
Prior art date
Legal status
Pending
Application number
CNA2006800341565A
Other languages
Chinese (zh)
Inventor
布莱克·森夫特纳
利兹·拉尔斯顿
迈尔斯·莱特伍德
托德·希夫利特
Current Assignee
Flixor Inc
Original Assignee
Flixor Inc
Priority date
Filing date
Publication date
Application filed by Flixor Inc
Publication of CN101563698A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

Processes and apparatus for personalizing video through partial image replacement are disclosed. Personalization may include partial or full replacement of the image of an actor. Personalization may also include insertion or replacement of an object, and full or partial replacement of the background and/or sound track. A video preparation process may be used to create a library of personalization-ready videos.

Description

Personalizing a Video
Notice of Copyrights and Trade Dress
A portion of the disclosure of this patent document contains material which is subject to copyright protection. This patent document may show and/or describe matter which is or may become trade dress of the owner. The copyright and trade dress owner has no objection to the facsimile reproduction by anyone of the patent disclosure as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright and trade dress rights whatsoever.
Related Application Information
This patent claims priority from the following applications, all of which are incorporated herein by reference: Application No. 60/717,852, entitled "Facial image replacement", filed September 16, 2005; Application No. 60/717,937, entitled "Customized product marketing images", filed September 16, 2005; and Application No. 60/717,938, entitled "Call and message notification", filed September 16, 2005.
Technical field
This disclosure relates to processes and apparatus for creating personalized video through partial image replacement.
Background
Digital image recording, storage, and synthesis are now widely used in television, motion pictures, and video games. A digital video is essentially a series of digital photographs, commonly called "frames", of a scene taken at periodic intervals. Digital video may be recorded with a digital camera, created by digitizing an analog video recording or a film recording with a film digitizer, created by rendering 2D and 3D computer graphics, or may even be a hybrid in which all of the foregoing elements and analog or digital composites are combined to produce the final digital video. To give the viewer the impression of smooth, continuous motion, digital or analog video images are generally composed of 25, 30, 60 or more frames per second. However, the number of frames per second should not be considered a limiting factor in identifying digital video; some video formats support frame rates as low as one frame every N seconds, and even variable frame rates where necessary, in order to achieve the effect of perceived motion while reducing the final stored size of the resulting digital video. Whatever the frame rate, each frame may be divided into a plurality of horizontal lines, and each line is typically divided into picture elements, commonly termed "pixels". U.S. standard broadcast video is recorded as 525 lines per frame, and HDTV is recorded as 1080 lines per frame. However, for purposes of this description, the term "digital video" has a broader meaning, referring simply to a series of images that, when viewed in sequence, depict a performance over time in one or more settings. The number of images, the rate at which the images are displayed, and the dimensions of the images are immaterial. The images may still be considered, in conventional fashion, to be composed of lines and pixels, although the number of lines and pixels per frame may be statistically resampled as needed throughout the steps of the processes disclosed herein to the accuracy required by each step.
Thus each frame of a digital video is composed of some total number of pixels, and each pixel is represented by some number of bits of information representing the brightness and color of a corresponding portion of the image. For all of the various ways in which digital video can be created, the result is in effect a series of images, represented as a series of frames composed of lines and pixels. There are many ways to represent digital video in bits and bytes, but all of them can in some sense be described in terms of frames, lines, and pixels.
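The frame/line/pixel structure described above maps naturally onto array data. The following sketch, which assumes NumPy and is not part of the patent text, shows one way such a clip might be represented and indexed; the dimensions and frame rate are arbitrary examples.

```python
import numpy as np

# A digital video clip as a sequence of frames; each frame is a grid of
# lines (rows) and pixels (columns), each pixel holding 8-bit R, G, B values.
FRAME_HEIGHT, FRAME_WIDTH, FPS = 1080, 1920, 30          # example dimensions
clip = [np.zeros((FRAME_HEIGHT, FRAME_WIDTH, 3), dtype=np.uint8)
        for _ in range(FPS * 2)]                          # two seconds of video

# Reading one pixel: line 540, pixel 960 of the first frame.
r, g, b = clip[0][540, 960]

# Resampling a frame to a coarser grid, as the text notes each processing
# step may do, is simple index striding here (nearest-neighbour decimation).
coarse = clip[0][::4, ::4]
```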
Digital video requires a display medium to present the frames sequentially. The display medium is typically electronic, such as a television, a computer and monitor, a cellular telephone, or a personal digital assistant (PDA). These devices receive or process digital video in the form of a file and display the frames to the user in sequence. Other, non-electronic display media are also possible ways for a user to experience digital video. Examples of such media are: 1) printed holograms of the kind that appear on credit/payment cards and collectible sports cards, 2) digital paper employing chemical and other non-electronic image encoding methods, and 3) simple printed flip books.
Description of the Drawings
Fig. 1 is a flow chart of a process for creating a personalized digital video.
Fig. 2 is an expansion of the process of Fig. 1.
Fig. 3 is a flow chart of optional steps in a process for creating a personalized video.
Fig. 4 is a flow chart of optional steps in a process for creating a personalized video.
Fig. 5 is a flow chart of optional steps in a process for creating a personalized video.
Fig. 6 is a flow chart of optional steps in a process for creating a personalized video.
Fig. 7 is a flow chart of optional steps in a process for creating a personalized video.
Fig. 8 is a flow chart of another process for providing personalized video.
Fig. 9 is a flow chart of a process for providing personalized advertising.
Fig. 10 is a block diagram of a computing device.
Fig. 11 is a block diagram of another computing device.
Detailed Description
Throughout this description, the embodiments and examples shown should be considered as exemplars, rather than limitations on the apparatus and methods disclosed or claimed.
Throughout this description, the terms "digital video clip", "video clip", "clip", and "digital video" all refer to a digital encoding of a series of images intended to be viewed in sequence. There is no implied limitation on the duration of a digital video or on the eventual medium on which it may be displayed. Examples of digital video include, but are not limited to, portions of current or classic motion pictures or television shows, entire motion pictures or television shows, advertisements, music videos, and specialty clips made expressly for personalization (for example, a clip that can be personalized to show the new actor together with a celebrity "friend"). The digital video may be recorded with a digital camera, digitized from an analog video camera or film recording, recovered from digital media such as a DVD, created by compositing methods or other processes employing any of the above, or created in other ways not described here.
The creation of personalized video combines several fields that, taken together, allow a video sequence to be altered so that individuals can replace a participant in the original video with themselves, their friends, their family members, or any real or imagined individual of whom they have an image. Because occluding objects within the frame, other persons in the video sequence, and the clothing and/or costume worn by the character the replaced actor portrays in the story may block the view of the replaced person's entire body, this replacement of a participant in the original video may require only, but is not limited to, replacing the face, head, and/or visible connecting skin shown in the original video. Depending on the content of the story depicted in the original video, replacement of a participant in the video may include other visible areas of their skin, such as hands, arms, legs, and so on.
In addition, the desired replacement can be taken to an extreme: essentially removing the original actor from the video sequence, removing their shadows, reflections, and visual effects on other objects in the scene, substituting a fully synthetic version of the replacement person (a "digital double"), and adding that person's own distinctive shadows, reflections, and other visual effects on other objects in the scene.
Depending on the quality of replacement desired, and on the effect that quality has on the story of the original video, the essential interpretation and in-context meaning of the replaced actor's role can be changed. For example, replacing a large, strong male in a video with a petite woman leaves the story essentially unchanged, but the interpretation of the story is changed dramatically, which can leave a strong impression. For such a change to occur, replacing only the face and head is not sufficient. In that case, the original actor is removed entirely, their key motions are retained in a secondary storage medium, and those motions then serve as the reference for animating and inserting a digital double of the petite woman.
Between the two extremes of replacing the face/head and connecting skin and substituting a complete digital double, there is an infinite range of degrees to which actor replacement can be carried out. Note that in all examples within this range, no changes need be made to the story or to the actor's principal actions in the original video.
Certain theater systems use stereopsis to provide the illusion of three-dimensional (3D) images. These systems present a separate 2D image or movie channel to each of the viewer's eyes. The two image channels may be presented on a common surface and separated at the viewer's eyes by special glasses with polarized or colored lenses. Other optical techniques may also be employed so that each image channel is presented only to the appropriate eye. Although the discussion in this disclosure relates generally to personalizing conventional (2D) video, the personalization techniques disclosed herein may also be applied to the two image channels of a stereoscopic 3D display system.
Personalized video can be provided in many possible forms, including but not limited to the following:
(a) A DRM-free (free of digital rights management) form that allows the video to be downloaded and freely traded. In advertising applications, because of the potential for product placement, advertisers may benefit from having personalized videos traded, bought, and sold, and displayed in as many places as possible.
(b) A DRM-enabled form that allows only a unique individual to download and play the video on a particular device, such as a personalized video purchased for playback on a specific device.
(c) A form playable on cellular telephones, computers, and similar communication and computing devices in 3gp, 3gpp, mv4, gif, or other public or proprietary digital video or digital image formats. In this example, the personalized video may simply be viewed, may be used as a video ring tone in place of an audio ring tone, or may be used for virtually any event notification arising in the context of using the device.
(d) A printed hologram of the kind that appears on payment/credit cards. In this example the personalized video no longer exists in a digital image format; it is converted into a series of holographic images and embedded in the hologram's image sequence. In this form the personalized video can be viewed without any electronic viewing device at all.
(e) Images and image sequences encoded electronically or chemically into digital paper media or potentially non-electronic paper media.
(f) Digital fabric media, in which LED, OLED, or other light-emitting and switching technologies are embedded in the fibers of fabrics otherwise used conventionally for clothing, furniture coverings, and textiles, so that images and animation can be embedded in, emitted from, or otherwise displayed on the fabric surface.
Some video formats allow logic to be embedded that is triggered by playback of the digital video. In some instances it may be desirable to embed such logic in a personalized video so that a counter is incremented each time the video is shown. Similarly, logic triggers may be embedded in the personalized video that fire when a particular product image, logo, or other image is displayed. The counter may be located on the Internet and/or on the device on which the personalized video is viewed. When the counter is located not on the Internet but on the viewing device, some means may be employed to transmit the counter value to parties interested in those values, for example the next time the device connects to the Internet to retrieve new videos or other information.
Description of the Processes
It must be remembered that the process steps applied to a video involve changing or manipulating the actual data stored in the digital video on a pixel-by-pixel and frame-by-frame basis. To avoid repeating this concept excessively throughout this description, the image processing steps are described here in terms of the action and the images involved. For example, a step described as "replacing an original object with a new object" does not actually operate on the object itself, but on the image of the object depicted in the video. The act of "replacing" may involve identifying, in each frame of the video, all of the pixels that show the image of the original object to be replaced, and then manipulating the digital data to change those pixels in two steps: 1) overwriting the pixels showing the original object with pixels showing the background behind the original object, and 2) overwriting the image with the new background with the image of the new object. The data may also be changed in a single step, overwriting the original data directly with new data, when the shapes of the original and replacement objects allow; otherwise the two-step process is used. The identifying and changing steps are then repeated for each frame of the video.
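As a concrete illustration of the two-step per-frame replacement described above, the sketch below (an assumption of this write-up, not text from the patent) composites a background plate over the original object's pixels and then composites the new object's image; `orig_mask` and `new_mask` are hypothetical Boolean masks marking where the original and new objects appear.

```python
import numpy as np

def replace_object_in_frame(frame, orig_mask, background_plate,
                            new_object, new_mask):
    """Two-step replacement on a single H x W x 3 frame.

    frame            -- original frame pixels (uint8)
    orig_mask        -- H x W bool, True where the original object appears
    background_plate -- H x W x 3 image of the scene behind the object
    new_object       -- H x W x 3 image of the new object, already positioned
                        and scaled to the frame
    new_mask         -- H x W bool, True where the new object should appear
    """
    out = frame.copy()
    # Step 1: overwrite the original object's pixels with the background.
    out[orig_mask] = background_plate[orig_mask]
    # Step 2: overwrite with the new object's image where it appears.
    out[new_mask] = new_object[new_mask]
    return out

# The same call would be repeated for every frame of the clip:
# personalized = [replace_object_in_frame(f, om, bg, no, nm)
#                 for f, om, bg, no, nm in zip(frames, orig_masks,
#                                              plates, new_objs, new_masks)]
```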
The processes will first be described using an illustrative case in which a video is personalized by replacing the face of one of the original actors in the video with the image of a new actor's face. Within this description, the terms "face" and "facial" should be interpreted to include the ears, neck, and other visible adjacent skin areas unless otherwise stated. The same processes can be used to replace larger portions of an original actor with the corresponding portions of a new actor, up to and including replacement of the entire body. The same basic processes can be used within the limits of feasibility, although the complexity, time, and cost of the processing may increase as the portion of the video being replaced increases. Similarly, the same basic processes can be applied to multiple original actors in a video, resulting in a personalized video depicting multiple new actors.
Fig. 1 is a flow chart of a process for creating a personalized video by replacing at least a portion of the facial image of one of the original actors in a video with the image of a new actor's face. The new actor may be the individual who wants the personalized video, a friend or family member, or any other real or imagined individual for whom at least one 2D image is available.
The process shown in Fig. 1 is divided into an actor modeling process 100, a video preparation process 200, and a personalization process 300. Note that processes 100 and 200 are independent of each other. The personalization process 300 requires both a prepared video (the result of process 200) and at least one new actor model (the result of process 100). Process 200 must be performed for any particular video. For any specific personalized video, the result of process 200 is paired with at least one result of process 100, and the two are combined by process 300 to create the personalized video. Process 200 only needs to be performed once for each video. Process 100 only needs to be performed once for each new actor. Thus, once a video has been prepared by process 200, it can be paired with any number of new actor models to create personalized versions of that video featuring those actors. Similarly, once an actor model has been created by process 100, it can be paired with any number of prepared videos to create different personalized videos starring that actor.
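The independence of processes 100, 200, and 300 amounts to a many-to-many pairing: each prepared video can be combined with each actor model. The sketch below illustrates that structure only; the class and field names are this write-up's assumptions, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class PreparedVideo:      # result of video preparation process 200, done once per video
    video_id: str
    frames: list          # prepared frames (original actor removed, background filled)
    track_data: list      # per-frame position/orientation/expression data

@dataclass
class ActorModel:         # result of actor modeling process 100, done once per actor
    name: str
    mesh: object          # 3D head model
    profile: dict         # optional demographic / personal information

def personalize(video: PreparedVideo, actor: ActorModel) -> str:
    """Process 300: pair one prepared video with one actor model."""
    # ... insert actor, match skin, recreate lighting (steps 320-340) ...
    return f"{video.video_id} personalized for {actor.name}"

# One library of each suffices to generate every combination:
# for v in video_library:
#     for a in actor_models:
#         personalize(v, a)
```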
The video preparation process 200 and the personalization process 300 can be performed almost in parallel, with the restriction that the video preparation process for each frame of the video must be completed before the personalization process is applied to that frame. However, process 200 may be a manual, labor-intensive process that takes a long period of time to complete. In practice, process 200 may need to be finished before process 300 can begin.
In Fig. 1 and subsequent figures, reference designators between 101 and 199 indicate process steps within the actor modeling process 100. In addition, letter suffixes will be appended to reference designator 100 (100A, 100B, etc.) to indicate optional extensions of the actor modeling process 100. The video preparation process 200 and the personalization process 300 follow similar conventions.
The actor modeling process 100 accepts one or more two-dimensional (2D) digital images of the new actor, along with related supporting information, and creates, at step 110, a digital model of the new actor consisting of a three-dimensional model and, optionally, a demographic profile and other personal information describing the new actor. The preferred 2D images primarily capture the new actor's face, the top and bottom of the head, both ears, and part of the neck, with both eyes visible and with no more than 30 degrees of rotation relative to the camera. When the rotation relative to the camera exceeds 30 degrees, portions of the face or head may be occluded, in which case statistical information may be used to supply what cannot be recovered by analyzing the photographic image. Techniques for creating 3D models from 2D images are known, and are a branch of the computer graphics field equally applicable to security systems and face recognition technology. The minimum related supporting information is simply a name for the resulting new actor model. Additional related supporting information may include a demographic profile and/or other personal information describing the new actor. This information may be obtained simply by asking the user for it, and/or determined by means of an information subscription service, and/or obtained by observing, tracking, and retaining information about the user's activity while using a personal media service.
The video preparation process 200 begins at step 210, in which the position, orientation, and expression of the original actor are identified and tracked. This step develops and stores additional data for each frame of the video. The data may include the position and relative size of the original actor's face within the video frame, in the coordinate space of the real or simulated camera observing the scene; the actor's facial expression quantified against some standard set; and the orientation, or relative rotation and tilt, of the original actor's head. Face position tracking and orientation estimation may be performed by a digital artist with the assistance of automated image processing tools. The expression of the original actor may be quantified by morphing or transforming a reference 3D model of the original actor's head, or of a similar head, to match the expression in the video image. A similar transform can subsequently be applied at step 320 to the 3D model of the new actor's head so that the new actor's image matches the expression of the original actor.
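The per-frame tracking data developed at step 210 can be thought of as a small record per frame. The sketch below is one possible layout; the field names and values are illustrative assumptions, not a format defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FaceTrack:
    """Data saved for one frame by step 210 (names are illustrative)."""
    frame_index: int
    center_xy: Tuple[int, int]    # face position in frame coordinates
    relative_size: float          # face size relative to the frame
    rotation_deg: float           # head rotation about the vertical axis
    tilt_deg: float               # head tilt
    expression: List[float] = field(default_factory=list)  # weights against a
                                                            # standard expression set

# Step 210 produces one record per frame; step 320 later reads the same
# records to rotate, morph, and scale the new actor's 3D model to match.
track = [FaceTrack(i, (960, 400), 0.18, 12.0, -3.0, [0.2, 0.7])
         for i in range(3)]
```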
Because of natural variability in the size of ears, noses, and other facial features, the new actor's face may not exactly cover the face of the original actor. In many cases, simply placing the image of the new actor over the existing image may leave some residual pixels of the original actor's face still visible. Residual pixels may distort the image of the new actor's face, and can be particularly objectionable if there is a marked difference between the skin colors of the original actor and the new actor. Residual pixels could be detected and eliminated as the new actor's image is inserted into each frame of the video. However, because the number and position of residual pixels depend on the features and physical size of the new actor, such processing might have to be repeated each time the video is personalized for a different new actor.
To guarantee that the facial image of the original actor is removed completely, with no possibility of residual pixels, the video preparation process 200 may continue at step 220, in which at least a key portion of the image of the original actor is removed and replaced with an image continuous with the background behind that actor. In the case of a video created expressly for personalization, the background image may be provided simply by recording the scene without the original actor present. In the case of an existing video, the background within the image area occupied by the original actor's face may be continued from the surrounding scene by a digital artist with the assistance of automated video processing tools. Removing the facial image of the original actor and backfilling with a continuation of the background scene prepares a video that can be used with many different new actors without additional processing to remove residual pixels.
The key portion of the original actor replaced at step 220 may include the face and adjacent skin areas. Optionally, the key portion may include the hair, the clothing, or additional portions up to and including the entire actor. The actor's shadows and reflections may also be removed and replaced if necessary to achieve a convincing illusion. Commonly the actor's shadows are diffuse and any reflecting surfaces are blurred enough that replacement is not needed. However, where there are sharp shadows or highly polished reflective surfaces, the shadows or reflections will need to be replaced at step 220. The result of step 220 becomes the background image used by process 300. Step 220 creates the background image onto which all further personalized imagery is placed.
The video may include visible skin areas of the original actor that will not be replaced by the background image or by the new actor, such as one or both hands or arms. At step 230, the visible non-replaced skin areas of the original actor may be identified by a digital artist with the assistance of automated image processing tools. Non-replaced skin areas may be identified simply by locating pixels having colors appropriate to the original actor's skin. Data defining the position and extent of the non-replaced skin areas may be developed and saved for each frame of the video. Step 230 may create another series of skin-only frames having a matte background, allowing the set of skin-only frames to be composited onto the result of step 220. Steps 220 and 230, and steps 320 and 330, may be performed in the opposite order from that shown in Fig. 1.
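Locating non-replaced skin "simply by locating pixels having colors appropriate to the original actor's skin", as step 230 describes, can be sketched as a per-pixel color-distance test that produces a matte. The threshold value and the RGB color space below are assumptions made for illustration; a production tool would be more sophisticated.

```python
import numpy as np

def skin_matte(frame, skin_color, tolerance=40.0):
    """Return an H x W matte (0.0-1.0) marking pixels near the actor's skin color.

    frame      -- H x W x 3 uint8 image
    skin_color -- (r, g, b) reference color sampled from the original actor
    tolerance  -- maximum Euclidean RGB distance counted as 'skin' (assumed value)
    """
    diff = frame.astype(np.float32) - np.asarray(skin_color, dtype=np.float32)
    distance = np.sqrt((diff ** 2).sum(axis=-1))
    return (distance < tolerance).astype(np.float32)

# The skin-only frames from step 230 can later be composited onto the
# backfilled frames produced by step 220:
def composite_skin(background_frame, skin_frame, matte):
    m = matte[..., None]                      # broadcast over the color channels
    return (skin_frame * m + background_frame * (1.0 - m)).astype(np.uint8)
```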
Each frame of a video is a 2D image of a 3D scene. Illumination, shading, shadows, and reflections are important visual cues that convey the depth of the scene to the viewer. Any substituted portion of the image that does not recreate the appropriate illumination, shading, shadow, and reflection effects may be immediately recognizable as wrong or fake.
Thus, at step 240, the video preparation process may continue by identifying and tracking the illumination, shading, shadows, and reflections that exist in the scene due to the presence of the original actor. To recreate these effects accurately in the substituted portions of the image, data must be developed or estimated defining at least one of the following parameters: the position of the camera with respect to the scene; the number, type, intensity, color, and position of the light sources with respect to the scene and the camera; the relative depths of the objects in the scene; and the nature, relative position, and angle of any visible reflective surfaces. In the case of a video recorded expressly for personalization, much of this data can simply be measured and documented when the video is created. In the case of an existing video, the data may be estimated from the images by a digital artist with the assistance of automated video processing tools.
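The lighting and camera parameters listed for step 240 are again data that can be documented in a small per-scene (or per-frame) record. The layout below is only a suggested schema for this write-up, not a format defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LightSource:
    kind: str                              # e.g. "point", "area", "ambient"
    intensity: float
    color: Tuple[float, float, float]
    position: Tuple[float, float, float]   # relative to the scene and camera

@dataclass
class SceneLightingRecord:
    """Parameters identified at step 240 for recreating effects at step 340."""
    camera_position: Tuple[float, float, float]
    lights: List[LightSource] = field(default_factory=list)
    object_depths: dict = field(default_factory=dict)        # object -> relative depth
    reflective_surfaces: list = field(default_factory=list)  # nature, position, angle

record = SceneLightingRecord(
    camera_position=(0.0, 1.6, 4.0),
    lights=[LightSource("point", 1.0, (1.0, 0.95, 0.9), (2.0, 3.0, 1.0))],
)
```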
Within the video preparation process 200, the digital artists performing steps 210, 220, 230, and 240 may work with multiple copies of the digital video images, completing the steps in any order. Note that the video preparation process 200 does not require any information or data about a new actor. Thus, if the data developed at steps 210, 220, 230, and 240 is stored, the video preparation process only needs to be performed once for each video. The data is stored as a series of files that accompany the video.
The personalization process begins at step 320, in which the new actor's image is inserted into the video. The process of substituting the new actor's image is shown in additional detail in Fig. 2. At step 322, the 3D model of the new actor may be transformed to match the orientation and expression of the original actor as defined by the data from step 210 of the video preparation process. This transformation may involve any sequence of rotations about several axes and morphing of the facial expression. After the 3D model has been rotated and morphed, a 2D image is developed from the 3D model at step 324 and scaled to the appropriate size. The transformed and scaled 2D image of the new actor is then inserted into the video at step 326 such that the new actor's position, orientation, and expression substantially match the position, orientation, and expression of the original actor that was previously removed. In this context, a "substantial match" is achieved when the personalized video presents a convincing illusion that the new actor was actually present when the video was created.
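Steps 322 through 326 are an orchestration of transform, render, scale, and composite. The sketch below shows that flow only, reusing the hypothetical per-frame record from earlier; the model transform and the renderer are passed in as callables because their internals (3D rotation, expression morphing, rasterization) are outside the scope of this illustration and are not described by the patent at this level.

```python
import numpy as np

def insert_new_actor(background_frame, track, actor_model,
                     transform_model, render_to_2d):
    """Sketch of steps 322-326 for one frame.

    track           -- per-frame data from step 210 (position, orientation, expression)
    transform_model -- callable(model, rotation, tilt, expression) -> posed model
    render_to_2d    -- callable(model, relative_size) -> (rgb image, alpha matte)
    Both callables are placeholders for the 3D tooling a real system would use.
    """
    # Step 322: rotate and morph the 3D model to match the original actor.
    posed = transform_model(actor_model, track.rotation_deg,
                            track.tilt_deg, track.expression)
    # Step 324: render a 2D image at the tracked size.
    face_rgb, face_alpha = render_to_2d(posed, track.relative_size)
    # Step 326: composite onto the backfilled background at the tracked position.
    out = background_frame.astype(np.float32)
    h, w = face_rgb.shape[:2]
    y = int(track.center_xy[1] - h // 2)
    x = int(track.center_xy[0] - w // 2)
    a = face_alpha[..., None]
    out[y:y + h, x:x + w] = face_rgb * a + out[y:y + h, x:x + w] * (1.0 - a)
    return out.astype(np.uint8)
```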
Referring again to Fig. 1, at step 330, the visible non-replaced skin areas of the original actor are changed to match the skin appearance of the new actor. Skin appearance may include factors such as color, tone, and texture. The change may be made such that, after the change, the average skin color of the non-replaced areas is the same as the average skin color of the new actor's facial area, while retaining the variations present in the original image.
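The mean-matching rule of step 330 (shift the non-replaced skin so its average matches the new actor's facial average while keeping the original variation) reduces to a per-channel offset. A minimal sketch, with the RGB color space and the masks assumed:

```python
import numpy as np

def match_skin_appearance(frame, skin_mask, new_face_pixels):
    """Shift the masked skin pixels so their mean color equals the mean color
    of the new actor's face, preserving the original pixel-to-pixel variation.

    frame           -- H x W x 3 uint8 frame
    skin_mask       -- H x W bool mask of the non-replaced skin (from step 230)
    new_face_pixels -- N x 3 array of pixels sampled from the inserted face
    """
    out = frame.astype(np.float32)
    old_mean = out[skin_mask].mean(axis=0)
    new_mean = new_face_pixels.astype(np.float32).mean(axis=0)
    out[skin_mask] += new_mean - old_mean       # constant offset keeps variation
    return np.clip(out, 0, 255).astype(np.uint8)
```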
At step 340, the illumination, shading, shadows, and reflections present in the original video are recreated. This processing may include recreating illumination highlights and shading on the new actor or other replaced image areas, and creating or altering any shadows or reflections of the new actor. Step 340 is therefore preferably performed as the last step of the personalization process.
Fig. 3 is a flow chart of an optional process 100A that may be used to create a blended new actor model that is a composite of the parameters defining several new actors. Each new actor model is composed of 3D geometry, a demographic profile, and additional personal information such as age, gender, body type, and so on. Each new actor model is saved in the same data format as every other new actor model. This allows a user to select any number of new actors and allows N-dimensional, user-controlled transformations and morphs to be performed. The blending and parametric morphing process 110A allows the user to select any number of new actors for processing and to create a new blended actor model that is the result of combining and/or transforming any and/or all of the parameters defining those new actors. This allows individuals to select a new actor depicting themselves or a parent of the same gender as input and, by morphing the 3D geometry and the age parameter, to create an older version of themselves or a younger version of their same-gender parent. Similarly, the process can be used to create an imagined offspring of themselves and a celebrity, or other possible blended combinations.
It may be desirable to add the image of an object to a personalized video, or to replace the image of an existing object with a different object. For example, a piece of sports equipment might be inserted to further personalize a video for an avid sports fan. Alternatively, objects may be placed in or substituted into a personalized video to provide targeted advertising. Similarly, objects may be substituted to celebrate a specific holiday, season, or event. The objects added to or substituted into the video may be selected based on personal information about the new actor, or based on other information related or unrelated to the new actor.
Fig. 4 is a flow chart of optional processes 200A and 300A that may be integrated into the video preparation process 200 and the personalization process 300, respectively, to place a new object into the video. At step 250, an object placement location suitable for placing an object is identified and tracked through each frame of the video. For example, the object placement location may be an open space on a table or on the floor. Within a particular video, one such location, no such locations, or a plurality of such locations may be identified and tracked. Tracking the object placement location may be relatively routine if the location is static with respect to the real or simulated camera viewing the scene, and if no actors or other scene elements move between the object placement location and the camera. Tracking the object placement location can be more complex if the camera moves with respect to the scene, or if the object placement location itself moves with respect to the scene, for example if it is held by an actor.
The image of the new object is added to the scene at step 350. The processing of step 350 is similar to that previously described for step 320, except that the representation of the new object does not need to be morphed. The 3D model of the new object is rotated as needed to match the camera angle, and the 3D model of the new object is scaled to the appropriate size. A 2D image is then developed from the rotated and scaled 3D model and inserted into the video image.
Steps 240A and 340A are essentially continuations and expansions of steps 240 and 340, except that steps 240A and 340A consider the effects of illumination, shading, and shadows on the image of the new object, as well as the shadows and reflections of the new object. At step 240A, data is developed defining at least one of the following parameters: the position of the camera with respect to the new object; the number, type, intensity, and color of the light sources and their positions with respect to the new object and the camera; the relative depth of the new object within the scene; and the nature, relative position, and angle of any visible shadow-receiving and/or reflective surfaces. In the case of a video created expressly for personalization, much of this data can simply be measured and documented when the video is created. In the case of an existing video, the data may be estimated from the images by a digital artist with the assistance of automated video processing tools.
At step 340A, illumination, shading, shadows, and reflections consistent with the original video are added. This processing may include creating the illumination and shading effects on the new object, and creating or altering any shadows or reflections of the new object. Step 340A may be performed with step 340 as the last step of the personalization process.
Fig. 5 is a flow chart of optional processes 200B and 300B that may be incorporated into the video preparation process 200 and the personalization process 300, respectively, to replace an original object in the video with a replacement object. At step 255, the original object is identified, and the position and orientation of the original object are tracked, through each frame of the video. For example, the original object may be a beverage can or a cereal box on a table. Within a particular video, one original object, no original objects, or a plurality of original objects may be identified and tracked. Tracking an original object may be relatively routine if the position of the original object is static with respect to the real or simulated camera viewing the scene, and if no actors or other scene elements move between the original object and the camera. Tracking the original object can be more complex if the camera moves with respect to the original object, or if the original object itself moves with respect to the scene.
Replacing the original object with a smaller replacement object may result in residual pixels, as previously discussed for replacing an actor's face. To avoid residual pixels, the video preparation process 200B may continue at step 260, in which at least a portion of the image of the original object is removed and replaced with an image continuous with the background scene behind the original object. In the case of a video created expressly for personalization, the background image may be provided simply by creating a version of the scene without the original object present. In the case of an existing video, the background scene may be continued from the surrounding scene by a digital artist with the assistance of automated video processing tools. Removing the image of the original object and backfilling with the background prepares a video that can be used with many different replacement objects, without additional processing to remove residual pixels. The processing of step 260 may not be needed in specific cases, for example when one standard 12-ounce beverage can is replaced with a different standard beverage can.
At step 360, the image of the replacement object is added to the scene using essentially the same processing described for step 350. The 3D model of the replacement object may be rotated as needed to match the orientation of the original object, and may be scaled to the appropriate size. A 2D image may then be developed from the rotated and scaled 3D model and inserted into the video image.
Steps 240B and 340B are essentially continuations and expansions of steps 240 and 340, except that steps 240B and 340B consider the effects of illumination, shading, and shadows on the image of the new object, as well as the shadows and reflections of the new object. At step 240B, data may be developed defining at least one of the following parameters: the position of the camera with respect to the new object; the number, type, intensity, and color of the light sources and their positions with respect to the new object and the camera; the relative depth of the new object within the scene; and the nature, relative position, and angle of any visible reflective surfaces. In the case of a video created expressly for personalization, much of this data can simply be measured and documented when the video is created. In the case of an existing video, the data may be estimated from the images by a digital artist with the assistance of automated video processing tools.
At step 340B, illumination, shading, shadows, and reflections consistent with the original video are added. This processing may include creating the shading cast onto the image of the new object, and creating or altering any shadows or reflections of the new object. Step 340B may be performed with step 340 as the last step of the personalization process.
It may be desirable to replace the background of the scene, or the "set" where the video takes place, with a different background that depicts a location relevant to the new actor's own locale, a location more closely matched to the new actor's demographic profile, or some other setting. For example, the original video may take place in a restaurant, but after personalization the restaurant background may be replaced with a similar restaurant that includes the signage and identifying features of a particular restaurant and/or restaurant chain, or even with a particular restaurant located near the new actor's current location. Similarly, it may be desirable to replace the background scene with a scene closely related to a new object being inserted into or substituted into the video.
Fig. 6 is a flow chart of optional processes 200C and 300C that may be incorporated into the video preparation process 200 and the personalization process 300, respectively, to replace at least a portion of the original background scene with a new background scene. At step 270, the video images may be divided into separate foreground and background scene areas. The background scene is generally the portion of the image furthest from the camera, and may often be a flat surface or backdrop. The foreground image area is generally anything in front of the background plane, and may include one or more actors, any objects that may be replaced, and/or any locations where new objects may be inserted into the image. For a video created expressly for personalization, foreground/background separation may be achieved by recording the background scene with the actors and foreground objects removed, or by recording the scene against a uniform "green screen" background using known techniques that allow a background set and environment to be inserted after the scene is recorded. In the case of an existing video, the background and foreground image areas may be separated by a digital artist with the assistance of automated video processing tools.
Step 265, in which the camera position is determined and recorded, may occur before or after step 270. For a video created expressly for personalization, the scene may be recorded with camera motion under computer control so that the focal length and position of the camera are known for each frame. Such methods are known and are used to integrate computer graphics into video recordings. In the case of a video created for personalization by means of a 3D animation system, no physical "recording" of objects takes place, but the focal length and position of the imaginary digital camera are similarly retained so that the resulting digital video can be processed in the same manner as recorded video. In the case of an existing video, computer vision analysis may be used to recover the position of the camera as it originally viewed the scene.
At step 370, at least a portion of the original background scene may be replaced with the new background scene. The new background scene must be placed "behind" the foreground image areas, and must be placed beneath any shadows cast by the foreground actors and objects.
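Placing the new background "behind" the foreground and beneath any cast shadows can be sketched as a layered composite: the new backdrop where the background mask applies, the original foreground kept, and the foreground's shadows re-applied as a darkening layer. The multiplicative shadow treatment and the masks below are assumptions made for illustration.

```python
import numpy as np

def replace_background(frame, background_mask, new_background, shadow_strength):
    """Sketch of step 370 for one frame.

    frame           -- H x W x 3 uint8 original frame
    background_mask -- H x W bool, True where the original background shows
                       (from the foreground/background split of step 270)
    new_background  -- H x W x 3 uint8 replacement backdrop, camera-matched
                       using the positions recovered at step 265
    shadow_strength -- H x W float in [0, 1], how strongly foreground shadows
                       darken each background pixel (0 = no shadow)
    """
    out = frame.astype(np.float32)
    bg = new_background.astype(np.float32)
    # Darken the new backdrop where the foreground casts shadows...
    shadowed_bg = bg * (1.0 - shadow_strength[..., None])
    # ...then place it behind the untouched foreground.
    out[background_mask] = shadowed_bg[background_mask]
    return out.astype(np.uint8)
```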
After the image of an original actor has been replaced, it may also be desirable to modify the new actor's dialog, or to replace it with dialog in a voice more characteristic of the replacement person, to better match the new actor's image. Replacing the dialog can be as simple as recording the new actor speaking the dialog in synchronization with the video. However, it may be desirable to modify the original dialog to sound like the new actor's voice, without any possibility of the new actor changing the wording or content of the dialog. In addition, it may be desirable to modify or replace non-dialog background audio elements with replacement elements that better match the new actor's environment or location. For example, if the new actor is located in Britain, it may be appropriate to replace the sound of an American siren with that of a British siren. At least some background audio elements may be replaced so that an audio background characteristic of the original is replaced with an audio background appropriate to the new actor.
Fig. 7 is a flow chart of optional processes 100B, 200D, and 300D that may be incorporated into the actor modeling process 100, the video preparation process 200, and the personalization process 300, respectively, to modify or replace the original actor's dialog or the background audio elements. At steps 280 and 290, the video sound track is analyzed to isolate the original actor's dialog and the separate background audio elements for replacement or modification. A digital artist, a software process, or a combination of both may examine the sound track of the original video and identify the individual tracks and sound elements that make up the sound track.
At step 180, a speech sample of the new actor is received and analyzed using known techniques to extract at least one key attribute characterizing the new actor's voice. The key attribute may be pitch, harmonic content, or another attribute. At step 380, the dialog of the original actor is transformed to match at least one key attribute of the new actor's voice, so that the transformed dialog sounds as if it were spoken by the replacement person.
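One of the "key attributes" named above is pitch. The sketch below estimates an average fundamental frequency by autocorrelation for both the original dialog and the new actor's sample, and derives the shift ratio between them; an actual step 380 would apply that ratio with a pitch-shifting vocoder rather than the naive estimator shown. All parameter values are assumptions.

```python
import numpy as np

def average_pitch_hz(signal, sample_rate, fmin=60.0, fmax=400.0):
    """Rough fundamental-frequency estimate of a mono signal via autocorrelation."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo = int(sample_rate / fmax)            # shortest lag considered
    hi = int(sample_rate / fmin)            # longest lag considered
    best_lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / best_lag

def pitch_shift_ratio(original_dialog, new_actor_sample, sample_rate=16000):
    """Ratio by which the original actor's dialog pitch would be scaled
    so that it matches the new actor's voice (applied at step 380)."""
    return (average_pitch_hz(new_actor_sample, sample_rate) /
            average_pitch_hz(original_dialog, sample_rate))
```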
The background audio elements isolated at step 290 may be modified or replaced at step 390. Additional audio elements may also be added at step 390. The dialog modification process (steps 180, 280, 380) and the background audio modification process (steps 290 and 390) are relatively independent, and either may be done without the other.
As previously mentioned, replacement of an original actor can be taken to the extreme of removing the original actor completely from the original video, retaining their key motions, and substituting a complete digital reconstruction of the new actor in the original actor's place, with the necessary frame-to-frame body positions, facial expressions, and environmental lighting and shading effects on both the inserted person's form and the reconstructed scene. In this case, motion information about the new actor, such as reference video or 3D motion capture data, may be collected so that the image of the new actor substituted into the video exhibits the new actor's characteristic expressions, walk, run, stance, or other personal traits.
Fig. 8 is a flow chart of a process 400 for creating and delivering personalized videos. The video preparation process 200, including the previously described steps 210, 220, 230, and 240, and the optional steps 250, 240A, 255, 260, 240B, 265, 270, 280, and/or 290, may be completed without any foreknowledge of the new actor image or product image that will be substituted or inserted into the video. An original digital video 455 may be obtained from a video provider 450. The original digital video 455 may be delivered to the video preparation process 200 on a digital storage medium such as a compact disc or magnetic disk, or may be delivered by means of a network such as the Internet or a local area network. The original digital video 455 may be processed by the video preparation process 200, and the resulting prepared digital video 465 may be stored in a video library 470 containing at least one video that is ready for personalization.
Similarly, the actor modeling process 100, including step 110 and optional steps 120, 130, and/or 180, may be completed without knowledge of the video into which the new actor's image will be inserted. The actor modeling process 100 receives and processes 2D digital images and other information 425 to produce an actor model 435. The 2D digital images 425 may be created by means of a digital image recording device 420, such as a digital camera, a digital video camera, or a camera-equipped cellular telephone. The 2D digital images 425 may also be obtained by scanning conventional photographs. The 2D digital images 425 may be delivered to the actor modeling process 100 on a digital storage medium such as a compact disc or magnetic disk, or may be delivered by means of a network such as the Internet or a local area network. The 2D digital images 425 may be accompanied by a name or identifier that will later be used as a reference to the images in personalization requests. The 2D digital images 425 may be accompanied by additional optional information, including but not limited to the gender, height, weight, age, general body type, and/or other physical characteristics of the individual shown in the images; the individual's general location, such as their postal code, country of residence, nationality, or similar information; and/or an audio sample of the individual speaking at random or speaking a specific series of words.
The actor model may be passed directly to the personalization process 300, or may be stored in an actor model library 440.
A requestor 410 of a personalized video sends a request 415 to the personalization process. The requestor 410 may or may not be the new actor whose image will be substituted into the video, the requestor 410 may or may not be the party who receives the delivered personalized video 490, and the requestor need not be a human user but may be some other software process or other process not otherwise specified. The request 415 may be delivered via the Internet or some other network, or may be delivered by means such as facsimile, telephone, or mail. The request may identify a particular video to be retrieved from the video library 470. The request may identify an actor model to be retrieved from the actor model library 440. The request may include 2D digital images 425, in which case the actor modeling process 100 is performed on those images before the personalization process 300. The personalization process 300 retrieves the selected prepared digital video and 3D actor model and performs the requested personalization. The completed personalized video 490 may be delivered to the requestor 410 or to some other party by means of a network such as the Internet, or may be delivered on a storage medium such as a compact disc or digital video disc.
The personalization process 300 may include optional personalization steps, including creating a new actor model that is a blend and/or age transformation, replacing or adding one or more objects, replacing at least a portion of the background scene, modifying the dialog, and/or modifying or adding background sound elements. The optional personalization steps may be performed in response to requests from the requestor 410 or from another party such as an advertiser, or may be selected automatically based on personal information about the requestor or the selected new actor.
The process 400 for creating and delivering personalized videos may be implemented as one or more web site interfaces on the Internet. These web site interfaces may be accessible via computers, cellular telephones, PDAs, or any other current or future device with web browsing capability. The process 400 may be part of an online store, club, or other community that allows individual members to create, view, purchase, or receive personalized videos for entertainment, reference, or education. The process 400 may be at least a portion of an online fundraising site established to solicit and publicize donations to charitable organizations and/or political campaigns, with personalized videos offered for download and/or viewing as an incentive.
Fig. 9 is a flow chart of another process 500 for creating personalized videos. Process 500 is similar to the process 400 described in Fig. 8, with the addition of a 3D product modeling process 510, a product model library 520, and an advertising strategy process 530. Personal information 540 about the new actor or the requestor is provided to the advertising strategy process 530, which decides which one or more products from the product model library 520 will be inserted or substituted into the personalized video. The process 500 for creating and delivering personalized videos may be implemented as one or more web site interfaces on the Internet. These web site interfaces may be accessible via computers, cellular telephones, PDAs, or any other current or future device with web browsing capability. The process 500 may be part of an online advertising campaign that, for example, provides potential customers with videos of themselves using the advertiser's product. Similarly, the process 500 may be inserted transparently into an online advertising campaign so that individuals browsing Internet web sites receive personalized video advertisements without specifically requesting them, and/or so that individuals requesting on-demand video through a cellular telephone, cable set-top box, or other on-demand entertainment device receive personalized video advertisements within their on-demand video requests.
Description of the Apparatus
Fig. 10 shows, in block diagram form, a computing device 600 for creating personalized videos. The computing device 600 may include a processor 610 in communication with memory 620 and a storage medium 630. The storage medium 630 may hold instructions that, when executed, cause the processor 610 to perform the processes necessary to create a personalized video. The computing device 600 may include an interface to a network 640, such as the Internet or a local area network or both. The computing device 600 may receive 2D digital images and other information, and may deliver personalized videos, via the network 640. The computing device 600 may interface, via the network 640, with a requestor 650 and a digital image source 660 by way of a remote personal computer 670 or other network-enabled device. The computing device 600 may interface with a video library 680 by means of the network 640 or a second interface. It should be understood that the network 640, the computer 670, the requestor 650, the digital imaging device 660, and the video library 680 are not part of the computing device 600.
The computing device 600 may be divided between two or more physical units, including one or more of the following: a web server interfacing with the network 640; a file server interfacing with the video library 680 and the actor model library or product model library, if present; and a dedicated video/graphics processing computer to perform at least a portion of the personalized video creation processes described previously. If the device 600 is divided into multiple physical units, each physical unit may include processor 610, memory 620, and storage medium 630 components. The processes and apparatus described herein may also be configured and implemented with more or fewer units, modules, or other software, hardware, and data structures.
Fig. 11 shows, in block diagram form, another computing device 700 for creating personalized videos. The computing device 700 may include a processor 710 in communication with memory 720 and a storage medium 730. The storage medium 730 may hold instructions that, when executed, cause the processor 710 to perform the processes necessary to create a personalized video. The computing device 700 may include an interface to a requestor 650, such as a keyboard, mouse, or other human interface device. The computing device 700 may also have an interface to a digital imaging device 660, and may receive 2D digital images from the imaging device 660 via this interface. The computing device 700 may include an interface to a network 740, such as the Internet or a local area network or both. The computing device 700 may receive prepared, personalizable digital video from a remote video library, optionally by way of a remote server 750, by means of the network 740. The computing device 700 may then personalize the video. The personalized video may then be presented to the user 650 by means of a display device, or may be stored in the memory 720 or the storage medium 730. It should be understood that the network 740, the requestor 650, the digital imaging device 660, the server 750, and the video library 760 are not part of the computing device 700.
In the computing devices of Fig. 10 and Fig. 11, the storage medium 630 or 730 may be any storage medium in any storage device included within, or otherwise coupled or attached to, the computing device. These storage media include, for example, magnetic media such as hard disks, floppy disks, and tape; optical media such as compact discs (CD-ROM and CD-RW) and digital versatile discs (DVD and DVD±RW); flash memory cards; and any other storage media. As used herein, a storage device is a device that allows reading from and/or writing to a storage medium. Storage devices include hard disk drives, DVD drives, flash memory devices, and the like.
A computing device as used herein refers to any device with a processor, memory, and a storage device that may execute instructions, including but not limited to personal computers, server computers, computing tablets, set-top boxes, video game systems, personal video cameras, telephones, personal digital assistants (PDAs), portable computers, and laptop computers. These computing devices may run any operating system, including, for example, variations of the Linux, Unix, MS-DOS, Microsoft Windows, Palm OS, and Apple Mac OS X operating systems.
The computing device 600 or 700 may include software and/or hardware suitable for performing the functions described herein. The computing device 600 may therefore include one or more of: logic arrays, memories, analog circuits, digital circuits, software, firmware, and processors such as microprocessors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), programmable logic devices (PLDs), and programmable logic arrays (PLAs). The hardware and firmware components of the computing device 600 may include various specialized units, circuits, software, and interfaces for providing the functions and features described herein. The processing, functions, and features may be embodied in whole or in part in software that operates on a client computer, and may take the form of firmware, an application program, an applet (for example, a Java applet), a browser plug-in, a COM object, a dynamic link library (DLL), a script, one or more subroutines, or an operating system component or service. The hardware and software and their functions may be distributed such that some components are performed by a client computer and others by other devices.
Conclusion
The foregoing is merely illustrative and not limiting, and has been presented by way of example only. Although certain examples have been shown and described, it will be apparent to those skilled in the art that changes, modifications, and/or alterations may be made.
Although many of the examples presented herein involve specific combinations of method acts or system elements, it should be understood that those acts and those elements may be combined in other ways to accomplish the same objectives. With regard to flowcharts, more or fewer steps may be taken, and the steps as shown may be combined or further refined to achieve the methods described herein. Acts, elements, and features discussed only in connection with one embodiment are not intended to be excluded from a similar role in other embodiments.
For means-plus-function limitations recited in the claims, the means are not intended to be limited to the means disclosed herein for performing the recited function, but are intended to cover in scope any means, known now or later developed, for performing the recited function.
As used herein, "plurality" means two or more.
As used herein, whether in the written description or the claims, the terms "comprising", "including", "carrying", "having", "containing", and the like are to be understood as open-ended, that is, to mean including but not limited to. With respect to the claims, only the transitional phrases "consisting of" and "consisting essentially of" are, respectively, closed or semi-closed transitional phrases.
As used herein, "and/or" means that the listed items are alternatives, but the alternatives also include any combination of the listed items.

Claims (36)

1. A processing method for personalizing an original digital video, the original digital video comprising an image including an original background scene and a foreground, the foreground including an original actor, the processing method comprising:
tracking the position, orientation and expression of the original actor;
replacing at least a key portion of the original actor with imagery continuous with the background scene;
inserting a new actor into the video, the new actor substantially matching the position, orientation and expression of the replaced portion of the original actor;
recreating lighting and shading effects on the new actor; and
recreating shadows and reflections of the new actor.
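The steps recited in claim 1 amount, per frame, to a masked compositing pass over tracked layers. The sketch below is not the claimed implementation; it assumes the tracking, rendering and relighting stages have already produced their per-frame outputs (a matte of the original actor, a background fill, a relit render of the new actor, shadow and reflection layers) and only illustrates, with plain NumPy arrays, how such outputs could be combined.

import numpy as np

def composite_frame(frame, actor_mask, background_fill,
                    new_actor_rgba, shadow_layer, reflection_layer):
    """Combine the per-frame outputs of the tracking/rendering stages.

    frame            -- H x W x 3 original frame (float in [0, 1])
    actor_mask       -- H x W matte of the key portion of the original actor
    background_fill  -- H x W x 3 imagery continuous with the background scene
    new_actor_rgba   -- H x W x 4 render of the new actor, already matched to the
                        tracked position, orientation and expression, and already
                        relit to the scene (lighting and shading effects)
    shadow_layer     -- H x W darkening matte for the new actor's shadow
    reflection_layer -- H x W x 4 render of the new actor's reflection, if any
    """
    out = frame.copy()

    # 1. Replace the key portion of the original actor with background-continuous imagery.
    m = actor_mask[..., None]
    out = out * (1.0 - m) + background_fill * m

    # 2. Recreate the new actor's shadow by darkening the affected background.
    out = out * (1.0 - 0.5 * shadow_layer[..., None])

    # 3. Insert the reflection, then the relit new actor, by alpha compositing.
    for layer in (reflection_layer, new_actor_rgba):
        a = layer[..., 3:4]
        out = out * (1.0 - a) + layer[..., :3] * a

    return np.clip(out, 0.0, 1.0)

The fixed 0.5 shadow density and the layer ordering in this sketch are arbitrary choices; a production pipeline would derive them from the lighting and shading identified during video preparation.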
2. The processing method of claim 1, wherein the replaced portion of the original actor comprises at least the face and adjacent skin areas, the adjacent skin areas including visible portions of the ears and neck.
3. The processing method of claim 2, wherein
the image includes at least one non-replaced skin area of the original actor having a skin appearance different from the skin appearance of the new actor, and
the processing method further comprises altering the non-replaced skin area to match the skin appearance of the new actor.
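Claim 3 does not prescribe how the remaining skin is altered. One minimal sketch, assuming a mask of the non-replaced skin and a sample of the new actor's skin pixels are available, is a mean/standard-deviation color transfer over the masked pixels, working directly in RGB for brevity.

import numpy as np

def match_skin_appearance(frame, skin_mask, new_actor_skin_pixels):
    """Shift the color statistics of the non-replaced skin area toward the new actor's skin.

    frame                  -- H x W x 3 frame (float in [0, 1])
    skin_mask              -- H x W boolean mask of the original actor's remaining skin
    new_actor_skin_pixels  -- N x 3 sample of skin pixels taken from the new actor model
    """
    out = frame.copy()
    src = out[skin_mask]                                   # pixels to be altered
    src_mean, src_std = src.mean(axis=0), src.std(axis=0) + 1e-6
    dst_mean, dst_std = (new_actor_skin_pixels.mean(axis=0),
                         new_actor_skin_pixels.std(axis=0))

    # Normalize the original skin tones, then re-scale to the new actor's statistics.
    out[skin_mask] = (src - src_mean) / src_std * dst_std + dst_mean
    return np.clip(out, 0.0, 1.0)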
4. The processing method of claim 1, wherein the entire original actor is replaced with imagery continuous with the background scene.
5. The processing method of claim 1, further comprising, prior to inserting, applying a combining and parametric deformation process to data comprising the new actor to create a blended new actor.
6. The processing method of claim 1, wherein replacing further comprises replacing a shadow or reflection of the original actor.
7. The processing method of claim 1, further comprising:
inserting a new object into the video;
recreating lighting and shading effects on the new object; and
recreating shadows and reflections of the new object.
8. The processing method of claim 7, wherein inserting the new object uses a 3D model of the new object.
9. The processing method of claim 7, wherein the new object is selected based on personal information related to the new actor.
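Claims 9 and 12 leave open how the personal information drives the object choice. A minimal sketch, assuming the new actor model carries a simple attribute profile, might score candidate objects by attribute overlap as follows; every field name and attribute string is hypothetical.

from dataclasses import dataclass, field

@dataclass
class CandidateObject:
    name: str
    target_attributes: set = field(default_factory=set)  # e.g. {"age:18-34", "interest:gaming"}

def select_new_object(personal_attributes: set, candidates: list) -> CandidateObject:
    """Pick the candidate object whose targeting best overlaps the new actor's profile."""
    return max(candidates, key=lambda c: len(c.target_attributes & personal_attributes))

# Example usage with made-up data:
profile = {"age:18-34", "interest:gaming", "region:us"}
objects = [CandidateObject("soda_can", {"age:18-34", "interest:sports"}),
           CandidateObject("game_console", {"age:18-34", "interest:gaming"})]
chosen = select_new_object(profile, objects)   # -> game_console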
10. The processing method of claim 1,
wherein the original video includes an original object having a position and an orientation,
the processing method further comprising:
replacing at least a portion of the original object with imagery continuous with the background scene;
inserting a new object into the video, the new object substantially matching the position and orientation of the replaced portion of the original object;
recreating lighting and shading effects on the new object; and
recreating shadows and reflections of the new object.
11. The processing method of claim 10, wherein inserting the new object uses a 3D model of the new object.
12. The processing method of claim 10, wherein the new object is selected based on personal information related to the new actor.
13. The processing method of claim 1, further comprising substituting a new background scene for at least a portion of the original background scene.
14. The processing method of claim 1, wherein
the video includes a soundtrack separable into original background audio elements and dialog spoken by the original actor, and
the processing method further comprises substituting new background audio elements for at least a portion of the original background audio elements.
15. The processing method of claim 1, wherein
the video includes a soundtrack separable into original background audio elements and dialog spoken by the original actor, and
the processing method further comprises adding new background audio elements to the original soundtrack.
16. The processing method of claim 1, wherein the video includes a soundtrack separable into original background audio elements and dialog spoken by the original actor, the processing method further comprising:
obtaining a voice sample of the new actor;
analyzing the voice sample of the new actor to define one or more descriptive characteristics of the new actor's voice; and
transforming the dialog spoken by the original actor using the one or more descriptive characteristics of the new actor's voice.
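Claim 16 names neither the descriptive characteristics nor the transformation. A common stand-in is to treat median pitch as the characteristic and apply a pitch shift, as sketched below; this assumes the librosa library is available and is only one of many possible realizations, not the claimed method.

import numpy as np
import librosa

def median_pitch_hz(audio, sr, fmin=65.0, fmax=400.0):
    """Estimate one descriptive characteristic of a voice: its median fundamental frequency."""
    f0 = librosa.yin(audio, fmin=fmin, fmax=fmax, sr=sr)
    return float(np.nanmedian(f0))

def transform_dialog(original_dialog, sr, new_actor_sample):
    """Shift the original actor's dialog toward the new actor's median pitch."""
    shift_semitones = 12.0 * np.log2(
        median_pitch_hz(new_actor_sample, sr) / median_pitch_hz(original_dialog, sr))
    return librosa.effects.pitch_shift(original_dialog, sr=sr, n_steps=shift_semitones)

A fuller treatment would also carry over timbre and speaking-rate characteristics, which a pitch shift alone does not capture.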
17. A processing method for personalizing a video, the processing method comprising:
providing a video library of a plurality of prepared videos, each prepared video resulting from a video preparation process;
providing an actor library of one or more new actor models, each new actor model resulting from an actor modeling process;
selecting a video from the video library;
selecting a new actor model from the actor library; and
applying a personalization process to create a personalized version of the selected video using the selected new actor model.
18. The processing method of claim 17, wherein the video preparation process further comprises:
providing a video comprising an image including an original background scene and a foreground, the foreground including an original actor;
tracking the position, orientation and expression of the original actor;
replacing at least a key portion of the original actor with imagery continuous with the background scene; and
identifying and tracking lighting, shading, shadows and reflections in the video.
19. The processing method of claim 18, wherein the actor modeling process further comprises:
providing at least one 2D digital image of a new actor and related supporting information; and
creating a new actor model from the 2D digital image and the related supporting information, the new actor model comprising a 3D model, a demographic profile and other personal information.
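The new actor model of claim 19 bundles geometry with profile data. A bare-bones container for it might look like the following sketch, where every field name and type is an assumption rather than something the claim prescribes.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class NewActorModel:
    """Container for the actor model produced by the actor modeling process."""
    vertices: np.ndarray                    # 3D head/face mesh recovered from the 2D image(s)
    faces: np.ndarray                       # triangle indices of the mesh
    texture: np.ndarray                     # skin texture sampled from the 2D digital image
    demographic_profile: dict = field(default_factory=dict)   # e.g. {"age_range": "18-34"}
    personal_info: dict = field(default_factory=dict)         # other supporting information
    voice_characteristics: dict = field(default_factory=dict) # filled in if a voice sample is analyzed (claim 24)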
20. The processing method of claim 18, wherein the personalization process further comprises:
inserting a new actor into the video using the new actor model, the new actor substantially matching the position, orientation and expression of the replaced portion of the original actor;
recreating lighting and shading effects on the new actor; and
recreating shadows and reflections of the new actor.
21. The processing method of claim 20, wherein
the video includes at least one non-replaced skin area of the original actor having a skin appearance different from the skin appearance of the new actor, and
the processing method further comprises altering the non-replaced skin area to match the skin appearance of the new actor.
22. The processing method of claim 20, wherein
the video preparation process further comprises:
tracking the position and orientation of an original object in the video; and
replacing at least a key portion of the original object with imagery continuous with the background scene;
and the personalization process further comprises:
inserting a new object into the video, the position and orientation of the new object substantially matching the position and orientation of the original object;
recreating lighting and shading effects on the new object; and
recreating shadows and reflections of the new object.
23. The processing method of claim 20, wherein
the video preparation process further comprises:
tracking the position and orientation of a location within the video suitable for placement of an object;
and the personalization process further comprises:
inserting a new object into the video, the position and orientation of the new object substantially matching the position and orientation of the location;
recreating lighting and shading effects on the new object; and
recreating shadows and reflections of the new object.
24. The processing method of claim 20, wherein the selected video includes a soundtrack separable into an original background soundtrack and dialog spoken by the original actor, and wherein
the new actor modeling process further comprises:
obtaining a voice sample of the new actor; and
analyzing the voice sample of the new actor to define one or more descriptive characteristics of the new actor's voice;
and the personalization process further comprises:
transforming the dialog spoken by the original actor using the one or more descriptive characteristics of the new actor's voice.
25. A processing method for creating a personalized version of an original digital video, the original video including an image of an original actor, the processing method comprising:
replacing at least a portion of the image of the original actor with an image of a new actor; and
inserting an image of a new object into the video.
26. The processing method of claim 25, wherein the image of the new object is selected from images of a plurality of candidate objects based on personal information related to the new actor.
27. The processing method of claim 26, wherein the original video further includes an image of an original object, and at least a portion of the image of the original object is replaced by the image of the new object.
28. A computing device for creating a personalized version of an original digital video, the original digital video comprising an image including a background scene and a foreground, the foreground including an image of an original actor, the computing device comprising:
a processor;
a memory coupled to the processor; and
a storage medium having instructions stored thereon which, when executed, cause the computing device to perform actions comprising:
personalizing a video comprising an image including an image of an original actor, the actions further comprising:
tracking the position, orientation and expression of the original actor;
replacing at least a key portion of the original actor with imagery continuous with the background scene;
inserting a new actor into the video, the new actor substantially matching the position, orientation and expression of the replaced portion of the original actor;
recreating lighting and shading effects on the new actor; and
recreating shadows and reflections of the new actor.
29. The computing device of claim 28, further comprising an interface to a network, wherein the actions performed by the computing device further comprise receiving a 2D digital image of the new actor via the network prior to the inserting step.
30. The computing device of claim 29, wherein the actions performed by the computing device further comprise transmitting the personalized video via the network after the step of recreating shadows and reflections.
31. The computing device of claim 29, further comprising an interface to one or more databases containing a plurality of videos.
32. The computing device of claim 31, wherein the actions performed by the computing device further comprise, prior to the replacing step:
receiving, via the network, a request to customize one of the plurality of videos; and
retrieving the requested video from the one or more databases.
33. The computing device of claim 31, wherein the actions performed by the computing device further comprise, prior to the replacing step:
selecting one of the plurality of videos based on personal information related to the new actor; and
retrieving the selected video from the one or more databases.
34. The computing device of claim 28, further comprising a first interface to a digital imaging device, wherein the actions performed by the computing device further comprise receiving a 2D digital image of the new actor via the interface prior to the inserting step.
35. The computing device of claim 34, further comprising a second interface to a network, wherein the actions performed by the computing device further comprise requesting and receiving the original video via the network prior to the replacing step.
36. A storage medium having instructions stored thereon which, when executed by a processor, cause the processor to perform actions comprising:
personalizing a video comprising an image including an image of an original actor, the actions further comprising:
tracking the position, orientation and expression of the original actor;
replacing at least a key portion of the original actor with imagery continuous with the background scene;
inserting a new actor into the video, the new actor substantially matching the position, orientation and expression of the replaced portion of the original actor;
recreating lighting and shading effects on the new actor; and
recreating shadows and reflections of the new actor.
CNA2006800341565A 2005-09-16 2006-09-14 Personalizing a video Pending CN101563698A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US71785205P 2005-09-16 2005-09-16
US60/717,852 2005-09-16
US60/717,937 2005-09-16
US60/717,938 2005-09-16

Publications (1)

Publication Number Publication Date
CN101563698A true CN101563698A (en) 2009-10-21

Family

ID=41221583

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2006800341565A Pending CN101563698A (en) 2005-09-16 2006-09-14 Personalizing a video

Country Status (1)

Country Link
CN (1) CN101563698A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102196245A (en) * 2011-04-07 2011-09-21 北京中星微电子有限公司 Video play method and video play device based on character interaction
CN102572261A (en) * 2010-10-19 2012-07-11 三星电子株式会社 Method for processing an image and an image photographing apparatus applying the same
CN103927161A (en) * 2013-01-15 2014-07-16 国际商业机器公司 Realtime Photo Retouching Of Live Video
CN104392729A (en) * 2013-11-04 2015-03-04 贵阳朗玛信息技术股份有限公司 Animation content providing method and device
CN104461222A (en) * 2013-09-16 2015-03-25 联想(北京)有限公司 Information processing method and electronic equipment
CN106068651A (en) * 2014-03-18 2016-11-02 皇家飞利浦有限公司 Audio-visual content item data stream
US9542975B2 (en) 2010-10-25 2017-01-10 Sony Interactive Entertainment Inc. Centralized database for 3-D and other information in videos
CN106920212A (en) * 2015-12-24 2017-07-04 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment for sending stylized video
CN107645655A (en) * 2016-07-21 2018-01-30 迪士尼企业公司 The system and method for making it perform in video using the performance data associated with people
CN108122271A (en) * 2017-12-15 2018-06-05 南京变量信息科技有限公司 A kind of description photo automatic generation method and device
CN109377502A (en) * 2018-10-15 2019-02-22 深圳市中科明望通信软件有限公司 A kind of image processing method, image processing apparatus and terminal device
CN109474849A (en) * 2018-11-12 2019-03-15 广东乐心医疗电子股份有限公司 Multimedia data processing method, system, terminal and computer readable storage medium
CN109618223A (en) * 2019-01-28 2019-04-12 北京易捷胜科技有限公司 A kind of sound replacement method
CN109841225A (en) * 2019-01-28 2019-06-04 北京易捷胜科技有限公司 Sound replacement method, electronic equipment and storage medium
CN110337030A (en) * 2019-08-08 2019-10-15 腾讯科技(深圳)有限公司 Video broadcasting method, device, terminal and computer readable storage medium
CN110602424A (en) * 2019-08-28 2019-12-20 维沃移动通信有限公司 Video processing method and electronic equipment
CN111432234A (en) * 2020-03-11 2020-07-17 咪咕互动娱乐有限公司 Video generation method and device, electronic equipment and readable storage medium
CN111741345A (en) * 2020-06-23 2020-10-02 南京硅基智能科技有限公司 Product display method and system based on video face changing
CN113223555A (en) * 2021-04-30 2021-08-06 北京有竹居网络技术有限公司 Video generation method and device, storage medium and electronic equipment
CN113302694A (en) * 2019-01-18 2021-08-24 斯纳普公司 System and method for generating personalized video based on template

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572261A (en) * 2010-10-19 2012-07-11 三星电子株式会社 Method for processing an image and an image photographing apparatus applying the same
US9542975B2 (en) 2010-10-25 2017-01-10 Sony Interactive Entertainment Inc. Centralized database for 3-D and other information in videos
CN103635899B (en) * 2010-10-25 2017-10-13 索尼电脑娱乐公司 For the 3D and the centralized data base of other information in video
CN102196245A (en) * 2011-04-07 2011-09-21 北京中星微电子有限公司 Video play method and video play device based on character interaction
CN103927161A (en) * 2013-01-15 2014-07-16 国际商业机器公司 Realtime Photo Retouching Of Live Video
CN104461222A (en) * 2013-09-16 2015-03-25 联想(北京)有限公司 Information processing method and electronic equipment
CN104392729A (en) * 2013-11-04 2015-03-04 贵阳朗玛信息技术股份有限公司 Animation content providing method and device
CN106068651A (en) * 2014-03-18 2016-11-02 皇家飞利浦有限公司 Audio-visual content item data stream
CN106068651B (en) * 2014-03-18 2020-10-16 皇家飞利浦有限公司 Audio-visual content item data stream
CN106920212A (en) * 2015-12-24 2017-07-04 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment for sending stylized video
CN107645655B (en) * 2016-07-21 2020-04-17 迪士尼企业公司 System and method for performing in video using performance data associated with a person
US10311917B2 (en) 2016-07-21 2019-06-04 Disney Enterprises, Inc. Systems and methods for featuring a person in a video using performance data associated with the person
CN107645655A (en) * 2016-07-21 2018-01-30 迪士尼企业公司 The system and method for making it perform in video using the performance data associated with people
CN108122271A (en) * 2017-12-15 2018-06-05 南京变量信息科技有限公司 A kind of description photo automatic generation method and device
CN109377502A (en) * 2018-10-15 2019-02-22 深圳市中科明望通信软件有限公司 A kind of image processing method, image processing apparatus and terminal device
CN109474849A (en) * 2018-11-12 2019-03-15 广东乐心医疗电子股份有限公司 Multimedia data processing method, system, terminal and computer readable storage medium
CN113302694A (en) * 2019-01-18 2021-08-24 斯纳普公司 System and method for generating personalized video based on template
CN109618223B (en) * 2019-01-28 2021-02-05 北京易捷胜科技有限公司 Sound replacing method
CN109841225A (en) * 2019-01-28 2019-06-04 北京易捷胜科技有限公司 Sound replacement method, electronic equipment and storage medium
CN109841225B (en) * 2019-01-28 2021-04-30 北京易捷胜科技有限公司 Sound replacement method, electronic device, and storage medium
CN109618223A (en) * 2019-01-28 2019-04-12 北京易捷胜科技有限公司 A kind of sound replacement method
CN110337030B (en) * 2019-08-08 2020-08-11 腾讯科技(深圳)有限公司 Video playing method, device, terminal and computer readable storage medium
CN110337030A (en) * 2019-08-08 2019-10-15 腾讯科技(深圳)有限公司 Video broadcasting method, device, terminal and computer readable storage medium
CN110602424A (en) * 2019-08-28 2019-12-20 维沃移动通信有限公司 Video processing method and electronic equipment
CN111432234A (en) * 2020-03-11 2020-07-17 咪咕互动娱乐有限公司 Video generation method and device, electronic equipment and readable storage medium
CN111741345A (en) * 2020-06-23 2020-10-02 南京硅基智能科技有限公司 Product display method and system based on video face changing
CN113223555A (en) * 2021-04-30 2021-08-06 北京有竹居网络技术有限公司 Video generation method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN101563698A (en) Personalizing a video
KR101348521B1 (en) Personalizing a video
Kietzmann et al. Deepfakes: Trick or treat?
Bevan et al. Behind the curtain of the" ultimate empathy machine" on the composition of virtual reality nonfiction experiences
TWI521456B (en) Video content-aware advertisement placement
US7859551B2 (en) Object customization and presentation system
TW482986B (en) Automatic personalized media identification system
US7827488B2 (en) Image tracking and substitution system and methodology for audio-visual presentations
CN101946500B (en) Real time video inclusion system
JP2022508674A (en) Systems and methods for 3D scene expansion and reconstruction
WO2018170272A1 (en) Automatically controlling a multiplicity of televisions over a network by the outputs of a subset of interfaces
CN107633441A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
US20070250901A1 (en) Method and apparatus for annotating media streams
US20130251347A1 (en) System and method for portrayal of object or character target features in an at least partially computer-generated video
JP2020150519A (en) Attention degree calculating device, attention degree calculating method and attention degree calculating program
Chen Real-time interactive micro movie placement marketing system based on discrete-event simulation
JP7210340B2 (en) Attention Level Utilization Apparatus, Attention Level Utilization Method, and Attention Level Utilization Program
TW201946476A (en) Intelligent method and device for soliciting customers selecting a target customer with a relatively high coincidence degree of potential customer feature from crowd
KR102334403B1 (en) Contents production apparatus inserting advertisement in animation, and control method thereof
CN118014659A (en) Electronic propaganda product generation and playing method, system and storage medium
CN114501097A (en) Inserting digital content in video
Souter Building the Request for Proposal
Volina Libraries in the digital age
JP2003308541A (en) Promotion system and method, and virtuality/actuality compatibility system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20091021