CN104272377B - Moving picture project management system - Google Patents
- Publication number
- CN104272377B (application CN201380018690.7A)
- Authority
- CN
- China
- Prior art keywords
- frame
- mask
- shot
- project
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/103—Workflow collaboration or project management
Landscapes
- Processing Or Creating Images (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Studio Devices (AREA)
Abstract
A moving picture project management system serves reviewers, coordinators, and artists. Artists use image analysis, image enhancement, and computer graphics processing, for example to convert two-dimensional images into three-dimensional images, or to otherwise create or alter motion pictures. Efficient management of motion picture projects enables an enterprise to manage assets, control costs, forecast budgets and profit margins, reduce archival storage, and otherwise provide displays suited to each specific role so as to improve worker efficiency.
Description
This application is a continuation-in-part of U.S. utility patent application Serial No. 13/029,862, filed February 17, 2011, which is a continuation-in-part of U.S. utility patent application Serial No. 12/976,970, filed December 22, 2010, which is a continuation-in-part of U.S. utility patent application Serial No. 12/913,614, filed October 27, 2010, which is a continuation-in-part of U.S. utility patent application Serial No. 12/542,498, filed August 17, 2009, which is a continuation-in-part of U.S. utility patent application Serial No. 12/032,969, filed February 18, 2008 and issued as U.S. Patent 7,577,312, which is a continuation of U.S. Patent 7,333,670, filed January 4, 2006, which is a divisional of U.S. Patent 7,181,081, filed June 18, 2003, which is the national-phase entry of PCT application Serial No. PCT/US02/14192, filed May 6, 2003, which claims the benefit of U.S. Provisional Patent Application 60/288,929, filed May 4, 2001. The specifications of these documents are hereby incorporated by reference in their entirety.
Background of the Invention
Technical Field
One or more embodiments of the invention relate to the field of project management in the motion picture industry, and to reviewers and to the production managers who manage artists. A production manager is also referred to as a "producer". Artists use image analysis, image enhancement, and computer graphics processing, for example to convert two-dimensional images into three-dimensional images associated with a motion picture, or to otherwise create or alter motion pictures. More specifically, but without limitation, one or more embodiments of the invention enable a moving picture project management system, configured to efficiently manage motion picture projects, to manage assets, control costs, forecast budgets and profit margins, reduce archival storage, and otherwise provide displays suited to specific roles so as to improve worker efficiency.
Background Art
Known methods for colorizing black-and-white feature films involve identifying gray-scale regions within a picture, applying to the gray scale of each selected region a pre-selected color transform or lookup table bounded by a mask operation that limits the extent of the region, and subsequently propagating those masked regions from one frame to many subsequent frames. The chief differences between U.S. Patent No. 4,984,072 (a system and method for color image enhancement) and U.S. Patent No. 3,705,762 (a method for converting black-and-white films to color films) lie in how regions of interest (ROIs) are isolated and masked, how that information is transferred to subsequent frames, and how the mask information is modified to accommodate changes in the underlying image data. In the 4,984,072 system, the operator masks regions with a single-bit paint overlay and manipulates the masks frame by frame, using a digital paintbrush method to match the motion. In 3,705,762, the operator outlines or rotoscopes each region with vector polygons, which the operator then adjusts frame by frame to create animated masks for the ROIs. Various masking techniques of this kind are also commonly used in 2D-to-3D film conversion.
In both systems described above, the selected color-transform lookup tables and the regions to which they apply are assigned manually to each successive frame to compensate for changes in the image data that the operator detects visually. All changes and motion of the underlying luminance/gray scale are detected subjectively by the operator, who manually corrects the masks frame by frame, using an interface device such as a mouse to move or adjust mask shapes to compensate for the detected motion. In every case, the underlying gray scale is a passive recipient of a mask containing a pre-selected color transform, and all mask modifications depend on operator detection and adjustment. In these prior inventions, the mask information contains no information specific to the underlying luminance/gray scale, so automatic correction of mask position and shape to follow the displacement and distortion of image features from one frame to the next is impossible.
Existing systems used to convert two-dimensional images into three-dimensional images may also require creating, for each target in an image, a wireframe model defining the 3D shape of the masked target. Creating wireframe models is an enormous undertaking. Nor do these systems use the underlying luminance/gray scale of a target to automatically position and correct the shape of the target's mask to follow the displacement and distortion of image features from one frame to the next. Consequently, a great deal of manual work is required to shape and reshape the masks through which depth, or Z-dimension, data is applied to targets. Targets that move from frame to frame therefore demand substantial human intervention. In addition, there are no known solutions for enhancing two-dimensional images into three-dimensional images that composite a background from multiple frames of a scene so that depth information can be propagated to both the background and the masked targets. Such a composite includes data from background objects whether the data is pre-existing or missing, i.e., whether there are occluded regions where a moving target never uncovers the background. In other words, known systems perform gap filling with algorithms that interpolate image data where none exists, which produces artifacts.
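The compositing idea mentioned above — filling pixels hidden behind a moving target from other frames in which the target has moved away, rather than synthesizing data — can be sketched minimally. The data layout and masks below are invented for illustration; a real system would operate on full-resolution frames with per-pixel alpha masks.

```python
# Hypothetical sketch: compositing a clean background plate from several
# frames of a scene, assuming a per-frame foreground mask is available.
# Pixels occluded by the moving target in one frame are filled from a
# frame in which the target has moved away, instead of being interpolated.

def composite_background(frames, masks):
    """frames: list of 2-D lists of pixel values; masks: same shape,
    True where the moving foreground target covers the pixel."""
    h, w = len(frames[0]), len(frames[0][0])
    background = [[None] * w for _ in range(h)]
    for frame, mask in zip(frames, masks):
        for y in range(h):
            for x in range(w):
                if background[y][x] is None and not mask[y][x]:
                    background[y][x] = frame[y][x]  # first unoccluded view
    return background

frames = [
    [[9, 1], [1, 1]],   # target (value 9) covers pixel (0, 0)
    [[5, 1], [9, 1]],   # target moved; (0, 0) now shows background 5
]
masks = [
    [[True, False], [False, False]],
    [[False, False], [True, False]],
]
print(composite_background(frames, masks))  # [[5, 1], [1, 1]]
```

Only regions the target never uncovers in any frame remain `None` and would still need synthesis.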
Current methods for converting films that include computer-generated elements or effects from 2D to 3D generally use only the final 2D image sequence making up the film. This is the current approach for converting any film from 2D data into left-and-right image pairs for stereoscopic viewing. There is no known current method that obtains and uses the metadata associated with the computer-generated elements of the film to be converted. This is so because studios holding older 2D films may not have preserved the intermediate data for a film, i.e., the metadata associated with its computer-generated elements: data volumes were formerly so large that a studio could retain only the final film data containing the rendered computer graphics elements, and discard the metadata. For the rare films whose associated metadata survives (the intermediate data associated with computer-generated elements, such as masks, or alpha and/or depth information), that metadata would greatly accelerate the depth-conversion process.
Furthermore, the typical method of converting a film from 2D to 3D in an industrial setting capable of handling the thousands of frames of a film with substantial labor or computing power uses an iterative workflow. The iterative workflow includes masking the targets in each frame, adding depth, and then rendering each frame into the left and right viewpoints forming a stereoscopic image, or left-and-right image pair. If, for example, an error exists at the edge of a masked target, the typical workflow involves an "iteration": the frame is sent back to the work group responsible for masking the object (possibly on the other side of the world, in a country with inexpensive unskilled labor), the mask is then sent to the work group responsible for rendering the image (possibly in yet another country), and the rendered image pair is then sent back to the quality assurance group. In this workflow environment, it is not uncommon for complex frames to go through many iterations. This is known as a "throw it over the fence" workflow, since each work group works independently to minimize its own current workload rather than working as a team for overall efficiency. With thousands of frames in a film, the time spent iterating frames containing artifacts can grow large, delaying the overall project. Even if the rendering is only partially redone, the time required to re-render all the images of a scene, or to ray-trace them, can impose substantial processing and delays of at least hours. Eliminating such iterations would yield enormous savings in the end-to-end, or wall-clock, time a conversion project takes, thereby increasing profit and minimizing the labor needed to carry out the workflow.
Common, simple project management concepts are well known; however, formal project management systems applied to large, complex engineering projects date from the mid-20th century. Project management at minimum involves planning and managing resources and workers to complete a time-bound activity known as a project, which is also constrained by scope and budget. Frederick Winslow Taylor, his student Henry Gantt, and Henri Fayol were the first to describe project management systematically. Work breakdown structures and Gantt charts came first, followed by the critical path method ("CPM") and the program evaluation and review technique ("PERT"), developed in industrial and defense settings respectively. Project cost estimation followed closely on these developments. Basic project management generally comprises initiation, planning, execution, monitoring/control, and closure. More sophisticated project management techniques may pursue further goals, such as ensuring defined, quantitatively managed, and optimized management processes, as described, for example, in the Capability Maturity Model Integration approach.
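The critical path method named above can be illustrated with a minimal sketch over a hypothetical conversion-pipeline task graph; the task names and durations below are invented, not taken from the patent.

```python
# Minimal critical-path (CPM) sketch: the critical path is the longest
# dependency chain through the task graph, which fixes the minimum
# project duration. Task names and durations are illustrative.

def critical_path(durations, deps):
    """durations: {task: days}; deps: {task: [prerequisite tasks]}.
    Returns (total days, ordered list of critical tasks)."""
    finish = {}

    def ef(task):  # earliest finish time, memoized via `finish`
        if task not in finish:
            start = max((ef(d) for d in deps.get(task, [])), default=0)
            finish[task] = start + durations[task]
        return finish[task]

    end = max(durations, key=ef)            # task finishing last
    path = [end]
    while deps.get(path[-1]):               # walk back along latest prereq
        path.append(max(deps[path[-1]], key=lambda d: finish[d]))
    return finish[end], list(reversed(path))

durations = {"mask": 5, "depth": 3, "render": 2, "review": 1}
deps = {"depth": ["mask"], "render": ["depth"], "review": ["render"]}
print(critical_path(durations, deps))
# (11, ['mask', 'depth', 'render', 'review'])
```

Any delay to a task on the returned path delays the whole project, which is why CPM became a standard scheduling tool.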
As noted above, an industrial-scale motion picture project typically comprises thousands of frames; beyond that, projects of this type may use enormous amounts of storage, including potentially hundreds of mask layers per frame, and hundreds of workers. To date, such projects have been managed in rather ad hoc ways in which costs are difficult to predict and financial feedback controls only loosely steer the project toward success, while asset management and other best project management practices are used minimally. In addition, project management for such projects has relied on off-the-shelf project management tools that lack the details unique to the vertical industry of motion picture effects and conversion projects. It has therefore been difficult in the film industry to forecast cost and quality and to execute projects repeatably. For example, reviewing an edited frame in an existing motion picture project can in some cases require three people: one to locate the resource among plentiful resources, one to review the resource, and another to provide annotations for feedback and rework. Although standalone tools exist for each of these tasks, they are generally not integrated, and personnel in different roles find them difficult to work with together.

Notwithstanding these known technologies, no project management solution or implementation exists that is optimized for the unique requirements of the motion picture industry. There is therefore a need for a moving picture project management system.
Summary of the Invention
Embodiments of the invention are generally directed to project management for the generation, processing, or conversion of motion pictures. A large motion picture project typically employs workers in several roles to process each of the images making up the motion picture, and the image frame count may run into the thousands. One or more embodiments of the invention enable a computer database configured to accept assignments of tasks to artists, time terms for artist tasks, a coordinator's review of artists' time versus accomplishment, also known as work product, and review of the work product by the "production" and editorial roles. The system thus allows artists working on shots, each made up of multiple images, to be managed so that projects finish successfully on budget, minimizes the typically enormous storage requirements for motion picture assets, and, given the quality and grading of the workers used and scheduled, allows the cost of future project bids to be predicted.
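The bid-forecast idea above can be sketched as a simple projection of cost from historical per-frame effort, scaled by each scheduled worker's recorded grade. All rates, grades, and the scaling model below are invented for illustration; the patent does not specify a formula.

```python
# Hedged sketch of a bid forecast: project cost from a baseline
# hours-per-frame figure, adjusted by each artist's grade (1.0 = average;
# a higher grade means proportionally faster work). Numbers are invented.

def forecast_cost(frame_count, artists, hourly_rate):
    """artists: list of (baseline_hours_per_frame, grade) tuples.
    Returns total labor cost assuming the artists work concurrently."""
    capacity = sum(grade / hours for hours, grade in artists)  # frames/hour
    wall_clock_hours = frame_count / capacity
    # Every artist is billed for the full wall-clock duration.
    return round(wall_clock_hours * hourly_rate * len(artists), 2)

# 1,000 frames, two artists: one average, one graded 25% faster.
print(forecast_cost(1000, [(2.0, 1.0), (2.0, 1.25)], 40))  # 71111.11
```

A real system would also fold in historical rework rates and per-role wage differences.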
Tasks in a motion picture project generally include project review, project access, task assignment, execution of assigned tasks or project work, review of the work product, and tasks related to archiving and delivering the project's work product. One or more embodiments of the invention enable workers in different "roles" to view project tasks in a manner consistent with, and supportive of, their roles. This is unknown in the motion picture industry. In one or more embodiments of the invention, roles including "editor", "asset manager", "visual effects supervisor", "coordinator" or "producer", "artist", "art director", "stereographer", "compositor", "reviewer", and "production assistant" may be utilized. In a simpler sense, for ease of illustration, there are three broad classes: production workers, who manage the artists; artists, who perform most of the work related to the work product; and editors, who review the work product and provide feedback. Each of these roles may use a unique or shared view of the motion picture frames, and/or of information related to each image or to the other assets that the role is assigned to work on.
General workflow for the review stage
Generally, the editor and/or asset manager and/or visual effects supervisor roles use a tool that displays the motion picture on a computer display. Such a tool enables the various roles involved in this stage, for example, to break the motion picture down into the scenes or shots to be worked on. One such tool is the commercially available "FRAME™" product.
General workflow for the access stage
Generally, an asset manager inputs the various scene breakdowns and other resources, such as alpha masks, layers of computer-generated elements, or any other resources associated with scenes in the motion picture, into a database. In one or more embodiments of the invention, any type of database may be used. One such tool that may be used to store information related to the motion picture and to the assets used for project management is the project management database "TACTIC™", commercially available from Southpaw Technology™. In one or more embodiments of the invention, any database may be used, so long as motion-picture-specific features are included in the project management database. One or more embodiments of the invention update the "snapshot" and "file" tables in the project management database. The architecture of the project management database is described briefly in this section, and in more detail in the Detailed Description section below.
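A minimal illustration of "snapshot" and "file" tables of the kind described above can be given with an in-memory SQLite database. The column names here are assumptions chosen for clarity, not the actual TACTIC™ schema.

```python
# Illustrative sqlite3 sketch of "snapshot" and "file" tables like those
# the system is said to update; a snapshot versions an asset, and each
# file row points at a stored layer or frame belonging to that snapshot.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE snapshot (
        id INTEGER PRIMARY KEY,
        asset_code TEXT NOT NULL,      -- e.g. shot or scene identifier
        version INTEGER NOT NULL,
        context TEXT                   -- e.g. 'mask', 'depth', 'composite'
    );
    CREATE TABLE file (
        id INTEGER PRIMARY KEY,
        snapshot_id INTEGER REFERENCES snapshot(id),
        path TEXT NOT NULL             -- location of the stored layer/frame
    );
""")
con.execute("INSERT INTO snapshot VALUES (1, 'shot_010', 1, 'mask')")
con.execute("INSERT INTO file VALUES (1, 1, '/store/shot_010/mask_v1.exr')")
row = con.execute("""
    SELECT s.asset_code, s.version, f.path
    FROM snapshot s JOIN file f ON f.snapshot_id = s.id
""").fetchone()
print(row)  # ('shot_010', 1, '/store/shot_010/mask_v1.exr')
```

Versioning snapshots rather than overwriting files is what lets earlier mask or depth passes be reviewed or rolled back.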
General workflow for the assignment stage
Generally, a production worker uses an interface coupled to the project management database to assign to particular workers the particular tasks associated with their roles, together with the images assigned to each worker and the shot or scene in the motion picture to which they relate. One or more embodiments of the invention use the digital asset management tables of a basic project management database and add fields that improve the basic project management functionality, so as to optimize the project management process for the motion picture industry. One or more embodiments of the invention update the "task" table in the project management database.
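A minimal sketch of the kind of record a "task" table row might hold, including the status transitions that the later project-work stage relies on. The field names and status values below are invented for illustration, not the actual schema.

```python
# Hypothetical task record: a production worker assigns a shot-related
# task to an artist with a due date, and status changes are logged so
# time-versus-accomplishment can later be reviewed.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Task:
    shot: str            # shot/scene the task belongs to
    assignee: str
    role: str            # e.g. 'artist', 'compositor', 'reviewer'
    due: date
    status: str = "assigned"
    log: list = field(default_factory=list)

    def set_status(self, new_status):
        self.log.append((self.status, new_status))  # keep transition history
        self.status = new_status

task = Task("shot_042", "artist_7", "artist", date(2013, 4, 1))
task.set_status("in_progress")
task.set_status("pending_review")
print(task.status, task.log)
# pending_review [('assigned', 'in_progress'), ('in_progress', 'pending_review')]
```

The logged transitions are what a coordinator would compare against the task's time term.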
General workflow for the project work stage
Generally, artists, stereographers, and compositors perform a large portion of the overall work on the motion picture. These roles typically obtain their tasks using a clock tool and set task status along with start and stop times for each task. Typically, artists perform masking and region design and the initial depth augmentation of frames. Artists generally use a ray-tracing program, which may include features such as automated mask tracking for tasks such as mask cleanup, together with NUKE™, commercially available from THE FOUNDRY™. Once the client approves the visual effects and/or depth work in a scene, compositors use the same tools the artists use, and generally also other tools, such as AFTER EFFECTS™, to finish the scene. In one or more embodiments of the invention, the person who worked on a particular asset is stored, for example, in a custom field in the project management database.
In one workflow scheme, a worker performing region design classifies the elements in a scene into, for example, two separate categories. A scene generally includes, for example, two or more chronologically ordered images. The two categories are background elements (i.e., static sets and foreground elements) and movement elements (such as actors, automobiles, and the like) that move through the scene. In embodiments of the invention, these background elements and movement elements are treated separately, much as in traditional animation. In addition, many films now include computer-generated elements (also known as computer graphics or CG, or as computer-generated imagery or CGI), which include targets that do not in fact exist, such as robots or spacecraft, or that are added to the film as effects, such as dust, smoke, clouds, and so on. Computer-generated elements can include background elements or movement elements.
Motion elements: Motion elements are displayed as a series of sequentially tiled frame sets or thumbnail images complete with background elements. Motion elements are masked in key frames using a plurality of operator interface tools common to paint systems, as well as unique tools such as relative bimodal thresholding, in which masks are applied selectively to the bright or dark areas bifurcated by a cursor brush. After the key frame is fully designed and masked, the mask information from the key frame is then applied to all frames in the display using mask fitting techniques, which include:
1. Automatic mask fitting using Fast Fourier Transform and gradient descent calculations based on luminance and pattern matching, which reference the same masked area of the key frame followed by all existing subsequent frames in succession. Since a computer system implementing embodiments of the invention can reshape at least the mask outline from frame to frame, a great amount of labor can be saved relative to this process as traditionally performed by hand. In 2D-to-3D conversion projects, when a human-recognizable object rotates, for example, the sub-masks of a region of interest may be adjusted manually, and this process may be "tweened" so that the computer system automatically adjusts the sub-masks from frame to frame between key frames to save additional labor.
2. Bezier curve animation with edge detection serving as an automatic animation guide.
3. Polygon animation with edge detection serving as an automatic animation guide.
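Purely as an illustrative sketch of the luminance-based fitting step, the fragment below re-locates a masked key-frame region in the next frame by a brute-force search for the offset with minimum mean squared luminance error. The exhaustive search stands in for the FFT and gradient descent calculations named above, and all names and data layouts are assumptions of this example, not the patented implementation.

```python
def mse(a, b):
    """Mean squared error between two equally sized luminance patches."""
    n = len(a) * len(a[0])
    return sum((a[y][x] - b[y][x]) ** 2
               for y in range(len(a)) for x in range(len(a[0]))) / n

def patch(frame, x0, y0, w, h):
    """Extract a w-by-h patch of luminance values at (x0, y0)."""
    return [row[x0:x0 + w] for row in frame[y0:y0 + h]]

def fit_mask_offset(ref_frame, next_frame, x0, y0, w, h, search=2):
    """Return the (dx, dy) minimizing patch error within a small search window."""
    best, best_err = (0, 0), float("inf")
    ref = patch(ref_frame, x0, y0, w, h)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = mse(ref, patch(next_frame, x0 + dx, y0 + dy, w, h))
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best
```

The returned offset would then be applied to the mask outline before any per-point reshaping, so only the residual shape change needs further fitting.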
In one or more embodiments of the invention, computer-generated elements are imported using RGBAZ files, which include an optional alpha mask and/or depth on a per-pixel or sub-pixel basis for the computer-generated elements. One example of such a file format is the EXR file format. Any other file format from which depth and/or alpha information can be imported is in keeping with the spirit of the invention. Embodiments of the invention import any type of file associated with computer-generated elements in order to provide instant depth values for the portions of an image associated with those elements. In this way, no mask fitting or reshaping is required from frame to frame for any computer-generated element, since per-pixel or sub-pixel alpha and depth already exist for the element, or are otherwise imported or obtained. For complex films with large numbers of computer-generated elements, importing the alpha and depth of the computer-generated elements makes the conversion of two-dimensional images into image pairs for left- and right-eye viewing economically feasible. One or more embodiments of the invention allow background elements and motion elements to have depth associated with them, or otherwise set or adjusted, so that all objects other than the computer-generated objects are depth adjusted artistically. In addition, embodiments of the invention allow translation, scaling, or normalization of the depths imported, for example from RGBAZ files associated with computer-generated objects, so that all elements in a frame or frame sequence maintain relative depth integrity. Furthermore, any other metadata that exists for the elements making up the images of the film, such as feature mattes or alpha or other masks, may also be imported and utilized as masks modified for the conversion operations defined herein. One format of file from which metadata for the photographic elements in a scene may be imported includes the RGBA file format. By "stacking" the different object layers from deepest to nearest, applying any alpha or mask of each element, and horizontally translating the nearest objects for the left and right images, a final depth-enhanced image pair is created based on the input image and any computer-generated element metadata.
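The deepest-to-nearest stacking described above can be sketched for a single scanline as follows. This is a minimal illustration only: the layer dictionary layout and function name are assumptions of this example, and real RGBAZ data would carry full-color pixels rather than scalar values.

```python
def composite_scanline(layers, width, bg=0.0):
    """Alpha-blend layers over a background, deepest layer first.

    layers: list of dicts with 'depth' (scalar), 'alpha' and 'color'
    (per-pixel lists), e.g. as read from an RGBAZ/EXR-style file.
    """
    out = [bg] * width
    # sort so the deepest layer is applied first; nearer layers paint over it
    for layer in sorted(layers, key=lambda l: l["depth"], reverse=True):
        for x in range(width):
            a = layer["alpha"][x]
            out[x] = a * layer["color"][x] + (1.0 - a) * out[x]
    return out
```

Because each computer-generated layer already carries its own alpha and depth, no mask fitting is needed: the stack order and blend weights come directly from the imported metadata.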
In another embodiment of the invention, these background elements and motion elements are combined separately into single frame representations of multiple frames, either as tiled frame sets or as a single frame composite of all elements (i.e., including both motion and background/foreground elements), which then becomes a visual reference database for the computer-controlled application of masks within a sequence composed of a plurality of frames. Each pixel address within the reference visual database corresponds to a mask/lookup table address within the digital frame, and to the X, Y, Z locations of the subsequent "raw" frames that were used to create the reference visual database. Masks are applied to subsequent frames based on various differential image processing methods, such as edge detection combined with pattern recognition and other sub-mask analyses, aided by operator segmentation of regions of interest from the reference object or frame and by operator-directed detection of subsequent regions corresponding to the original region of interest. In this manner, the gray scale actively determines the location and shape of each mask (and its corresponding color lookup from frame to frame for colorization projects, or its depth information for 2D-to-3D conversion projects) that is applied in a keying fashion within predetermined and operator-controlled regions of interest.
Camera pan backgrounds and static foreground elements: Using a series of phase correlation, image fitting, and focal length estimation techniques, the static foreground and background elements within a plurality of sequential images comprising a camera pan are combined and fitted together to create a composite single frame that represents the series of images used in its construction. During this construction process, motion elements are removed through operator-adjusted global positioning of the overlapping sequential frames.
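As a self-contained illustration of the phase correlation step named above, the sketch below estimates the shift between two one-dimensional signals (standing in for successive pan frames) from the peak of the inverse transform of their normalized cross-power spectrum. The naive DFT is for clarity only; a production tool would use a 2-D FFT, and all names here are this example's assumptions.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlate(a, b):
    """Estimate the circular shift d such that b[t] == a[t - d]."""
    ga, gb = dft(a), dft(b)
    cross = []
    for u, v in zip(ga, gb):
        c = v * u.conjugate()          # cross-power spectrum term
        m = abs(c)
        cross.append(c / m if m > 1e-12 else 0j)  # keep phase only
    corr = [z.real for z in idft(cross)]
    return corr.index(max(corr))       # peak location gives the shift
```

In a pan, the per-frame shifts recovered this way determine where each frame lands inside the composite background.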
For colorization projects, the single background image representing the series of camera pan images is color designed using multiple color transform lookup tables limited only by the number of pixels in the display. This allows the designer to include as much detail as desired, including airbrushing of mask information and other mask application techniques that provide maximum creative expression. For depth conversion projects (i.e., two-dimensional to three-dimensional movie conversion, for example), the single background image representing the series of camera pan images may be utilized to set the depths of the various items in the background. Once the background color/depth design is complete, the mask information is automatically transferred to all of the frames that were used to create the single composite image. In this manner, color or depth work is performed once per multiple frames and/or scene rather than once per frame, and the color/depth information is automatically propagated to the individual frames via embodiments of the invention. Masks from a colorization project can be combined or grouped for a depth conversion project, since color masks may contain more sub-regions than depth conversion masks. For example, for a colorization project a face may have several masks applied to areas such as the lips, eyes, hair, and the like, whereas a depth conversion project may only require the outline of a person's head, or the profile of a person's nose, or a few geometric sub-masks, in order to apply depth to it. Masks from a colorization project can serve as a starting point for a depth conversion project, since defining the outlines of human-recognizable objects is inherently time-consuming, and such masks can be utilized to begin the depth conversion masking process so as to save time. Any computer-generated elements at the background level can be applied to the single background image.
In one or more embodiments of the invention, the image displacement information relative to each frame, generated during the creation of the single composite image representing the pan, is registered in a text file and used to apply the single composite mask to all of the frames that were used to create the composite image.
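The offset-driven re-application of the composite mask can be sketched as follows. For simplicity this works on one scanline of mask labels; the dictionary of per-frame offsets stands in for the text file mentioned above, and all names are illustrative assumptions.

```python
def masks_for_shot(composite_mask, offsets, frame_w):
    """Cut a frame-sized mask out of the wide pan-composite mask per frame.

    composite_mask: one scanline of mask labels spanning the whole pan.
    offsets: {frame_index: x_offset}, as recorded in the offset text file.
    frame_w: width of an individual frame.
    """
    return {i: composite_mask[dx:dx + frame_w] for i, dx in offsets.items()}
```

Because the same composite mask is sliced for every frame, a single design pass propagates automatically to the entire shot.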
Since the foreground motion elements are masked separately before the background mask is applied, the background mask information is applied wherever there is no pre-existing mask information.
Static camera scenes with and without film weave, minor camera drift, and camera shake: In scenes where there is minor camera motion caused by camera shake, or by the sprocket transfer of 35mm or 16mm film to a digital format, the moving objects are first masked completely using the techniques listed above. All frames in the scene are then processed automatically to create a single image that represents both the static foreground elements and the background elements, eliminating all masked moving objects both where they occlude and where they expose the background.
Wherever the masked moving objects expose the background or foreground in any frame, the previously occluded instances of the background and foreground are copied into the single image, with priority and with appropriate offsets to compensate for the camera motion. The offset information is included in a text file associated with each single representation of the background, so that the resulting mask information can be applied to each frame in the scene with the appropriate mask offsets.
The single background image representing the series of static camera frames is color designed using multiple color transform lookup tables limited only by the number of pixels in the display. Where motion elements continuously occlude background elements throughout the series of sequential frames, they are treated as occlusions that are ignored during masking. During the masking operation in colorization-only projects, these occluded objects are ignored, since the resulting background masks are applied, only where there is no pre-existing mask, to all frames used in the creation of the single background representation. If background information is missing for regions that are never exposed, this data is treated like any other background data propagated to the series of images from the composite background. This allows artifact-minimized or artifact-free two-dimensional to three-dimensional conversion, since objects never need to be stretched nor pixels extended: during the depth conversion process, image data believable to a human viewer is generated for the occluded areas as needed and then taken from those areas. Hence, for motion elements and computer-generated elements, realistic data is available for the areas behind those elements even where nothing was ever exposed. This allows the designer to include as much detail as desired, including airbrushing of mask information and other mask application techniques that provide maximum creative expression.
Once the background color design is complete, the mask information is automatically transferred to all of the frames used to create the single composite image. For depth projects, the distance from the camera to each item in the composite frame is automatically transferred to all of the frames used to create the single composite image. The masked background objects are moved horizontally, more or less, in the auxiliary viewpoint frames corresponding to each frame of the scene, thereby setting their perceived depths. This horizontal movement may utilize artist-generated data for the occluded portions; alternatively, in one or more embodiments of the invention, a color marker defining the regions with no image data for the second viewpoint may be utilized to allow a user to create the missing data, ensuring that no artifacts occur during the two-dimensional to three-dimensional conversion process. In embodiments of the invention, any known technique may be utilized to cover the missing data in a background, i.e., by allowing an artist to create a complete background for the regions flagged as having missing data (for example, shown in a particular color), such as by borrowing from another scene/frame, or to create smaller occluded areas for artist-drawn objects. After assigning depths to the objects in the composite background, or after importing depths associated with the background's computer-generated elements, a stereoscopic view of the movie can be produced by creating a second viewpoint image for each image in the scene, for example by horizontally translating the foreground objects for a second viewpoint, or alternatively by horizontally translating the foreground objects both left and right to create two viewpoints offset from the original viewpoint, with the original frame of the scene assigned, for example, to one of the right-eye or left-eye viewpoints.
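The horizontal translation step above can be sketched for one scanline as follows: pixels shift in proportion to their assigned parallax (nearer items shift more), positions newly exposed by the shift are left as gaps, and the gaps are filled from the synthesized background rather than by stretching. The data layout and names are this example's assumptions.

```python
def shift_scanline(colors, parallax, gap=None):
    """Build the second-eye scanline; 'gap' marks newly exposed pixels."""
    out = [gap] * len(colors)
    # paint far-to-near (smallest parallax first) so nearer items win overlaps
    for x in sorted(range(len(colors)), key=lambda i: parallax[i]):
        nx = x + parallax[x]          # horizontal parallax from assigned depth
        if 0 <= nx < len(out):
            out[nx] = colors[x]
    return out

def fill_gaps(scanline, background, gap=None):
    """Fill occlusion gaps from the synthesized background, never by stretching."""
    return [background[x] if v is gap else v for x, v in enumerate(scanline)]
```

The gap positions correspond to the color-marked missing-data regions described above; supplying a believable composite background for them is what keeps the conversion artifact-free.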
One or more tools employed by the system allow real-time editing of 3D images without re-rendering, for example to alter layers/colors/masks and/or to remove artifacts, by generating translation files that serve as portable, per-pixel editing files, thereby minimizing or eliminating iterative workflow paths back to different workgroups. For example, a mask group takes the source images and creates masks for the items, regions, or human-recognizable objects in each frame of the sequence of images making up a movie. A depth augmentation group applies depths, for example shapes, to the masks created by the mask group. When an image pair is rendered, left and right viewpoint images and left and right translation files may be generated by one or more embodiments of the invention. The left and right viewpoint images allow 3D viewing of the original 2D image. The translation files specify the pixel offsets for each source pixel in the original 2D image, for example in the form of UV or U maps. These files are generally related to the alpha masks for each layer, for example a layer for an actress, a layer for a door, a layer for the background, and so on. These translation files or maps are passed from the depth augmentation group, which renders the 3D images, to a quality assurance workgroup. This allows the quality assurance workgroup (or other workgroups, such as the depth augmentation group) to perform real-time editing of the 3D images without re-rendering, for example in order to alter layers/colors/masks and/or to remove artifacts such as masking errors without the processing time of such re-rendering, and without the delays associated with sending the masks back to the mask group for rework and with the re-rendering and/or iterative workflow that this entails, where the mask group may be located in a third-world country with non-skilled labor on the opposite side of the earth. In addition, when the left and right images, i.e., the 3D images, are rendered, the Z-depths of regions within the image, such as an actor for example, may also be transferred along with the alpha masks to the quality assurance group, which can then adjust depths without re-rendering with the original rendering software. This may be performed, for example, using the generated occluded background data from any layer to allow "downstream" real-time editing without re-rendering or ray tracing. Quality assurance may give feedback to individuals in the mask group or the depth augmentation group so that those individuals can be instructed to produce the desired work product for a given project, without waiting for, or requiring, any rework by the upstream groups on the current project. This allows feedback while eliminating the iterative delays involved in sending work product back for rework and the associated delays of waiting for the reworked product. The elimination of such iterations provides tremendous savings in the end-to-end time, or wall time, that a conversion project takes, thereby increasing profits and minimizing the labor required to implement the workflow.
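The role of a U translation map in this downstream editing can be illustrated as follows: each output pixel stores which source x coordinate it samples, so after a color or layer edit to the source, the rendered view is re-derived by a table lookup instead of a full re-render. The function name and data layout are assumptions of this example, not the patented file format.

```python
def apply_u_map(source_row, u_map):
    """u_map[x] holds the source x coordinate whose pixel lands at output x."""
    return [source_row[u] for u in u_map]

row = [10, 20, 30, 40]          # one scanline of the original 2D source layer
umap = [0, 0, 1, 3]             # per-pixel U translation map for the second eye
view = apply_u_map(row, umap)   # rendered second-eye scanline
row[1] = 99                     # a downstream color edit on the source layer
view = apply_u_map(row, umap)   # view regenerated without re-rendering
```

Because the map is data rather than renderer state, any workgroup holding it (e.g., quality assurance) can regenerate the edited view without the original rendering software.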
General workflow for the review stage
Regardless of the type of project work performed on a given asset, an interface coupled with the project management database is utilized, for example, to allow review of the work product on the asset. Users in the editorial role generally use this interface the most, followed by artists and stereographers, with the art director using it the least. Review notes and the image may be viewed simultaneously, for example using text with a clear surrounding background overlaid on the image or scene, allowing rapid review and feedback by a given worker having a specific role. Other improvements to the project management database include artist ratings and asset difficulties. These fields allow workers to be rated and expected costs to be forecast when bidding projects, which is unknown in the field of motion picture project planning.
General workflow for the archival and transport stage
Asset managers may delete and/or compress any assets that can be regenerated, which can save hundreds of terabytes of disk space for a typical motion picture project. This allows tremendous savings in disk drive hardware purchases and is unknown in the art.
One or more embodiments of the system may be implemented using a computer and a database coupled with the computer. Any computer architecture, including any number of computers coupled, for example, via a computer communication network, is in keeping with the spirit of the invention. The database coupled with the computer includes at least a project table, a shot table, a task table, and a timesheet table. The project table generally includes a project identifier and a project description related to the motion picture. The shot table generally includes a shot identifier and references a plurality of images with a start frame value and an end frame value, wherein the plurality of images is associated with the motion picture associated with the project. The shot table generally includes at least one shot having a status related to the progress of the work performed on the shot. The task table generally references the project using the project identifier that is also located in the project table. The task table generally includes at least one task, which generally includes a task identifier and an assigned person, such as an artist, and may also include a context setting associated with a task type related to work on the motion picture selected from region design, setup, motion, compositing, and review. The at least one task generally includes a time allocated to complete the at least one task. The timesheet table generally references the project identifier in the project table and the task identifier in the task table. The timesheet table generally includes at least one timesheet entry comprising a start time and an end time. In one or more embodiments of the invention, the computer is configured to present a first display configured to be viewed by an artist, where this first display includes at least one daily assignment having a context, project, and shot, a status input configured to update the status in the task table, and timer inputs configured to update the start time and end time in the timesheet entries of the timesheet table. The computer is generally configured to present a second display, viewed by a production coordinator or "production" worker (i.e., producer), that includes a search display having context, project, shot, status, and artist, wherein the second display further includes a list of a plurality of artists and actuals, i.e., a status corresponding to the time spent on the at least one timesheet entry relative to the time allocated for the at least one task associated with the at least one shot. The computer is generally also configured to present a third display configured to be viewed by an editor, which includes an annotation box configured to accept a note or a drawing, or both a note and a drawing, regarding at least one image of the plurality of images associated with the at least one shot. One or more embodiments of the computer may be configured to provide the third display configured to be viewed by an editor with the annotations overlaid on at least one of the plurality of images. This capability provides, on one display, information that typically requires three workers to integrate in known systems, and is novel in its own right.
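A minimal sketch of the four core tables just named (project, shot, task, timesheet) and their cross-references is shown below using an in-memory SQLite database. The column names are illustrative assumptions; the text above does not fix an exact schema.

```python
import sqlite3

schema = """
CREATE TABLE project   (project_id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE shot      (shot_id INTEGER PRIMARY KEY,
                        project_id INTEGER REFERENCES project(project_id),
                        start_frame INTEGER, end_frame INTEGER, status TEXT);
CREATE TABLE task      (task_id INTEGER PRIMARY KEY,
                        project_id INTEGER REFERENCES project(project_id),
                        artist TEXT, context TEXT,   -- e.g. region design, motion
                        allocated_hours REAL);
CREATE TABLE timesheet (entry_id INTEGER PRIMARY KEY,
                        project_id INTEGER REFERENCES project(project_id),
                        task_id INTEGER REFERENCES task(task_id),
                        start_time TEXT, end_time TEXT);
"""

con = sqlite3.connect(":memory:")
con.executescript(schema)
con.execute("INSERT INTO project VALUES (1, 'Feature depth conversion')")
con.execute("INSERT INTO shot VALUES (7, 1, 100, 148, 'in progress')")
```

The timesheet rows referencing both project and task are what make the "actuals" comparisons of time spent versus time allocated a simple join.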
Embodiments of the database may also include a snapshot table that includes a snapshot identifier and a search type and that includes a snapshot of the at least one shot, for example including a subset of the at least one shot, wherein the snapshot is cached on the computer to reduce accesses to the shot table. Embodiments may also include other context settings for other types of task categories, for example source- and cleanup-related tasks. Any other context settings or values related to motion picture work that are in keeping with the spirit of the invention may also be included. Embodiments of the database may also include an asset request table that includes an asset request identifier and a shot identifier, which may be utilized to request work on an asset, or to request, for example, that other workers work on or create the asset itself. Embodiments of the database may also include a request table that includes a mask request identifier and a shot identifier and that may be utilized to request any type of action, for example from another worker. Embodiments of the database may also include a notes table that includes a notes identifier, references the project identifier, and includes at least one note related to at least one of the plurality of images from the motion picture. Embodiments of the database may also include a delivery table comprising a delivery identifier, which references the project identifier and includes information related to the delivery of the motion picture.
One or more embodiments of the computer are configured to accept a rating input from production or editorial for the work performed by an artist, optionally in a blind-review manner in which the reviewer does not know the identity of the artist, for example to prevent favoritism. One or more embodiments of the computer are configured to accept a difficulty for the at least one shot, and to calculate a rating based on the time spent on the work performed by the artist and on the difficulty of the shot. One or more embodiments of the computer are configured to accept a rating input from production or editorial (i.e., an editor) for the work performed by an artist, or to accept a difficulty for the at least one shot and calculate a rating based on the time spent on the work performed by the artist and on the difficulty of the shot, and to display an incentive for the artist based on the rating accepted or calculated by the computer. One or more embodiments of the computer are configured to estimate a remaining cost based on actuals, where the actuals are based on the total time spent on all of the tasks associated with all of the shots in the project relative to the time allocated for all of those tasks. One or more embodiments of the computer are configured to compare the actuals associated with a first project with the actuals associated with a second project, and to display at least one worker to move from the first project to the second project based on the rating of at least one worker assigned to the first project. One or more embodiments of the computer are configured to analyze a prospective project having a certain number of shots, estimate a difficulty of each shot, and calculate a predicted cost for the prospective project based on the actuals associated with a project. One or more embodiments of the computer are configured to analyze a prospective project having a certain number of shots and estimate a difficulty of each shot, and, based on the actuals associated with a previously performed first project and with a previously performed second project completed after the first project, to calculate a derivative of the actuals and to use the derivative of the actuals to calculate the predicted cost for the prospective project. For example, as processes improve, tools improve, and workers improve, work efficiency increases; by calculating the relationship of efficiency over time, the budgeting and bidding processes can take this into account and utilize the rate of change to forecast the cost of a prospective project. One or more embodiments of the computer are configured to analyze the actuals associated with the project and to divide the number of completed shots by the total number of shots associated with the project to provide a completion date for the project. One or more embodiments of the computer are configured to analyze the actuals associated with the project, divide the number of completed shots by the total number of shots associated with the project to provide a completion date for the project, accept an input of at least one additional artist having a rating, accept a number of shots on which the additional artist is to be utilized, calculate an amount of time based on the at least one additional artist and the number of shots, subtract this amount of time from the completion date of the project, and provide an updated completion date for the project. One or more embodiments of the computer are configured to calculate the amount of disk space that may be utilized to archive the project, and to display at least one asset that can be rebuilt from other assets in order to avoid archiving that at least one asset. One or more embodiments of the computer are configured to display an error message if an artist is working on a frame number that is not currently in the at least one shot. This may occur, for example, when a fade-in, fade-out, or other effect lengthens a particular shot, such that the shot includes frames that are not in the original source assets.
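The progress arithmetic described above can be sketched in a few lines: the burn ratio of hours spent to hours allocated scales the remaining allocation into a remaining cost, and the observed shots-per-day pace extends to a completion estimate. All numbers, names, and the specific formulas are illustrative assumptions of this example.

```python
def remaining_cost(allocated_remaining, hours_spent, hours_allocated_so_far,
                   hourly_rate):
    """Scale the remaining allocation by the observed burn ratio (the actuals)."""
    return allocated_remaining * hours_spent / hours_allocated_so_far * hourly_rate

def updated_completion_days(shots_done, shots_total, days_elapsed):
    """Days remaining, assuming the observed shots-per-day pace holds."""
    pace = shots_done / days_elapsed
    return (shots_total - shots_done) / pace
```

A derivative-based forecast, as the text suggests, would additionally fit the change in burn ratio across successive completed projects rather than using a single ratio.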
Brief description of the drawings
Fig. 1 shows a plurality of feature film or television film frames representing a scene or cut in which there is a single instance or perception of the background.
Fig. 2 shows an isolated background processed from the plurality of frames shown in Fig. 1, in which all motion elements have been removed using various subtraction and differencing techniques. The single background image is then used to create a background mask overlay representing designer-selected color lookup tables, in which dynamic pixel colors automatically compensate for, or adjust to, moving shadows and other changes in luminance.
Fig. 3 shows a representative sample of each motion object (M-object) in the scene receiving a mask overlay representing designer-selected color lookup tables, in which dynamic pixel colors automatically compensate for the motion of the object within the scene, or adjust to moving shadows and other changes in luminance.
Fig. 4 shows all of the masked elements of the scene subsequently rendered to create a fully colored frame, in which the M-object masks are applied to each appropriate frame in the scene, followed by the background mask applied in a Boolean manner only where there is no pre-existing mask.
Figs. 5A and 5B show a series of sequential frames loaded into display memory, in which one frame is fully masked with the background (key frame) and ready for mask propagation to the subsequent frames via automatic mask fitting methods.
Figs. 6A and 6B show an enlarged and scalable single-image sub-window of the sequential images in display memory. This sub-window enables the operator to manipulate masks interactively on a single frame or over multiple frames, in real time or during slowed motion.
Figs. 7A and 7B show a single mask (a human figure) automatically propagated to all frames in display memory.
Fig. 8 shows all of the masks associated with a motion object propagated to all of the sequential frames in display memory.
Fig. 9A shows a picture of a face.
Fig. 9B shows the face of Fig. 9A with features highlighted, where the "small dark" pixels shown in Fig. 9B are used to calculate a weighted index using bilinear interpolation.
Figures 10A-D show a best fit on the search error surface: the error surface calculation in the gradient descent search method involves the mean squared difference, between the reference image frame and the corresponding (offset) position (x, y) in the search image frame, of the pixels within a fit box centered on the reference image pixel (x0, y0).
Figures 11A-C show a second search box derived from a descent along the (separately evaluated) error surface gradient, for which the evaluated error function is reduced relative to the original reference frame, and is possibly minimized (as is clearly visible from a visual comparison of these frames with the reference box of Figures 10A-D).
Figure 12 depicts the evaluation of the gradient components. The error surface gradient is calculated per the definition of the gradient: vertical and horizontal error deviations are evaluated at four positions near the center position of the search box and are combined to provide an estimate of the error gradient at that position.
Figure 13 shows the propagated masks in a first-order example, in which there is little discrepancy between the underlying image data and the mask data. It can be seen that the dress mask and the hand mask are close fits with respect to the image data.
Figure 14 shows the mask data adjusted to fit the underlying image data by using the automatic mask fitting routine with reference to the underlying image data of the prior image.
Figure 15 shows that the mask data in a later image of the sequence exhibits significant discrepancies relative to the underlying image data. The image data for the eye makeup, lipstick, blush, hair, face, dress, and hand have all shifted relative to the mask data.
Figure 16 shows the mask data automatically adjusted to the underlying image data based on the previous mask and the underlying image data.
Figure 17 shows the mask data of Figure 16 displayed with appropriate color transforms after automatic mask fitting over the entire frame. The mask data has been adjusted to fit the underlying luminance patterns based on the data from the previous frame or from the initial key frame.
Figure 18 shows a polygon used to outline a region of interest for masking in frame 1. The square polygon points snap to the edges of the object of interest. Where Bezier curves are used, the Bezier points snap to the object of interest and the control points/curves conform to the edges.
Figure 19 shows that whole polygon or Bezier are taken to the last frame select in display-memory, wherein
Operator adjusts polygon point using alignment (snap) function that point and curve automatically snap to the edge of interesting target
Or Bezier point and curve.
Figure 20 shows in the case of there is the interactive regulation of operator, if the point in the frame between two frames and curve
Between there is significant difference, then operator is adjusted the frame of the presence maximum error of fitting in the middle of described multiframe further.
Figure 21 shows when determining polygon or Bezier correctly animation between the frame of two regulations, will
Suitable mask is applied to all frames.
Figure 22 shows the mask that polygon or the Bezier animation from point and curve automatic aligning to edge obtains.Palm fibre
Color mask is colour switching, and green mask is any see-through mask.
Figure 23 shows an example of two-pass blending. The purpose of two-pass blending is to eliminate a moving object from the final blended mosaic. This can be done by first blending the frames so that the moving object is completely removed from the left side of the background mosaic. As shown in Figure 23, the person is removed from the scene but can still be seen on the right side of the background mosaic.
Figure 24 shows the second blending pass. A second background mosaic is now generated, using a blend position and width such that the moving object is removed from the right side of the final background mosaic. As shown in Figure 24, the person is removed from the scene but can still be seen on the left side of the background mosaic; in this second pass, the moving person appears on the left.
Figure 25 shows the final background corresponding to Figures 23-24. The two blending passes are used to generate the final blended background mosaic with the moving object removed from the scene. As shown in Figure 25, the moving person has been removed from the final blended background.
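The two-pass idea can be illustrated with a toy sketch. This is a heavy simplification under stated assumptions — the frames here are already aligned to a common background and the object's column span is known exactly, whereas the actual process blends pan mosaics; all names are hypothetical. Each pass composites every column from a frame in which the moving object does not cover it, scanning the frames from one end or the other:

```python
import numpy as np

def pass_blend(frames, spans, from_left=True):
    """One toy blending pass. `frames` are 2-D arrays of the same
    static background, each with a moving object pasted over the
    column span spans[i] = (lo, hi). Every output column is taken
    from a frame in which the object does not cover that column.
    Scanning the frames forward corresponds to the pass that clears
    the object from one side of the mosaic; scanning backward
    corresponds to the pass that clears it from the other side."""
    order = list(range(len(frames)))
    if not from_left:
        order.reverse()
    out = frames[0].copy()
    width = frames[0].shape[1]
    for col in range(width):
        for i in order:
            lo, hi = spans[i]
            if not (lo <= col < hi):   # object absent from this column
                out[:, col] = frames[i][:, col]
                break
    return out
```

Combining the two passes (taking each region from whichever pass saw clean background there) yields a final mosaic with the mover eliminated, mirroring Figures 23-25.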
Figure 26 shows the edit frame-pair window.
Figure 27 shows successive frames, representing a camera pan, loaded into memory. The moving object (the butler moving left toward the door) is masked with a series of color transform information, leaving a black-and-white background to which no mask or color transform information has been applied.
Figure 28 shows six representative successive frames of the pan above, shown for clarity.
Figure 29 shows the composite or montage image of the entire camera pan, built using phase coherent techniques. The moving object (the butler) is included as a transparent positive for reference, preserved from the first and last frames and from averages phase-correlated in both directions. Color design is performed on the single montage representation of the pan using the same color-transform masking techniques used for foreground objects.
Figure 30 shows the frame sequence in the camera pan after the background mask color transforms of the montage have been applied to each frame used to create the montage. The masks are applied only where no pre-existing mask is present, so that the background information is applied with the appropriate offsets while the moving-object masks and color transform information are preserved.
Figure 31 shows, for clarity, a selection of frames in the pan after the color background masks have been automatically applied to frames having no pre-existing mask.
Figure 32 shows a frame sequence in which all moving objects (actors) are masked using a single color transform.
Figure 33 shows, for clarity, a selection of frames before the background mask information is applied. All moving elements are fully masked using the automatic mask-fitting algorithm.
Figure 34 shows the static background and foreground information with the previously masked moving objects subtracted. In this case, the single representation of the complete background is masked with color transforms in a manner similar to the moving objects. It should be noted that the outlines of the removed foreground objects appear truncated and unrecognizable because of their motion across the frame interval: the black objects in the frame represent the regions of background and foreground that the moving objects (actors) never expose. In colorization-only projects, the black objects are ignored during the masking operation, because the background masks obtained later are applied, using the single representation of the background, to all frames only where there is no pre-existing mask. In depth conversion projects, the missing data regions can be displayed so that image data can be obtained or generated for them, to provide visually plausible image data when foreground objects are translated horizontally in generating the second viewpoint.
Figure 35 shows successive frames in a still-camera scene cut after the background mask information has been applied to each frame with the appropriate offsets, where no pre-existing mask information is present.
Figure 36 shows a representative sample of frames from the still-camera scene cut after the background information has been applied with the appropriate offsets, where no pre-existing mask information is present.
Figures 37A-C show an embodiment of the mask-fitting function, including computing a fit grid and interpolating the mask on the fit grid; Figure 37A shows the overall mask-fitting process flowchart, Figure 37B shows the flowchart for computing the fit grid, and Figure 37C shows the flowchart for interpolating the mask on the fit grid.
Figures 38A-B show an embodiment of the extract-background function, where Figure 38A shows the processing flow for extracting the background and Figure 38B shows the processing flow for compositing the static background.
Figures 39A-C show an embodiment of the snap-point function, where Figure 39A shows the processing flow for snapping Bezier/polygon points, Figure 39B shows the processing flow for defining the edge image to be searched, and Figure 39C shows the processing flow for finding a snap point.
Figures 40A-C show an embodiment of the bimodal threshold masking function, where Figure 40A shows the processing flow of the relative bimodal threshold tool, Figure 40B corresponds to step 2.1 in Figure 40A, "create light/dark cursor shape image", and Figure 40C corresponds to step 2.2 in Figure 40A, "apply light/dark shape to mask".
Figures 41A-B show an embodiment of the compute-fit-value function, where Figure 41A shows the processing flow for computing the fit gradient and Figure 41B shows the processing flow for computing the fit value.
Figure 42 shows two image frames, separated in time by several frames, of a person floating a crystal ball; the different objects in each of these image frames are converted from two-dimensional objects into three-dimensional objects.
Figure 43 shows the masking of a first object in the first image frame of a two-dimensional image being converted into a three-dimensional image.
Figure 44 shows the masking of a second object in the first image frame.
Figure 45 shows two see-through masks that allow the first image frame to be viewed along with part of the mask sequence.
Figure 46 shows the masking of a third object in the first image frame.
Figure 47 shows three see-through masks that allow the first image frame to be viewed along with part of the mask sequence.
Figure 48 shows the masking of a fourth object in the first image frame.
Figure 49 shows the masking of a fifth object in the first image frame.
Figure 50 shows the control panel for creating three-dimensional images, including the association of masks and three-dimensional objects with layers and image frames; it specifically shows the creation of a plane layer for the sleeve of the person in the image.
Figure 51 shows a three-dimensional view of each of the different masks shown in Figures 43-49, in which the mask associated with the person's sleeve is shown at the right of the page as a plane layer rotated toward the left and right viewpoints.
Figure 52 shows a slightly rotated view of Figure 51.
Figure 53 shows a further slightly rotated view of Figure 51.
Figure 54 illustrates the control panel, specifically showing the creation of a sphere object for the crystal ball in front of the person in the image.
Figure 55 shows the flat mask of the crystal ball applied to the sphere object, shown within the sphere and projected onto the front and back of the sphere, to illustrate the depth assigned to the crystal ball.
Figure 56 shows a top view of the three-dimensional representation of the first image frame, illustrating the Z dimension assigned to the crystal ball and showing that the crystal ball is in front of the person in the scene.
Figure 57 shows the sleeve plane rotated in the X-axis so that the sleeve appears to come further out of the image.
Figure 58 shows the control panel, specifically showing the creation of a head object to be applied to the face in the image, to give the face a realistic depth without requiring, for example, a wireframe model.
Figure 59 shows the head object in the three-dimensional view; it is too large and is not aligned with the actual head.
Figure 60 shows the head object in the three-dimensional view, resized to fit the face and adjusted, for example translated, to the position of the actual head.
Figure 61 shows the head object in the three-dimensional view, with the Y-axis rotation illustrated by a circle; the Y-axis has the person's head as its origin, allowing the head object to be rotated correctly to match the orientation of the face.
Figure 62 shows the head object also rotated slightly clockwise around the Z-axis to match the slight tilt of the person's head.
Figure 63 shows the masks propagated into the second and final image frame.
Figure 64 shows the original position of the mask corresponding to the person's hand.
Figure 65 shows the reshaping of the mask, which can be performed automatically and/or manually; any intermediate frames obtain interpolated depth information between the first image frame mask and the second image frame mask.
Figure 66 shows the missing information for the left viewpoint, highlighted in color on the left side of the masked object, when the foreground object (here the crystal ball) moves to the right in a subsequent image.
Figure 67 shows the missing information for the right viewpoint, highlighted in color on the right side of the masked object, when the foreground object (here the crystal ball) moves to the left in a subsequent image.
Figure 68 shows the final depth-enhanced anaglyph image of the first image frame, viewable with red/blue 3-D glasses.
Figure 69 shows the final depth-enhanced anaglyph image of the second and last image frame, viewable with red/blue 3-D glasses; note the rotation of the person's head, the motion of the hand, and the motion of the crystal ball.
Figure 70 shows the right side of the crystal ball "smeared" with a fill pattern where information is missing for the left viewpoint: the pixels on the right side of the crystal ball are taken from the right edge of the missing image pixels and "smeared" horizontally to cover the missing information.
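The horizontal "smearing" fill can be sketched on a single scanline as follows. This is a simplified stand-in for the gap-fill technique, with hypothetical names; it copies the known pixel at the right edge of each missing run across the gap:

```python
import numpy as np

def smear_fill(row, missing):
    """Fill missing pixels in a scanline by horizontally 'smearing'
    the first known pixel found to the right of each gap (i.e. the
    pixel at the right edge of the missing region) across the gap."""
    out = row.copy()
    next_known = None
    for i in range(len(row) - 1, -1, -1):   # sweep right to left
        if not missing[i]:
            next_known = out[i]
        elif next_known is not None:
            out[i] = next_known
    return out
```

Applied row by row, this reproduces the stretch/smear effect used where no actual or generated background data is available for an occluded region.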
Figure 71 shows the mask, or alpha plane, for the upper body and head of an actor (and transparent wings). The mask can include opaque regions, shown in black, and transparent regions, shown as gray areas.
Figure 72 shows the occluded region corresponding to the actor of Figure 71; it shows the region of the background that is never exposed in any frame of the scene. This can be, for example, a synthesized background.
Figure 73 shows an occluded region artistically rendered to generate a complete and realistic background for use in two-dimensional to three-dimensional conversion, allowing an artifact-free conversion.
Figure 73A shows an occluded region partially drawn, or otherwise rendered, to generate a background realistic enough to be used with minimal artifacts in two-dimensional to three-dimensional conversion.
Figure 74 shows a light area at the shoulder on the right side of Figure 71, representing the gap where stretching (also shown in Figure 70) is used when the foreground object is moved to the left to create the right viewpoint. The dark portion of the figure is available background taken from data in at least one frame of the scene.
Figure 75 shows an example of the stretching (smearing) of pixels corresponding to the light area in Figure 74, used where a generated background is not available for the regions occluded in all frames of the scene.
Figure 76 shows the result of a right viewpoint with no artifacts at the edge of the person's shoulder, where the dark areas comprise pixels available in one or more frames of the scene, and generated data is used for the regions that are always occluded in the scene.
Figure 77 shows an example of a computer-generated element (here a robot) that is modeled in three dimensions and projected as a two-dimensional image. If metadata such as alpha, masks, depth, or any combination thereof exists, that metadata can be used to accelerate the conversion process from a two-dimensional image to a pair of two-dimensional images for the left and right eyes for three-dimensional viewing.
Figure 78 shows the original image, separated into background and foreground elements (the mountains and sky in the background and the soldier at the lower left; see also Figure 79), together with the imported color and depth of the computer-generated element (the robot, whose depth is configured automatically via the imported depth metadata). As shown in the background, any regions that are occluded throughout the scene can be artistically rendered, for example to provide believable missing data, as shown in Figure 73, which is based on the missing data of Figure 73A; this leads to artifact-free edges, for example as shown in Figure 76.
Figure 79 shows the masks associated with the different parts of the soldier in the photographic foreground, applied in front of the depth of the computer-generated element (i.e., the robot). The dotted lines extending horizontally from the mask areas show the horizontal translation of the foreground object that will occur. The figure also illustrates that when metadata exists for other elements of the movie, for example when alpha exists for an object appearing in front of the computer-generated element, the imported metadata can be used to automatically correct the masks precisely where depth or color has been painted beyond the object. One type of file that can be used to obtain mask edge data is a file with alpha and/or mask data, for example an RGBA file.
Figure 80 shows an imported alpha layer that can also be used as a mask layer to constrain the operator-defined, and possibly less accurate, masks applied to the edges of the three soldiers A, B, and C. In addition, a computer-generated dust element can be inserted into the scene along the line labeled "dust" to increase the realism of the scene.
Figure 81 shows the result when the operator-defined masks are not adjusted where moving elements such as the soldiers overlap computer-generated elements such as the robot. Applying the alpha metadata of Figure 80 to the operator-defined mask edges of Figure 79 allows artifact-free edges in the overlap regions.
Figure 82 shows a source image that is depth enhanced and provided together with left and right translation files and alpha masks, so that downstream workgroups can perform real-time editing of the 3D images without re-rendering, for example to change layers/colors/masks and/or remove and/or adjust depths, without an iterative workflow path back to the original workgroup.
Figure 83 shows masks generated by the mask workgroup, to which the depth augmentation group applies depth; the masks are associated with objects, such as human-recognizable objects, in a source image such as that of Figure 82.
Figure 84 shows the regions in which depth is applied, generally darker for closer objects and lighter for more distant objects.
Figure 85A shows the left UV map, which contains the horizontal translation, or offset, for each source pixel.
Figure 85B shows the right UV map, which contains the horizontal translation, or offset, for each source pixel.
Figure 85C shows a black-level-shifted portion of the left UV map of Figure 85A, making the small differences in it visible.
Figure 85D shows a black-level-shifted portion of the right UV map of Figure 85B, making the small differences in it visible.
Figure 86A shows the left U map, which contains the horizontal translation, or offset, for each source pixel.
Figure 86B shows the right U map, which contains the horizontal translation, or offset, for each source pixel.
Figure 86C shows a black-level-shifted portion of the left U map of Figure 86A, making the small differences in it visible.
Figure 86D shows a black-level-shifted portion of the right U map of Figure 86B, making the small differences in it visible.
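A rough sketch of how a per-pixel horizontal translation (U) map can be applied to a source image to render one eye's view follows. This is an illustrative assumption about the mechanics, not the patent's implementation: offsets are given directly in pixels, sampling uses nearest-neighbor with edge clamping, and all names are hypothetical.

```python
import numpy as np

def apply_u_map(source, u_map):
    """Render one eye's view: each output pixel samples the source at
    the horizontally offset position given by the U map (offsets in
    pixels). Real UV translation maps encode such offsets as image
    values; this toy uses them directly."""
    h, w = source.shape
    out = np.empty_like(source)
    for y in range(h):
        for x in range(w):
            sx = int(round(x + u_map[y, x]))
            sx = min(max(sx, 0), w - 1)     # clamp at image borders
            out[y, x] = source[y, sx]
    return out
```

Because the left and right maps fully describe the two translations, downstream groups can re-render either eye from the source plus its map alone — which is what makes the real-time, no-re-render editing described for Figures 82 and 94 possible. A disparity map, as in Figure 88, is essentially the difference between the left and right maps.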
Figure 87 shows a known application of UV maps, in which a three-dimensional model is unwrapped so that an image in UV space can be painted onto the 3D model using the UV map.
Figure 88 shows a disparity map illustrating the regions of greatest difference between the left and right translation maps.
Figure 89 shows the left-eye rendering of the source image of Figure 82.
Figure 90 shows the right-eye rendering of the source image of Figure 82.
Figure 91 shows the anaglyph of the images of Figures 89 and 90, for use with red/blue glasses.
Figure 92 shows an image that has been masked and is in the process of depth enhancement for each of its different layers.
Figure 93 shows a UV map overlaid on the alpha mask associated with the actress shown in Figure 92, with the translation offsets in the left and right UV maps set based on the depth settings of the different pixels in the alpha mask.
Figure 94 shows a workspace, generated by a depth enhancement program or a compositing program, for the different layers shown in Figure 92, i.e., left and right UV translation maps for each alpha. This workspace allows quality assurance personnel (or other workgroups) to perform real-time editing of the 3D images without re-rendering, for example to change layers/colors/masks and/or remove artifacts or otherwise adjust masks, and thereby change the 3D image pair (or anaglyph), without iteratively sending fixes to any other workgroup.
Figure 95 shows an iterative correction workflow.
Figure 96 shows an embodiment of the workflow enabled by one or more embodiments of the system, in which each workgroup can perform real-time editing of the 3D images without re-rendering, for example to change layers/colors/masks, remove artifacts, and otherwise correct the work product of another workgroup, without the iterative delays associated with re-rendering/ray-tracing or with sending the work product back through the workflow for correction.
Figure 97 illustrates an architectural view of an embodiment of the invention.
Figure 98 illustrates an annotated view of the session manager window used to define the images to work on or to assign work.
Figure 99 illustrates a view of the production display, which shows projects, shots, and the tasks related to the selected shot, together with the status of each task context associated with the shot.
Figure 100 illustrates, for each task context associated with a particular shot, a view of the actuals associated with that shot in a project, where tasks under bid are shown in a first manner, tasks within a predefined percentage of the bid amount are shown in a second manner, and tasks over bid are shown in a third manner.
Figure 101 illustrates the disk drive space spent and the amount of disk space and cost that can be saved, for example by deleting files that can be rebuilt from other files after the project is complete.
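The three display styles of Figure 100 amount to classifying a task's actuals against its bid. A minimal sketch follows; the 10% threshold and the function name are arbitrary assumptions, since the patent only says "a predefined percentage of the bid amount":

```python
def bid_status(actual_hours, bid_hours, tolerance=0.10):
    """Classify a task's actuals against its bid, mirroring the three
    display styles: under bid, within a predefined percentage of the
    bid amount, or over bid. The tolerance value is an assumption."""
    if actual_hours <= bid_hours * (1 - tolerance):
        return "under"
    if actual_hours <= bid_hours * (1 + tolerance):
        return "within"
    return "over"
```

A production display could map these three return values to, for example, green, yellow, and red rendering of the task row.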
Figure 102 illustrates a view of the artist display, showing the task context, project, shot, status, tool, start-time button, check-in input, render input, internal shot review input, meals, start/time/stop, review, and submit inputs.
Figure 103 illustrates an annotated view of the menu bar of the artist display.
Figure 104 illustrates an annotated view of the task rows of the artist display.
Figure 105 illustrates an annotated view of the main portion of the user interface of the artist display.
Figure 106 illustrates the build timeline display of the artist display, used to create a timeline to work on.
Figure 107 illustrates the browse snapshot display of the artist display, which enables an artist to view a snapshot of a shot, or otherwise cache important information related to the shot, so that the database need not be queried live for frequently used data.
Figure 108 illustrates the artist actuals window, which shows the actuals for a task, such as the time spent relative to the time allotted for the task, with a drop-down menu for specific timesheets.
Figure 109 illustrates the notes display of the artist display, which enables an artist to enter notes related to a shot.
Figure 110 illustrates the check-in display of the artist display, which allows work to be checked in after work on a shot is complete.
Figure 111 illustrates a view of the editorial display, which shows projects, filter inputs, timeline inputs, and search results; the main portion of the window shows shots together with their work contexts and assignees.
Figure 112 illustrates a view of the session manager display of the editorial display, used to select shots for review.
Figure 113 illustrates a view of the advanced search display of the editorial display.
Figure 114 illustrates a view of the simple search display of the editorial display.
Figure 115 illustrates a view of the review pane for a shot, which also shows integrated notes and/or snapshot information in the same frame.
Figure 116 illustrates a view of timeline selection for review and/or check-in after modification.
Figure 117 illustrates annotations for feedback added to a frame using the annotation tool.
Specific embodiment
Figure 97 illustrates an architectural view of an embodiment of the invention. One or more embodiments of the system include a computer 9702 and a database 9701 coupled to the computer 9702. Any computer architecture, for example one having any number of computers coupled via a computer communication network, is in keeping with the spirit of the invention. The database 9701 coupled to the computer 9702 includes at least a project table, a shot table, a task table, and a timesheet table. The project table generally includes a project identifier and a description of the project related to the motion picture. The shot table generally includes a shot identifier and references a plurality of images having a start frame value and an end frame value, where the plurality of images is associated with the motion picture associated with the project. The shot table generally includes at least one shot having a status related to the progress of the work performed on the shot. The task table generally references the project using the project identifier also found in the project table. The task table generally includes at least one task, which generally includes a task identifier and an assignee, such as an artist, and may also include a context setting associated with a task type of motion-picture-related work, for example selected from region design, setup, motion, compositing, and review (or any other set of motion-picture-related task types). The context setting may also imply, or carry, a default workflow, so that, for example, region design flows into depth and then into compositing. This enables the system to assign the next task type, or the context in which work on a shot is allowed to execute. This flow may be linear, or may iterate, for example for rework. The at least one task generally includes a time allotted for its completion. The timesheet item table generally references the project identifier in the project table and the task identifier in the task table. The timesheet generally contains at least one timesheet item having a start time and an end time. In one or more embodiments, the completion of a task may set the context of the task to the next task in the workflow sequence, and the system may automatically notify the next worker in the workflow based on the next work context to be executed; workers may work under different contexts as previously described. In one or more embodiments, a context may have sub-contexts: depending on the workflow desired for the particular type of work performed on the motion picture project, region design may for example be broken into masking and outsourced masking, and depth may be broken into key frame and motion contexts.
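The default workflow and sub-context structure described above might be represented as follows. This is an illustrative sketch only — the specific context names, the strictly linear ordering, and all identifiers are assumptions for the example:

```python
# Hypothetical default workflow implied by the context settings:
# region design flows into depth, which flows into compositing.
WORKFLOW = ["region_design", "depth", "compositing", "review"]

# Example sub-contexts: region design broken into masking and
# outsourced masking, depth into key frame and motion.
SUB_CONTEXTS = {
    "region_design": ["masking", "outsourced_masking"],
    "depth": ["key_frame", "motion"],
}

def next_context(current):
    """Return the context following `current` in the default linear
    workflow, or None when the workflow is complete. A real system
    could also step backward for iterative rework."""
    i = WORKFLOW.index(current)
    return WORKFLOW[i + 1] if i + 1 < len(WORKFLOW) else None
```

On task completion, a system built this way would look up `next_context` for the finished task and notify the worker assigned to that next context.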
Embodiments of the database may also include a snapshot table, which includes a snapshot identifier and a search type and which includes a snapshot of the at least one shot, for example a subset of the at least one shot, where the snapshot is cached on the computer to reduce accesses to the shot table. Other embodiments of the snapshot table track resources on the network, store information about the resources, and track version management of the resources. Embodiments may also include other context settings for other task categories, for example source- and cleanup-related tasks. Any other context settings or values related to motion picture work that are in keeping with the spirit of the invention may also be included. Embodiments of the database may also include an asset request table, which includes an asset request identifier and a shot identifier and which may be used to request work on an asset, for example by other workers, or to request creation of the asset itself. Embodiments of the database may also include a request table, which includes a mask request identifier and a shot identifier and which may be used to request any type of action, for example of another worker. Embodiments of the database may also include a notes table, which includes a notes identifier, references a project identifier, and includes at least one note related to at least one of the plurality of images from the motion picture. Embodiments of the database may also include a delivery table containing a delivery identifier, referencing a project identifier, and including information related to deliveries of the motion picture.
One or more embodiments of the database may use the following schema in particular, or any other schema that can be programmed in computer 9702 and that supports the functionality of the invention as described in any combination or sub-combination below, so long as motion picture project management can be carried out as detailed herein, or the motion picture project can be better managed in any other way using the exemplary specifications here.
Project table
Unique project identifier, project code (text name), motion picture title, project type (trial or for-hire), most recent database update date and time, status (retired or active), most recent database version update, project type (colorization, effects, 2D->3D conversion, feature film, catalog), lead, review drive (where review shots are stored).
Task table
Unique task identifier, assignee, description (what is to be done), status (accepted, waiting, complete, returned, approved), bid start date, bid end date, bid duration, actual start date, actual end date, priority, context (stacking, assets, motion, motion visual effects, outsourcing, cleanup, alpha, compositing, masking, clean plate, setup, key frame, quality control), the project code from the project table, supervisor or production or editorial, time spent per process.
Snapshot table
Unique snapshot identifier, search type (which project was searched), description (notes related to the shot), login (the worker associated with the snapshot), timestamp, context (setup, cleanup, motion, compositing, ...), version of the snapshot of the shot, snapshot type (directory, file, information, review), project code, review sequence data (where the data is stored on the network), asset name (alpha, mask, ...), snapshots used (codes of the other snapshots used to make this snapshot), check-in path (path to the place from which the data was checked in), tool version, review date, archived, rebuildable (true or false), source deleted, source deletion date, source deletion login.
Notes table
Unique notes identifier, project code, search type, search id, login (worker id), context (compositing, review, motion, editorial, ...), timestamp, notes (text describing the notes associated with the image set defined by the search).
Delivery table
Unique delivery identifier, login (of the worker), timestamp, status (whether retired), delivery method (how it is delivered), description (what kind of media is used for the TK project), returns (true or false), drive (serial number of the drive), case (serial number of the case), due date, project identifier, client (text name of the client), producer (name of the producer).
Delivery item table
Unique delivery item identifier, timestamp, delivery code, project identifier, file path (where the delivery items are stored).
Timesheet table
Unique timesheet identifier, login (worker), timestamp, total time, timesheet approval, start time, end time, meal 1 (half-hour break start time), meal 2 (half-hour break start time), status (pending or approved).
Timesheet item table
Unique timesheet item identifier, login (worker), timestamp, context (region design, compositing, rendering, motion, management, mask cleanup, training, cleanup, administration), project identifier, timesheet identifier, start time, end time, status (pending or approved), approval (worker), task identifier.
Sequence table
Unique sequence identifier, login (the worker defining the sequence), timestamp, shot order (which makes up the sequence).
Shot table
Unique shot identifier, login (the worker defining the shot), timestamp, shot status (in progress, final, final client approved), client status (compositing in progress, depth client review, compositing client review, final), description of the shot (text, such as "2 airplanes fly near each other"), first frame number, last frame number, number of frames, assignee, region design, depth target date, assigned depth worker, compositing supervisor, compositing lead, compositing target date.
Asset request table
Unique asset identifier, timestamp, assigned asset worker, status (pending or resolved), shot identifier, problem description, production worker, assigned lead, priority, due date.
Mask request table
Unique mask request identifier, login (worker making the mask request), timestamp, depth artist, depth lead, depth expeditor or production worker, masking problem, mask (the version having the problem the mask request concerns), source used, due date, rework notes.
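As a minimal sketch, two of the tables above might be realized as follows (the column names and SQL types are assumptions; the patent lists fields but no concrete schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shot (
    shot_id      INTEGER PRIMARY KEY,    -- unique shot identifier
    login        TEXT,                   -- worker who defined the shot
    created_at   TEXT,                   -- timestamp
    shot_status  TEXT,                   -- in progress / final / final client approved
    first_frame  INTEGER,
    last_frame   INTEGER,
    frame_count  INTEGER
);
CREATE TABLE timesheet_entry (
    entry_id     INTEGER PRIMARY KEY,    -- unique timesheet entry identifier
    login        TEXT,                   -- worker
    context      TEXT,                   -- e.g. compositing, motion, mask cleanup
    project_id   INTEGER,
    start_time   TEXT,
    end_time     TEXT,
    status       TEXT DEFAULT 'pending'  -- pending or approved
);
""")
conn.execute("INSERT INTO shot VALUES (1, 'artist1', '2013-01-01', "
             "'in progress', 100, 187, 88)")
row = conn.execute("SELECT frame_count FROM shot WHERE shot_id = 1").fetchone()
print(row[0])  # 88
```

A real deployment would add the remaining fields and foreign keys (task identifier, approver, etc.) described in the tables above.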
In one or more embodiments of the invention, the computer is generally configured to present a session manager for selecting work and/or for assigning, within a series of images, tasks or shots to be reviewed. The computer is generally configured to present a first display configured to be viewed by production staff, including a search display with context, project, shot, status and artist, and wherein the second display further includes a list of multiple artists, along with status and performance for the at least one task associated with the at least one shot, based on comparing the time spent on the at least one timesheet entry against the time allocated to that task.
Figure 98 illustrates an annotated view of the session manager window, which is used to define the images to work on, assign, or review. Computer 9702 accepts inputs for the project, the sequence (for example a motion picture or trailer that uses particular sequences of shots), the shot, the mask version and various vertical offsets, and optionally downloads the images to the local computer for local processing, for example. Each field is further annotated in detail on the figure.
Figure 99 illustrates a view of the production display, which shows the project, the shots, and the tasks related to the selected shot, together with the status of each task context associated with the shot. The shots making up the project can be selected at the left side; the main portion of the window then changes to show information related to the selected shot, including tabs for "shot information", "assets" used in the shot, selected check frames, notes, task information, performance, and check-in data integrity. As shown in the main portion of the window, several task contexts are displayed together with their status, assigned person, and so on. Production staff use this display, and the computer accepts input via this display from a production worker (e.g., a user) in order to assign tasks to artists together with the time allocated for them. A thumbnail of the shot is shown at the lower left of the display to give the production worker a view of the motion picture shot relevant to the task and context settings. One potential purpose of the production role is to assign tasks and review status and performance; one potential purpose of the artist role is to manipulate images using a set of tools; and one potential purpose of the reviewer role is to have higher-resolution images with integrated shot-related metadata and status for review. In other words, the displays in the system are customized per role and then integrated so that the information of primary importance to each role is prioritized in the motion picture creation/conversion process.
Figure 100 illustrates, for each task context associated with the selected shot, a view of the performance associated with that shot in the project, wherein tasks under the bid are shown in a first manner, tasks within a predefined percentage of the bid amount are shown in a second manner, and tasks over the bid are shown in a third manner.
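A minimal sketch of that three-way classification (the 10% tolerance is an assumption; the patent says only "a predefined percentage"):

```python
def classify_task(actual_hours, bid_hours, tolerance=0.10):
    """Classify a task's performance against its bid: 'under' if the
    actual time is below the bid, 'near' if within the predefined
    percentage of the bid, and 'over' otherwise."""
    if actual_hours < bid_hours:
        return "under"
    if actual_hours <= bid_hours * (1 + tolerance):
        return "near"
    return "over"

print(classify_task(8, 10))     # under
print(classify_task(10.5, 10))  # near
print(classify_task(14, 10))    # over
```

The three return values would drive the three display manners (e.g., three colors) described for Figure 100.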
Figure 101 illustrates the disk drive expense that can be saved, for example by deleting files that can be rebuilt from other files after the project completes, and the resulting amount of disk space saved. As shown, the rebuildable amount for even a partially completed project can easily be in the terabyte range. By safely deleting assets that can be rebuilt, and compressing other assets, after the project completes, an enormous amount of disk drive space can be saved. In one or more embodiments, the computer accesses the database and determines which resources depend on other resources, whether they can be compressed, and at what compression ratio — which may be calculated in advance and/or based on, for example, the typical compression ratios of other projects. The computer then calculates the total storage capacity and the amount of storage that can be freed through compression and/or resource regeneration, and for example shows this information on a computer display.
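A minimal sketch of that calculation, assuming each asset record carries a size, a rebuildable flag, and an estimated compression ratio (these concrete fields are assumptions, not the patent's schema):

```python
def reclaimable_bytes(assets):
    """Estimate storage freed by deleting rebuildable assets and
    compressing the remaining ones at their estimated ratios."""
    freed = 0
    for a in assets:
        if a["rebuildable"]:
            # Can be regenerated from other assets: safe to delete outright.
            freed += a["size"]
        else:
            # Keep, but compress: freed space is size minus compressed size.
            freed += a["size"] - int(a["size"] * a["compress_ratio"])
    return freed

assets = [
    {"size": 1_000_000, "rebuildable": True,  "compress_ratio": 1.0},
    {"size": 2_000_000, "rebuildable": False, "compress_ratio": 0.5},
]
print(reclaimable_bytes(assets))  # 2000000
```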
The computer is generally also configured to present a second display configured to be viewed by an artist, this second display including at least one daily assignment, which has context, project and shot, a status input configured to update the status in the task table, and a timer input configured to update the start time and end time in the timesheet entry table.
Figure 102 illustrates a view of the artist display, which shows the task context, project, shot, status, tools, a start-time button, check-in input, render input, internal shot review input, meals, start/time/stop, review, and submit inputs. This allows motion-picture-related tasks to be checked and updated as work on the project progresses, and gives production a project-management view of the status of a special effects film project and/or conversion.
Figure 103 illustrates an annotated view of the menu bar of the artist display. This menu bar is shown at the upper left of the display in Figure 102.
Figure 104 illustrates an annotated view of a task row of the artist display. This annotated view shows only one row; however, multiple rows may be shown, as per Figure 102.
Figure 105 illustrates an annotated view of the main portion of the user interface of the artist display of Figure 102.
Figure 106 illustrates the build-timeline display of the artist display, used to create a timeline of the work to be done.
Figure 107 illustrates the browse-snapshot display of the artist display, which enables the artist to view a snapshot of a shot, or otherwise cache important shot-related information, so that the database need not be queried live for frequently used data. The snapshot tracks the location of the various files associated with the shot, and tracks other information related to the work product for the shot, i.e., sources, masks, resolutions, file types. In addition, the snapshot tracks the version management and file type of each file, and optionally the version of the tool used to work on each file.
Figure 108 illustrates the artist performance window, which shows performance for a task, for example as the time spent relative to the time allocated for the task, with a drop-down menu for selecting a particular timesheet.
Figure 109 illustrates the notes display of the artist display, which enables the artist to enter notes related to a shot.
Figure 110 illustrates the check-in display of the artist display, which allows work to be checked in after work on a shot is complete.
The computer is generally also configured to present a third display configured to be viewed by a reviewer (i.e., editorial), which includes an annotation frame configured to accept comments or drawings, or both comments and drawings, regarding at least one image of the plurality of images associated with the at least one shot. One or more embodiments of the computer may be configured to provide a third display configured to be viewed by the reviewer, the third display including an annotation overlaying at least one image of the plurality of images. This capability provides on one display information that in known systems typically requires three workers to integrate, and is novel in its own right.
Figure 111 illustrates a view of the editorial display, which shows the project, filter inputs, timeline input and search results, with shots shown in the main portion of the window together with their work context and assigned person.
Figure 112 illustrates a view of the session manager display of the editorial display, for selecting shots to review.
Figure 113 illustrates a view of the advanced search display of the editorial display.
Figure 114 illustrates a view of the simple search display of the editorial display.
Figure 115 illustrates a view of the review pane for a shot, which also shows integrated notes and/or snapshot information in the same frame. Such a view previously typically required three workers to create; it saves a great deal of time and has greatly sped up the review process. Different views that integrate an image and its related data on one display can be achieved by overlaying any type of information onto the image.
Figure 116 illustrates a view of the timeline selection for review and/or check-in after modification.
Figure 117 illustrates an annotation for feedback added to a frame using the annotation tool.
One or more embodiments of the computer are configured to accept, from production or editorial, a grading input for work performed by an artist, optionally in a blind-review manner in which the reviewer does not know the artist's identity, for example to prevent favoritism. One or more embodiments of the computer are configured to accept the difficulty of the at least one shot, and to calculate a grade based on the work performed by the artist, the difficulty of the shot, and the time spent on the shot. One or more embodiments of the computer are configured to accept a grading input for the work performed by the artist from production or editorial, or to accept the difficulty of the at least one shot and calculate a grade based on the work performed by the artist, the difficulty of the shot, and the time spent on the shot, and to display an incentive for the artist based on the grade accepted or calculated by the computer.
One or more embodiments of the computer are configured to estimate the remaining cost based on performance, the performance being based on the total time spent on all tasks of the at least one task associated with all shots of the at least one shot in the project, relative to the time allocated to all of those tasks. One or more embodiments of the computer are configured to compare the performance associated with a first project against the performance associated with a second project, and to show, based on at least one grade of a first worker assigned to the first project, that at least one worker is to be reassigned from the first project to the second project. One or more embodiments of the computer are configured to analyze a prospective project having a certain number of shots, estimate the difficulty of each shot, and, based on the performance associated with a project, calculate a forecast cost for the prospective project. One or more embodiments of the computer are configured to analyze a prospective project having a certain number of shots, estimate the difficulty of each shot, and, based on the performance associated with a previously executed first project and a previously executed second project completed after the first project, calculate the derivative of performance and, based on that derivative, calculate a forecast cost for the prospective project. For example, as processes, tools and workers improve, the rate of efficiency improvement can be computed from the relationship between efficiency and time, the cost of the prospective project can be forecast using this rate of change, and budgeting and bidding processes can change accordingly. One or more embodiments of the computer are configured to analyze the performance associated with the project and, using the number of completed shots divided by the total number of shots associated with the project, provide a completion time for the project. One or more embodiments of the computer are configured to analyze the performance associated with the project and, using the number of completed shots divided by the total number of shots associated with the project, provide a completion time for the project, and also to accept input adding at least one artist having a grade, to accept a number of shots on which the additional artist will be used, to calculate the time attributable to the at least one additional artist and the number of shots, to subtract that time from the project's completion time, and to provide an updated completion time for the project. One or more embodiments of the computer are configured to calculate the amount of disk space used to archive a project, and to display at least one asset that can be rebuilt from other assets so as to avoid archiving that at least one asset. One or more embodiments of the computer are configured to display an error message if an artist is working on a frame number that is not currently in the at least one shot. This can occur, for example, when a dissolve, fade-out or other effect lengthens the selected shot, so that the shot includes frames not present in the original source assets.
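A minimal sketch of the completion-time estimate described above, assuming performance is expressed as shots completed per day (a simplification of the patent's spent-versus-allocated measure):

```python
def estimated_days_remaining(completed_shots, total_shots, shots_per_day):
    """Estimate remaining working days from the number of shots still
    to do and the observed completion rate."""
    remaining = total_shots - completed_shots
    return remaining / shots_per_day

def updated_estimate(completed_shots, total_shots, shots_per_day,
                     extra_artist_shots_per_day):
    """Re-estimate after adding an artist whose rate is implied by
    his or her grade (here given directly as shots/day)."""
    remaining = total_shots - completed_shots
    return remaining / (shots_per_day + extra_artist_shots_per_day)

print(estimated_days_remaining(600, 1000, 20))  # 20.0
print(updated_estimate(600, 1000, 20, 5))       # 16.0
```

The second call shows the "updated completion time" after adding an artist: the same 400 remaining shots finish four days sooner at the combined rate.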
Overview of the various motion picture workflows:
Data preparation for colorization/depth-enhanced feature films and television series: a feature film is telecined, or transferred from 35mm or 16mm film to HDTV (1920x1080 24P) using a high-resolution scanner such as a 10-bit SPIRIT or similar device, or, in a larger format, the data is produced at 2000 to 4000 lines and up to 16 bits of gray scale on a laser film scanner such as those made by U.S. scanner manufacturers. The high-resolution frame files are then converted into standard digital files, such as uncompressed TIF files or uncompressed TGA files, typically in 16-bit three-channel linear format or 8-bit three-channel linear format. If the source data is HDTV, the 10-bit HDTV frame files are converted into uncompressed files such as TIF or TGA at 16 or 8 bits per channel. Then, the pixels of each frame are averaged so that the three channels merge to create a single 16-bit or 8-bit channel, respectively.
Any other scanning technique capable of converting existing film into a digital format may be used. Currently, many films are generated entirely in digital format and can therefore be used without scanning film at all. For digital films with associated metadata — such as films with computer-generated characters, backgrounds or any other elements — this metadata can be imported, for example to obtain the alpha and/or masks and/or depth for the computer-generated elements on a pixel-by-pixel or sub-pixel basis. One format for a file containing alpha/mask and depth data is the RGBAZ file format, one implementation of which is the EXR file format.
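The channel-merge step above — averaging the three color channels of each pixel into a single gray channel — can be sketched with NumPy; the 16-bit depth and integer-truncation behavior here are assumptions:

```python
import numpy as np

def merge_channels(frame_rgb16):
    """Average R, G, B into a single 16-bit gray channel per pixel.

    frame_rgb16: H x W x 3 array of uint16 linear values.
    Sums are taken in uint32 to avoid overflow before dividing by 3.
    """
    return (frame_rgb16.astype(np.uint32).sum(axis=2) // 3).astype(np.uint16)

frame = np.array([[[300, 600, 900]]], dtype=np.uint16)  # one pixel
print(merge_channels(frame)[0, 0])  # 600
```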
Digital telecine of 35 or 16mm negative or positive film, and format-independent individual color elements, are digitized in a high-resolution film scanner at various resolutions and bit depths, for example using the SPIRIT and EASTMAN scanners, which deliver, for example, 525 or 625 formats, HDTV, (HDTV) 1280x720/60Hz progressive, 2K, DTV (ATSC) formats such as 1920x1080/24Hz/25Hz progressive, and 1920x1080/48Hz/50Hz segmented frame, or 1920x1080 50i. The invention provides an improved method for editing film into a motion picture. The visual images are transferred from the developed motion picture film to a high-definition video storage medium — a storage medium adapted to store images and, in combination with a display, to display images, having a scanning density much greater than that of NTSC-compatible video storage media and their associated display equipment. The visual images are also transferred from the motion picture film, or from the high-definition video storage medium, to a digital data storage format suitable for use with digital nonlinear motion picture editing equipment. After transferring the visual images to the high-definition video storage medium, the digital nonlinear motion picture editing equipment is used to generate an edit decision list, to which the motion picture film then conforms. The high-definition video storage medium is typically adapted to store and display visual images having at least 1080 lines of horizontal scanning density. Electronic or optical transformation may be used to permit use of the full range of visual aspect ratios available in the storage format with this method. The film is thus digitized, and data input from the film in one of numerous formats such as HDTV is transferred into a conversion system such as an HDTV still store manufactured by a technology company. Such a large-scale digital buffer and data converter can convert the digital picture into all scan formats, such as the 1080i HDTV format, 720p, and 1080p/24. The asset management system server provides powerful local and server backup and archiving to standard SCSI devices, C2-level security, streamlined menu selection, and multi-criteria database searching.
During the digitizing of images from motion picture film, the mechanical positioning of the film frames in the telecine machine suffers from an inexactness known as "film weave" that cannot be completely eliminated. However, various film registration and shuttle or flattening gate assemblies are available, such as the assembly implemented in U.S. Patent No. 5,328,073 (film registration and shuttle assembly), which relates to focal positioning of an image frame of strip film having edge perforations, by means of a gate with a positioning location or aperture. First and second pins of smaller size enter laterally aligned perforations of the film so that the image frame registers with the aperture. A third pin of smaller size enters a third perforation spaced along the film from the second pin, and the film is then pulled obliquely toward a reference line extending between the first and second pins, so that the perforations there engage the first and second pins and the image frame is precisely registered at the positioning location or aperture. A pair of flexible bands extending along the film edges is progressively moved adjacent the positioning location, incrementally increasing contact with the film to shuttle it and clamp its perforations against the gate. The pins register the image frame precisely with the positioning location, and the bands hold the image frame at an exact focal position. The positioning can be further enhanced by precise mechanical capture of the image, followed for example by a method such as that implemented in U.S. Patent No. 4,903,131 (method for the automatic correction of errors in image registration during film scanning).
To remove or reduce grain — the random structure superimposed on images in exposed feature film — as well as scratches, dust particles or other debris that obscure the transmitted light, various algorithms may be used, such as those implemented in U.S. Patent No. 6,067,125 (structure and method for film grain noise reduction) and U.S. Patent No. 5,784,176 (method of image noise reduction processing).
Creating film elements as a visible database in preparation for reverse editing:
The digital film is broken down into scenes and cuts. The entire film is then processed sequentially so that scene changes — including fades, wipes and cuts — are automatically detected. These transitions are further broken down into camera pans, camera zooms, and static scenes representing little or no motion. All of the above is referenced, based on standard SMPTE time code or another suitable sequential naming convention, into an edit decision list (EDL) entered into the database. A great many techniques exist for detecting both dramatic and subtle transitions in film content, for example:
US 5,959,697 09/28/1999 Method and system for detecting fade-out transitions in a video signal
US 5,920,360 07/06/1999 Method and system for detecting dissolve transitions in a video signal
US 5,841,512 11/24/1998 Methods of previewing and editing motion pictures
US 5,835,163 11/10/1998 Apparatus for detecting a cut in a video
US 5,767,923 06/16/1998 Method and system for detecting cuts in a video signal
US 5,778,108 07/06/1996 Method and system for detecting transitional markers such as uniform fields in a video signal
All cuts of the same content — for example where the camera appears to cut back and forth between two talking heads in a dialogue between two or more people — are combined into a single file entry for later batch processing.
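A minimal sketch of automatic cut detection by frame differencing (a common approach; the mean-absolute-difference measure and threshold are assumptions, not the specific patented methods listed above):

```python
import numpy as np

def detect_cuts(frames, threshold=30.0):
    """Return frame indices where a hard cut likely occurs, based on the
    mean absolute difference between consecutive grayscale frames."""
    cuts = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
        if diff.mean() > threshold:
            cuts.append(i)
    return cuts

# Synthetic example: three dark frames, then a jump to bright frames.
dark = np.zeros((4, 4), dtype=np.uint8)
bright = np.full((4, 4), 200, dtype=np.uint8)
print(detect_cuts([dark, dark, dark, bright, bright]))  # [3]
```

Fades and dissolves change gradually and would fall below a hard-cut threshold; the patents listed above address detecting those subtler transitions.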
An operator visually checks all database entries to ensure that:
1. scenes are broken down by camera movement;
2. cuts are merged into single batch elements where appropriate;
3. motion is broken down into simple and compound movement according to occluding elements, the number of moving objects, and optical quality (e.g., the softness of the elements).
Pre-production scene analysis and scene breakdown for the creation of a reference frame ID database:
The files are numbered using sequential SMPTE time code or another sequential naming convention. AFTER EFFECTS or a similar program is used to edit the image files together at 24 frames/second (without the field-correlated 3/2 pulldown used in standard NTSC 30 frames/second video) onto a DVD, to create a running video with the audio of the feature film or television series. This is used to assist scene analysis and scene breakdown.
Scene and cut breakdown:
1. The database allows entry of scene, cut, design and key frame time codes, other critical time-code data, and descriptive information for each scene and cut.
2. Each scene cut is identified with respect to camera technique. Time codes are used for pans, zooms, static backgrounds, static backgrounds with camera instability or drift, and unusual camera cuts requiring special attention.
3. The designer and assistant designers research the feature film for color clues and color references, or, for a depth project, for depth cues, generally researching objects of known size in the film. Where available, research is provided for color/depth accuracy. The Internet can be used, for example, to determine the color of a particular item or the size of a particular item. For depth projects, knowing the size of an object allows the depth of items in a scene to be calculated, for example. For depth projects where depth metadata for the computer-generated elements in the film is available — relevant to converting a two-dimensional movie into a three-dimensional movie — the depth metadata may be scaled or translated or otherwise normalized to a coordinate system or units usable for the background and moving elements, for example.
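A minimal sketch of the known-size depth estimate mentioned in step 3, using the pinhole-camera relation depth = focal length × real size / image size (the specific numbers are illustrative assumptions):

```python
def depth_from_known_size(focal_length_px, real_size_m, image_size_px):
    """Estimate an object's distance from the camera (meters) given its
    known real-world size and its apparent size in the image, using the
    pinhole model: depth = f * S / s."""
    return focal_length_px * real_size_m / image_size_px

# A 1.8 m tall person spanning 90 px, with a 1000 px focal length:
print(depth_from_known_size(1000, 1.8, 90))  # 20.0
```

This is how a researched object size (step 3) translates into a usable depth value for an item in the scene.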
4. A single frame from each scene is selected for use as a design frame. These frames are color-designed, or metadata for the depth and/or masks and/or alpha of computer-generated elements is imported, or depth assignments are made to the background and motion elements in the frame (see Figures 42-70), to represent the overall look of the feature film. For a feature film, approximately 80-100 design frames are typical.
5. In addition, a single frame, called a key frame, is selected from each cut of the feature film; it contains all of the elements within the cut that require color/depth consideration. There may be as many as 1,000 key frames. These frames will contain all of the color/depth transform information necessary to apply color/depth correctly to all subsequent frames within each cut, without additional color selection.
Color/depth selection:
Historical reference, studio archives and film analysis provide color references to the designer. Using an input device such as a mouse, the designer masks features in a selected single frame comprising multiple pixels, and, using an HSL color space model, assigns color to them based on creative intent and on the underlying gray scale and luminance distribution of each mask. One or more base colors are selected for the image data underlying each mask, and applied to selected luminance-type attributes of the selected image feature. Each selected color is applied, based on the unique gray-scale values of the feature underlying the mask, either to the entire masked object or to specific features within the object's luminance pattern.
A lookup table of unique luminance patterns — a color transform for the object or feature — is thereby created, representing the colors applied to the object as a function of brightness value. Because the colors applied to a feature span the entire potential gray-scale range from dark to light, the designer can ensure that, when (for example) the introduction of shadows or light shifts the gray-scale values of an intermediate pattern uniformly into the darker or lighter regions of subsequent frames of the film, the color of each feature remains consistent and uniform, and the color pattern applied to it brightens or darkens correctly.
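A minimal sketch of applying such a luminance-to-color lookup table (the linear ramp between two endpoint colors is an assumption; an actual transform can assign arbitrary colors per luminance pattern):

```python
import numpy as np

def build_lut(dark_rgb, light_rgb):
    """Build a 256-entry lookup table mapping gray value -> RGB by
    interpolating between a dark and a light color for one feature."""
    t = np.linspace(0.0, 1.0, 256)[:, None]
    return (np.array(dark_rgb) * (1 - t) + np.array(light_rgb) * t).astype(np.uint8)

def colorize(gray, mask, lut):
    """Apply a feature's LUT to the masked pixels of a grayscale frame."""
    out = np.stack([gray] * 3, axis=-1)  # start from neutral gray RGB
    out[mask] = lut[gray[mask]]          # color only the masked feature
    return out

lut = build_lut((20, 0, 0), (255, 200, 180))   # an example tone ramp
gray = np.array([[0, 255]], dtype=np.uint8)
mask = np.array([[True, True]])
result = colorize(gray, mask, lut)
print(result[0, 1].tolist())  # [255, 200, 180]
```

Because the LUT covers the full 0-255 gray range, a feature that darkens into shadow in later frames keeps a consistent, correctly darkened color, as described above.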
Depth can be imported for computer-generated objects where metadata exists, and/or embodiments of the invention can be used to assign depth to objects using an input device such as a mouse, with adjustment to assign the objects a particular depth, including a contoured depth — for example assigning a geometric shape such as an ellipsoid to a face. This allows objects to appear natural when converted into three-dimensional images. For computer-generated elements, the imported depth and/or alpha and/or mask shapes can be adjusted if desired. Assigning a fixed distance to a foreground object tends to make the object appear as a cutout, i.e., flat. See also Figures 42-70.
Propagating mask color transform/depth information from one frame to a series of subsequent frames:
The single color transform/depth contour masks representing the design choices in the design frame are then copied to all subsequent frames in the series of motion picture frames by one or more methods, such as: automatic fitting of Bezier curves to edges; automatic mask fitting based on fast Fourier transforms and gradient descent over the luminance patterns of a subsequent frame relative to the design frame or the successive preceding frame; masking multiple successive frames by painting an object in only one frame; automatic fitting of vector points to edges; and copying and pasting a single mask or multiple masks into selected subsequent frames. In addition, depth information can be "tweened" to account for forward/backward motion of the camera position or zoom. For computer-generated elements, the alpha and/or mask data are generally correct and the reshaping process can be skipped for them, because the metadata associated with computer-generated elements is obtained digitally from the original model of the object and therefore generally requires no adjustment. (By setting the mask fit location to the border of a CG element, the processing-intensive fitting of masks in subsequent frames and reshaping of edges to align with photographic elements can potentially be skipped; see step 3710 of Figure 37C.) Optionally, computer-generated elements may be deformed or reshaped to provide special effects not originally in the film scene.
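A minimal sketch of the mask-propagation idea — finding, for a subsequent frame, the translation that best aligns the masked luminance pattern, so the mask can be shifted accordingly. An exhaustive small-window search stands in here for the FFT/gradient-descent fitting described above:

```python
import numpy as np

def best_shift(ref, cur, mask, search=2):
    """Find the (dy, dx) translation of the masked region of `ref` that
    best matches `cur`, by exhaustive search over a small window."""
    ys, xs = np.nonzero(mask)
    best, best_err = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y2, x2 = ys + dy, xs + dx
            ok = (y2 >= 0) & (y2 < cur.shape[0]) & (x2 >= 0) & (x2 < cur.shape[1])
            err = np.abs(cur[y2[ok], x2[ok]].astype(float)
                         - ref[ys[ok], xs[ok]].astype(float)).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

ref = np.zeros((8, 8)); ref[2:4, 2:4] = 100   # bright patch in design frame
cur = np.zeros((8, 8)); cur[3:5, 3:5] = 100   # same patch moved by (1, 1)
mask = ref > 0
print(best_shift(ref, cur, mask))  # (1, 1)
```

Production methods additionally reshape the mask edges; this sketch recovers only the rigid offset.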
Single-frame set design and coloring:
In an embodiment of the invention, camera motion is integrated and separated from the moving elements of each scene by creating, for each scene and cut, a montage or composite image from the backgrounds of the series of successive frames — a single frame containing all background elements. The resulting single frame becomes a representation of the entire common background of numerous frames of the film, creating a visible database of all elements within those frames and of the camera offset information.
In this way, most set backgrounds can be designed and color/depth-enhanced in one pass using the single-frame montage. Each montage is masked without regard to the individually masked foreground moving objects. The background masks of the montage are then automatically extracted from the single background montage image and applied, using the offsets stored with the image data, to each of the subsequent frames used to create the single montage, so that the masks align correctly with each subsequent frame.
There is a basic formula in filmmaking that varies little within and between feature films (except for films that make extensive use of hand-held or stabilized camera shots). Scenes are composed of cuts, which are blocked for standard camera moves (i.e., pans and zooms), for static or locked-off camera angles, and for combinations of these moves. A cut is either a single occurrence of an event, or a combination of cutaways — where, for example, in a dialogue between two individuals the action repeatedly returns to a particular camera shot. Such cutaways can be treated as a single scene sequence or a single cut, and can be integrated in one image-processing pass.
Pans can be integrated into the single-frame visible database using special panoramic stitching techniques, though without lens compensation. Each frame in a pan involves:
1. some information lost at one side, top and/or bottom of the frame;
2. information common to most of the immediately preceding and following frames; and
3. new information at the other side, top and/or bottom of the frame.
By stitching these frames together based on the common elements of the successive frames, into a panorama of the background elements, a visible database is created that has, for every pixel, offsets usable to reference the entire set of successive frames when applying the single mask overlay.
Creation of the visual database:
Because each pixel in the single-frame visual database of the background corresponds to a proper address within the respective "raw" (uncomposited) frame from which it was created, any designer-determined masking operation, and the corresponding masking lookup table applied to the visual database, is correctly applied to the proper address of each pixel within the original film frames used to create the single-frame composite.
In this manner, each scene and the set of each cut is represented by a single frame (the visual database), in which each pixel has single or multiple representations within the original frame series from which it was derived. All mask overlays of the single visual database are represented by a 1-bit mask per region, creating an appropriate lookup table that corresponds to the common or unique pixel addresses of the successive frames making up the single composited frame. The masking pixels defined by these addresses are applied to the full-resolution frames, where the overall masking is automatically checked and adjusted where necessary using feature, edge-detection and pattern-recognition routines. In cases requiring adjustment, i.e., where the region edges of an applied mask do not correspond to most of the edge features in the grayscale image, a "red flag" exception with comments is sent to the operator's attention, signaling that frame-by-frame adjustment may be necessary.
Single-frame representation of motion over many frames:
Difference algorithms for detecting moving objects can typically distinguish the dramatic pixel-region changes that represent a moving object from frame to frame. In cases where a shadow cast on the background by a moving object might be confused with the moving object itself, the resulting mask is assigned to a default alpha layer that renders that portion of the moving-object mask transparent. In some cases, the operator delineates the boundary between the moving object and its shadow using one or more vectors or a paint tool. In most cases, however, the shadow is detected as a surface relative to the two critically moving objects. In the present invention, shadows are handled by the background lookup table, which automatically adjusts color along the luminance scale determined by the spectrum of light-to-dark gray-scale values in the image.
The action within each frame is isolated via differencing (including directional and velocity differencing where the action occurs within a pan) or frame-to-frame subtraction techniques, together with machine-vision techniques that model and separate objects and their behavior.
The differenced pixels are then composited into a single frame (or isolated in a tiled mode) representing the numerous frames, thereby allowing the operator to window regions of interest and otherwise direct computer-controlled image masking of subsequent frames and other processing operations.
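The frame-to-frame subtraction step above can be sketched as follows. This is a minimal sketch of the differencing step only; the patent additionally applies machine-vision modeling to separate objects from their cast shadows, which is not shown. The threshold value is illustrative.

```python
import numpy as np

def moving_pixels(prev_frame, next_frame, threshold=10):
    """Frame-to-frame subtraction: flag pixels whose absolute luminance
    change between two frames exceeds `threshold` as belonging to a
    moving object. Returns a boolean mask the size of the frame."""
    diff = np.abs(next_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > threshold
```

The boolean masks from successive frame pairs are what would be composited into the single action frame (or tiled display) described above.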
As with the set or background montage discussed above, the action occurring over the multiple frames of a scene can be represented by a single-frame visual database, in which each unique pixel location undergoes an appropriate 1-bit masking and the corresponding lookup table is applied therefrom. However, unlike the set or background montage, in which all color/depth is applied and specified in a single-frame pass, the purpose of creating an action composite visual database is to window or otherwise designate each feature or region of interest that will receive a particular task, and to apply region-of-interest vectors from one key frame element to the subsequent key frame element, thereby providing the operator with the assistance of computer processing that tracks each region of interest.
During the design phase, masks are applied to designer-specified regions of interest on single instances of each moving object as it occurs over the background (i.e., at the appropriate x,y coordinates within the background or stitched composited background corresponding to the single-frame action from which it is derived). Using an input device such as a mouse, the operator uses the following tools to create masks for the regions of interest. Alternatively, a project containing computer-generated element metadata may be imported and, if necessary, the metadata scaled to the units used for depth in the project. Because these masks are digitally created, they can be assumed to be accurate throughout the scene, and the contours and depths of the computer-generated regions can therefore be ignored for reshaping operations. Elements adjoining these objects can consequently be reshaped more accurately, since the contours of the computer-generated elements are taken to be correct. Thus, even for computer-generated elements having the same underlying gray scale as adjacent moving or background elements, the shape of the mask at the junction can be taken as accurate even though no visual difference exists at the junction. Again, see step 3710 of Figure 37C for setting the mask fit position to the border of a CG element, so that mask fitting can be skipped in subsequent frames and edges can be reshaped to align with photographic elements, saving a large amount of processing.
1. Combinations of edge-detection algorithms, such as standard Laplacian filters, and pattern-recognition routines;
2. Automatic or assisted closing of regions;
3. Automatic seed fill of selected regions;
4. Bimodal luminance detection for light or dark regions;
5. Operator-assisted sliders and other tools that create a "best fit" distribution of underlying luminance values, patterns and weighted variables corresponding to the dynamic range of the underlying pixels;
6. Creation of a unique determination/discrimination set, referred to as a detector file, from a subsequent analysis of multiple weighted characteristics: underlying gray scale, luminance, area, pattern, and proximity to adjoining regions.
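Item 4 in the list above can be sketched as follows: for a region whose luminance histogram has two modes (a light and a dark population), a threshold is placed at the valley between the two largest peaks. This is a minimal sketch under that assumed interpretation of bimodal luminance detection; production tools would likely smooth the histogram first, and the bin count is illustrative.

```python
import numpy as np

def bimodal_threshold(region):
    """Return a luminance threshold at the histogram valley between the
    two largest peaks of a (roughly) bimodal region."""
    hist, edges = np.histogram(region, bins=32)
    p1, p2 = sorted(np.argsort(hist)[-2:])      # two largest peaks
    valley = p1 + int(np.argmin(hist[p1:p2 + 1]))
    return edges[valley]
```

Pixels below the returned threshold would be assigned to the dark region and pixels above it to the light region.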
The selected key frame moving objects composited in the pre-production key frame stage described above are presented in the single design motion database together with all subsequent motion of the key frame moving objects. All of the motion composites can be turned on and off over the background, or viewed in motion over the background, by sequentially turning each successive motion composite on and off.
Key frame moving object creation: The operator successively windows all of the masked regions of interest on the design frame and, using various pointing tools and routines, directs the computer to the corresponding location (region of interest) on the selected key frame moving object within the visual database, thereby reducing the region over which the computer must operate. (That is, the operator follows the approximate centers of the regions of interest represented in the visual database to create vectors from the design-frame moving object to each subsequent key frame moving object. This operator-assisted method limits the detection operations that the computer must perform in applying the masks to the corresponding regions of interest in the original frames.)
The selected, fully masked key frame moving objects composited into the key frame moving object database in the production stage described above are presented together with all subsequent motion of the key frame moving objects. As described above, all of the motion composites can be turned on and off over the background, or turned on and off sequentially to mimic the actual motion over the background. In addition, all of the masked regions (regions of interest) can be presented in the absence of their corresponding moving objects. In such cases, the 1-bit color masks are displayed as translucent or opaque arbitrary colors.
During the production process, and under the operator's visual control, each region of interest within the subsequent moving-object frames lying between two key moving-object frames undergoes a computer masking operation. This masking operation involves a comparison of the masks in the preceding moving-object frame with the new or subsequent detector-file operations and underlying parameters in the succeeding frames (the mask dimensions, gray-scale values and multiple weighting factors lying within the parameter vectors directed toward the subsequent key frame moving object). This process is aided by the vectors applied within the visual database and by windowing or pointing (using various pointing tools). If the values within the operator-assisted detection regions of the subsequent moving object fall within the range of the corresponding regions of the preceding moving object, relative to the surrounding values, and if those values fall along the trajectory of the expected values (vectors) determined by comparing the first key frame with the second key frame, the computer will determine a match and will attempt a best fit.
The uncompressed, high-resolution images reside at the server level; all subsequent masking operations on the regions of interest are displayed on the compressed composited frames in display memory, or on the tiled compressed frames in display memory, so that the operator can determine correct tracking and matching of regions. A region-of-interest window showing a scaled view of the uncompressed region is displayed on screen for visually determining the optimal region of interest. This high-resolution window can also be viewed in full motion, enabling the operator to determine whether the masking operation is accurate with respect to motion.
In the first embodiment, shown in Figure 1, a plurality of feature film or television film frames 14a-n represent a scene or cut in which there is a single instance or perception of the background 16 (Figure 3). In the scene 10 shown, several actors or moving elements 18', 18'' and 18''' are moving on outdoor steps, and the camera is executing a pan to the left. Figure 1 shows a selected sample of the total of 120 frames 14 making up the 5-second pan.
In Figure 2, the multiple frames 14a-n represented in Figure 1 are processed to isolate the background 16, with all of the moving elements 18 removed using various subtraction and differencing techniques. The individual frames created from the pan are combined into a visual database in which the single composite shown in Figure 3, the background image 12, represents the unique and common pixels of each of the 120 frames 14 making up the original pan. The single background image 12 is then used to create a background mask overlay 20 representing designer-selected color lookup tables, in which dynamic pixel colors automatically compensate for or adjust to moving shadows and other changes in luminance. For depth projects, any depth can be assigned to any object in the background. The assignment of depth to any portion of the background may be performed using various tools, including paint tools, geometric-shape-based tools that allow contoured depths to be set on objects, or text-field entry to allow numeric input. The composited background shown in Figure 2 may, for example, also be given a ramp function that allows closer depths to be assigned to the left-hand portion of the scene, with an automatically assigned linear increase of depth toward the right of the image. See also Figures 42-70.
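The ramp-function depth assignment just described can be sketched as follows. This is a minimal sketch: the near/far values and units are illustrative, and a production tool would presumably combine the ramp with the per-object depth assignments rather than apply it alone.

```python
import numpy as np

def depth_ramp(height, width, near=10.0, far=40.0):
    """Assign a linear depth ramp across a background plate: the left
    edge receives the nearer depth and depth increases linearly toward
    the right edge. Returns a (height, width) array of depth values."""
    ramp = np.linspace(near, far, width)
    return np.tile(ramp, (height, 1))
```

Each column of the returned array carries one depth value, increasing linearly from left to right across the plate.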
In an illustrative embodiment of the invention, operator-assisted and automated operations are used to detect obvious anchor points, represented by the intersections of clear edge detections, and other adjoining edges within each frame 14 making up the single composited image 12 and the overlying mask 20. These anchor points are also represented within the composited image 12 and are appropriately assigned as markers for referencing each frame 14 represented by the single composited image 12.
Anchor points, together with the clearly defined edges of objects and/or regions bounded by closed or nearly closed edges, are designated as single mask regions and given single lookup tables. Within these well-defined regions, polygons are created whose anchor points are dominant points. Where no detected sharp edges exist to create an ideally closed region, polygons are generated using the edges of the applied mask.
The resulting polygon mesh includes the interiors of the anchor-point regions plus all of the outer perimeters between those regions. The mode parameters created from the luminance distribution within each polygon are registered in a database so that they can be referenced when applying the corresponding polygon addresses of the overlying masks to the proper addresses of the frames used to create the composited single image 12.
In Figure 3, a representative sample of each moving object (M object) 18 in the scene 10 receives a mask overlay representing designer-selected color lookup tables/depth assignments, in which dynamic pixel colors automatically compensate for or adjust to moving shadows and other changes in luminance as the M objects 18 move within the scene 10. Each of these representative samples is considered a key M object 18 used to define the underlying patterns, edges, grouped luminance characteristics, etc., within the masked M object 18. These characteristics are used to carry the design masks from key M object 18a to subsequent M objects 18b along a parameter vector defined toward key M object 18c, with each subsequent M object in turn becoming the new key M object as the masks are applied. As shown, key M object 18a may be assigned a depth of 32 feet from the camera capture point, while key M object 18c may be assigned a depth of 28 feet from the camera capture point. The various depths of an object can be "tweened" between the various depth points so that, for example, lifelike three-dimensional motion of an object within a frame can occur within a cut without requiring wire-frame models of all of the objects.
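The depth tweening just described can be sketched as linear interpolation between key frame depth assignments, e.g. 32 feet at key M object 18a and 28 feet at key M object 18c. This is a hypothetical helper illustrating the interpolation only; the patent does not specify the interpolation curve, and linear interpolation is assumed here.

```python
def tween_depth(key_depths, key_frames, frame):
    """Linearly interpolate ('tween') an object's depth between key
    frames. `key_frames` is an ascending list of frame numbers and
    `key_depths` the depth assigned at each; `frame` is the in-between
    frame whose depth is wanted."""
    for (f0, f1), (d0, d1) in zip(zip(key_frames, key_frames[1:]),
                                  zip(key_depths, key_depths[1:])):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return d0 + t * (d1 - d0)
    raise ValueError("frame outside key frame range")
```

With depths of 32 and 28 feet at frames 0 and 8, the midpoint frame 4 receives 30 feet, so the object recedes smoothly without a wire-frame model.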
As with the background operations above, operator-assisted and automated operations are used to detect obvious anchor points, represented by the intersections of sharp edge detections and other adjoining edges, within each moving object used to create the key frames. Specific regions of interest within each moving object, defined by anchor points and closed or nearly closed sharp edges, are designated as single mask regions and given single lookup tables. Within these well-defined regions, polygons are created whose anchor points are dominant points. Where no detected sharp edges exist to create an ideally closed region, polygons are generated using the edges of the applied mask.
The resulting polygon mesh includes the interiors of the anchor-point regions plus all of the outer perimeters between those regions. The mode parameters created from the luminance-value profile within each polygon are registered in a database so that they can be referenced when applying the corresponding polygon addresses of the overlying masks to the proper addresses of the frames used to create the composited single frame 12. The larger the polygon samples, the more detailed the underlying luminance values and assessments, and the more accurate the fit of the overlying masks.
The subsequent or in-between moving key frame objects 18 are processed sequentially. The mask set comprising the moving key frame object retains its correct address locations within the subsequent frames 14 or within the subsequent instances of the next moving object 18. The masks are displayed as either opaque or transparent colors. Using a mouse or other pointing device, the operator successively indicates each mask together with its corresponding location within the subsequent frames and/or instances of the moving object. The computer then creates the subsequent instance of the moving object using a best fit of the corresponding polygons, representing both the underlying luminance texture and the mask edges, and the existing anchor points.
The next instance of the moving object 18 is operated on in the same manner, until all of the moving objects 18 between the key moving objects in the cut 10 and/or scene are completed.
In Figure 4, all of the masked elements of the scene 10 are then rendered to create a fully colored and/or depth-enhanced frame, in which the M object 18 masks are applied to each appropriate frame in the scene, followed by the background mask 20, which is applied in a Boolean manner only where there are no pre-existing masks. Foreground elements are then applied to each frame 14 according to a pre-programmed priority arrangement. The accurate application of the background mask 20 is aided by vector points, which the designer applies to the visual database during masking where there are well-defined reference points such as edges and/or obvious luminance points. These vectors create a reference matrix that ensures the accuracy with which the masks are rendered to the individual frames making up each scene. Those skilled in the art will appreciate that the depths applied to the various objects determine the amount of horizontal translation applied in generating the left and right viewpoints used for three-dimensional viewing. In one or more embodiments of the invention, the desired objects and lifelike depths can be displayed dynamically to an observing operator as the settings change. In other embodiments of the invention, the depth value of an object determines the horizontal translation applied, as will be recognized by those skilled in the art and as is at least taught in USPN 6,031,564 to Ma et al., the description of which is specifically incorporated herein by reference.
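The depth-dependent horizontal translation just described can be sketched as follows. The inverse-depth disparity model used here is an assumption for illustration, not the patent's (or Ma et al.'s) exact formula, and the holes exposed by the shift are left unfilled; all names and the maximum-disparity value are hypothetical.

```python
import numpy as np

def shift_for_eye(image, depth, eye, max_disparity=8.0):
    """Shift each pixel horizontally by an amount inversely proportional
    to its assigned depth to synthesize a left (eye=-1) or right (eye=+1)
    viewpoint from a single depth-assigned frame."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(round(eye * max_disparity / depth[y, x]))
            nx = x + d  # nearer pixels (small depth) translate farther
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```

Generating both `shift_for_eye(img, depth, -1)` and `shift_for_eye(img, depth, +1)` yields the left/right pair viewable with the anaglyph or polarized glasses discussed later.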
The operator applies the masks to the successive motion-picture frames using several tools.
Display: A key frame that includes all of that frame's fully masked moving objects is loaded into a display buffer together with multiple subsequent frames in thumbnail format; typically 2 seconds, or 48 frames.
Figures 5A and 5B show a series of successive frames 14a-n loaded into display memory, in which one frame 14 is fully masked with the background (key frame) masks, ready to be propagated to the subsequent frames 14 via the automatic mask-fitting method. All of the frames 14, together with the associated masks and/or applied color transforms/depth enhancements, can also be displayed sequentially in real time (24 frames/second) in a second (sub)window to determine whether the automatic masking operations are working correctly. In the case of a depth project, polarized glasses or red/blue anaglyph glasses can be used to view the two viewpoints corresponding to each eye. Any type of depth-viewing technology can be used to view the depth-enhanced images, including video displays that require no glasses but instead use more than two image pairings, which can be created using embodiments of the invention.
Figures 6A and 6B show a subwindow displaying an enlarged and scalable single image from the series of successive images in display memory. This subwindow enables the operator to interactively manipulate the masks on a single frame, or over multiple frames during real-time or slowed motion.
Mask modification: Masks can be copied to all or selected frames and modified automatically in the thumbnail views or in the preview window. In the preview window, mask modification takes place on a single frame in the display, or on multiple frames during real-time motion.
Mask propagation to the multiple successive frames in display memory: The key frame masks of the foreground moving objects are applied to all frames in the display buffer using various copy functions:
Copy all masks in one frame to all frames;
Copy all masks in one frame to selected frames;
Copy one or more selected masks in one frame to all frames;
Copy one or more selected masks in one frame to selected frames; and
Create a mask generated in one frame by direct copy at the same addresses in all other frames.
Referring now to Figures 7A and 7B, a single mask (a human figure) is propagated automatically to all frames 14 in display memory. The operator can designate selected frames to receive a selected mask, or direct that it be applied to all frames 14. The masks are duplicates of the initial masks in the first, fully masked frame. Modification of these masks occurs only after they have been propagated.
As shown in Figure 8, all of the masks associated with the moving object are propagated to all of the successive frames in display memory. These images show the displacement of the underlying image data relative to the mask information.
None of the propagation methods named above actively fits the masks to the objects in the frames 14. They merely apply the same mask shapes and associated color-transform information from one frame (typically a key frame) to all of the other frames, or to selected frames.
The masks are subsequently adjusted using various tools based on the luminance, patterns and local edges of the image, to compensate for the motion of the objects in the successive frames.
Automatic mask fitting: The successive frames of a feature film or television episode exhibit the motion of actors and other objects. In the present embodiment, these objects are designed in a single representative frame so that operator-selected features or regions carry unique color transforms identified by unique masks covering the entire feature. The purpose of the mask-fitting tool is to provide an automated means of correctly placing and reshaping each mask region of interest (ROI) in the successive frames, so that as the ROI is displaced from its original position in the single representative frame, the mask conforms exactly to the correct spatial position and two-dimensional geometry of the ROI. The method is intended to allow a mask region to be propagated from the original reference or design frame to the successive frames, and to automatically adjust its shape and position according to each image displacement of the associated underlying image features. For computer-generated elements, the associated masks are created digitally and can be assumed to be accurate throughout the scene; the contours and depths of the computer-generated regions can therefore be ignored for automatic mask-fitting or reshaping operations. The elements adjoining these objects can consequently be reshaped more accurately, since the contours of the computer-generated elements are taken to be correct. Thus, even for computer-generated elements having the same underlying gray scale as adjacent moving or background elements, the shape of the mask at the junction can be taken as accurate even though no visual difference exists at the junction. Hence, whenever the automatic fitting of a mask takes on the shape of the border of a computer-generated element mask, that computer-generated element mask can be used to limit the border of the operator-defined mask per step 3710 of Figure 37C. This saves processing time, since automatic mask fitting can be minimized in scenes containing many computer-generated element masks.
The method for automatically adjusting the locations of, and correctly fitting, all of the masks in an image to compensate for the motion of the corresponding image data between frames involves the following:
Set up the reference frame masks and corresponding image data:
1. The reference frame (frame 1) is masked by the operator using various means, such as paint and polygon tools, so that all regions of interest (i.e., features) are tightly covered.
2. The minimum and maximum x,y coordinate values of each mask region are calculated to create a rectangular bounding box around each mask region, enclosing all of the underlying image pixels of that mask region.
3. A subset of pixels is identified for each region of interest within its bounding rectangle (i.e., every 10th pixel).
Copy the reference frame masks and corresponding image data to all subsequent frames: The masks, bounding boxes and corresponding pixel-location subsets from the reference frame are copied by the operator to all subsequent frames.
Approximate the region offsets between the reference frame and the next subsequent frame:
1. Calculate a fast Fourier transform (FFT) to approximate the image-data shifts between frame 1 and frame 2.
2. Using the results of the FFT calculation, move each mask in frame 2, with its accompanying bounding box, to compensate for the displacement of the corresponding image data from frame 1.
3. Expand the bounding boxes by an additional margin around the regions to accommodate other motion and shape-morphing effects.
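The FFT shift approximation in the steps above can be sketched with standard phase correlation: the inverse FFT of the normalized cross-power spectrum of two frames peaks at their relative displacement. This is a minimal sketch of the FFT step under that standard interpretation; the patent does not spell out the exact correlation variant used.

```python
import numpy as np

def fft_offset(ref, nxt):
    """Estimate the global (dy, dx) shift of `nxt` relative to `ref`
    by phase correlation. Peak location in the inverse FFT of the
    normalized cross-power spectrum gives the displacement; results
    are wrapped to signed shifts."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(nxt)
    cross = np.conj(F) * G
    cross /= np.abs(cross) + 1e-12      # normalize to unit magnitude
    corr = np.fft.ifft2(cross).real     # correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

In the workflow above, the returned (dy, dx) would be applied to each mask and bounding box in frame 2 before the fine-grained gradient descent fit.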
Fit the masks to their new locations:
1. Using the offset vectors determined by the FFT, calculate a minimum-error gradient descent on the image data underlying each mask, in the following manner:
2. Create a fit box around each pixel in the bounding-box subset.
3. Calculate a weighted index of all pixels within the fit box using bilinear interpolation.
4. Determine the offset and best fit for each subsequent frame, fitting the masks to the desired regions using the gradient descent algorithm.
Mask-fitting initialization: The operator selects image features in a single selected frame (the reference frame) of a scene and creates, for each feature, a mask containing all of the color transforms (color lookup tables) for the underlying image data. The selected image features identified by the operator have well-defined geometric extents, which are determined by scanning the features underlying each mask for minimum and maximum x,y coordinate values, thereby defining a rectangular bounding box around each mask.
Fit grid for fit-grid interpolation: For optimization purposes, the method fits only a sparse subset of the relevant masked image-region pixels within each bounding box; this pixel subset defines a regular grid in the image, marked by the bright pixels in Figure 9A.
The "dark" pixels shown in Figure 9B are used to calculate the weighted indices using bilinear interpolation. The grid spacing is currently set at 10 pixels, so that fewer than 1 in approximately 50 pixels are currently fitted with the gradient descent search. This grid spacing may be a user-adjustable parameter.
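The sparse fit grid described above can be sketched as follows: one sample point per grid spacing in each direction inside the mask's bounding box, with the in-between pixels receiving weights by bilinear interpolation (not shown). This is a minimal sketch; the bounding-box convention and names are assumptions.

```python
def fit_grid(bbox, spacing=10):
    """Return the sparse regular grid of sample points inside a mask
    bounding box (x_min, y_min, x_max, y_max), one point every
    `spacing` pixels in each direction. Only these points undergo the
    gradient descent fit."""
    x0, y0, x1, y1 = bbox
    return [(x, y)
            for y in range(y0, y1 + 1, spacing)
            for x in range(x0, x1 + 1, spacing)]
```

Exposing `spacing` as a parameter mirrors the user-adjustable grid spacing mentioned above: a smaller spacing fits more points at higher cost.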
Fast Fourier transform (FFT) shift estimation: The masks, with their corresponding rectangular bounding boxes and fit grids, are copied to the subsequent frame. Forward and inverse FFTs are calculated between the reference frame and the next subsequent frame to determine the x,y shift values of the image features corresponding to each mask and bounding box. This method generates a correlation surface whose maximum provides the "best fit" position for locating the corresponding feature in the image. Each mask and bounding box is then adjusted to the appropriate x,y position in the second frame.
Fit-value calculation (gradient descent search): The FFT provides a shift vector that guides the search for the ideal mask fit using the gradient descent search method. The gradient descent search requires the translation or offset to be less than the radius of the basin surrounding the minimum of the fit-error surface. A successful FFT correlation for each mask region and bounding box creates this minimum requirement.
Search for the best fit on the error surface: The error-surface calculation of the gradient descent search method involves calculating the mean square difference between the pixels in a square fit box centered on a reference image pixel (x0, y0) in the reference image frame and those in the fit box centered on the corresponding (offset) position (x, y) in the search image frame, as shown in Figures 10A-D.
The corresponding pixel values in the two (reference and search) fit boxes are subtracted, squared, and summed/accumulated, and the square root of the resulting sum is finally divided by the pixel count of the box (pixel count = height x width = height^2) to generate the root-mean-square misfit ("error") value at the selected fit search position:

Error(x0, y0; x, y) = sqrt{ SUM_i SUM_j (reference_box(x0, y0) pixel[i, j] - search_box(x, y) pixel[i, j])^2 } / height^2
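The misfit calculation above can be sketched directly. This is a minimal sketch following the formula as stated (square root of the summed squared differences, divided by the pixel count of the square box); names and the slicing convention are assumptions.

```python
import numpy as np

def rms_misfit(ref, search, x0, y0, x, y, radius):
    """Root-mean-square misfit between the fit box of width/height
    2*radius + 1 centered at (x0, y0) in the reference frame and the
    box centered at the offset position (x, y) in the search frame."""
    r = radius
    a = ref[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(float)
    b = search[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    n = (2 * r + 1) ** 2                      # pixel count = height^2
    return np.sqrt(np.sum((a - b) ** 2)) / n
```

Identical boxes yield zero error; the gradient descent search looks for the (x, y) that minimizes this value near the FFT-estimated offset.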
Fit-value gradient: The shift-vector data derived from the FFT creates the starting fit search position, and the error-surface calculation starts from this offset position, proceeding down the (local) error-surface gradient to a local minimum of the surface, which is assumed to be the best fit. The method uses normalized mean square differences within, for example, a 10x10 box, and finds the minimum along the mean-square-error gradient to find the best fit of each next-frame pixel or pixel group based on the previous frame. This technique is similar to cross-correlation, but with the calculation restricted to a limited sample box. In this manner, the matching pixels corresponding to the mask indices in the previous frame can be checked and the resulting assignments completed.
Figures 11A-C show a second search box derived by descending the (separately evaluated) error-surface gradient, for which the evaluated error function is reduced relative to the original reference box, and possibly minimized (as is clearly visible from a visual comparison of these boxes with the reference boxes in Figures 10A-D).
The error-surface gradient is calculated according to the definition of the gradient. The vertical and horizontal error deviations are evaluated at four positions near the center position of the search box and combined to provide an estimate of the error gradient at that position. The evaluation of the gradient components is explained by means of Figure 12.
The gradient of a surface S at coordinates (x, y) is given by the directional derivatives of the surface:

Gradient(x, y) = [dS(x, y)/dx, dS(x, y)/dy],

which for the discrete case of a digital image is provided by:

Gradient(x, y) = [(Error(x+dx, y) - Error(x-dx, y))/(2*dx), (Error(x, y+dy) - Error(x, y-dy))/(2*dy)]

where dx, dy are half the fit-box width and height, also defined in terms of the fit-box "box radius": box width = box height = 2 x box radius + 1.
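The central-difference gradient estimate above can be sketched as follows; the descent step itself simply moves opposite the returned gradient. This is a minimal sketch in which `error` stands for any error function of the search position, such as the RMS misfit described earlier.

```python
def error_gradient(error, x, y, dx, dy):
    """Estimate the error-surface gradient at search position (x, y)
    by central differences, evaluating the error at four positions
    around the center; dx, dy are the fit-box half-dimensions."""
    gx = (error(x + dx, y) - error(x - dx, y)) / (2 * dx)
    gy = (error(x, y + dy) - error(x, y - dy)) / (2 * dy)
    return gx, gy
```

A descent step would then update the search position against the gradient, stopping when the gradient vanishes at the local minimum assumed to be the best fit.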
It should be pointed out that as the box radius increases, the fit-box dimensions increase, and thus the size and detail of the image features contained therein also increase; hence the larger the box, the better the potential accuracy of the fit calculation and the more data there is to process, but the compute time of each fit (error) calculation increases with the square of the radius.
If any computer-generated element mask-region pixel is found at a particular pixel x,y location, then that location is treated as the edge of the overlying operator-defined mask, and the mask fitting continues at the other pixel locations until all of the pixels of the mask have been checked.
Previous versus propagated reference images: The reference image for mask fitting is usually the adjacent frame in the sequence of film image frames. Sometimes, however, it is preferable to use the frame with the best-fitted masks as the reference image (such as a key frame mask, or the source frame from which a mask region was propagated/copied). The present embodiment provides a switch for disabling the "adjacent" reference frame in favor of the propagation reference image in cases where the frame's masks are defined by a recent propagation event.
Mask fit procedure: In the present embodiment, the operator loads n frames into the display buffer. One of the frames contains the mask that is to be propagated and fitted to all of the other frames. All or part of that mask is then propagated to every frame in the display buffer. Because the mask fitting algorithm references the prior frame in the series, or the first frame, when fitting the mask to each subsequent frame, the first frame mask and/or the prior mask must be applied precisely to the target and/or region of interest. If this is not done, mask errors accumulate and the mask fit fails. The operator displays a subsequent frame, adjusts the sampling radius of the fit, and executes a command to calculate the mask fit for the entire frame. The command may be executed by a keystroke or a mouse hot-button command.
Figure 13 shows a first-pass example of a propagated mask, where there is little difference between the underlying image data and the mask data. It can be seen that the dress mask and hand masks lie close to the image data.
Figure 14 shows the mask data adjusted to the underlying image data by the automatic mask fitting routine, which references the underlying image data of the prior image.
In Figure 15, the mask data of a later image in the sequence shows significant differences relative to the underlying image data. The eye makeup, lipstick, blush, hair, face, dress and hand image data have all shifted relative to the mask data.
As shown in Figure 16, the mask data is automatically adjusted based on the underlying image data, using the previous mask and the underlying image data. In this figure, the mask data is displayed in arbitrary colors to show the regions that were automatically adjusted based on the underlying pattern and luminance data. The blush and eye makeup were adjusted automatically on the basis of luminance and gray-scale patterns, without reference to MARG.
In Figure 17, the mask data from Figure 16 is shown with the appropriate color transforms after automatic mask fitting of the entire frame. The mask data is adjusted to fit the underlying luminance patterns based on data from the previous frame or from the initial keyframe.
Mask propagation using Bezier and polygon animation with edge snapping: Bezier curves or polygons enclosing a region of interest can be used to facilitate masks for moving targets. Multiple frames are loaded into display memory, and Bezier points and curves, or polygon points, are applied tightly around the region of interest, their points automatically snapping to edges detected in the image data. Once the target in frame 1 is enclosed by the polygon or Bezier curve, the operator adjusts the polygon or Bezier curve in the last of the frames loaded in display memory. The operator then executes a fitting routine, which snaps the polygon points, or the Bezier points plus their control curves, to all intermediate frames, animating the mask over all frames in display memory. The polygon and Bezier algorithms include control points for rotation, scaling and translation in order to handle camera zooms, pans and complex camera motions.
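As an illustration of the interpolation step only — the edge-snapping and the rotation/scale controls are omitted — a hypothetical helper such as the following could generate the intermediate polygons between the two operator-adjusted keyframes. This is a sketch under those assumptions, not the patent's implementation:

```python
def interpolate_polygon(poly_first, poly_last, n_frames):
    """Linearly interpolate polygon control points between two keyframes.

    poly_first / poly_last: lists of (x, y) points, same length and
    order.  Returns one polygon per frame, inclusive of both keyframes.
    A production tool would additionally snap each interpolated point
    to the nearest detected image edge, as the text describes."""
    frames = []
    for f in range(n_frames):
        t = f / (n_frames - 1)
        frames.append([
            (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            for (x0, y0), (x1, y1) in zip(poly_first, poly_last)
        ])
    return frames
```

For Bezier masks, the same interpolation would be applied to both the anchor points and their control points.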
In Figure 18, a polygon is used to outline a region of interest for masking in frame 1. The square polygon points snap to the edges of the target of interest. When Bezier curves are used, the Bezier points snap to the target of interest and the control points/curves fit to the edges.
As disclosed in Figure 19, the entire polygon or Bezier curve is carried to the last frame selected in display memory, where the operator adjusts the polygon points, or the Bezier points and curves, using the snap function, which automatically snaps the points and curves to the edges of the target of interest.
As shown in Figure 20, with operator interaction and adjustment, if there are significant differences between the points and curves in the frames lying between the two adjusted frames, the operator further adjusts the intermediate frame exhibiting the greatest fitting error among the multiple frames.
As shown in Figure 21, when the polygons or Bezier curves are determined to animate correctly between the two adjusted frames, the fitted masks are applied to all frames. In these figures, the polygons or Bezier curves are shown filled with an arbitrary mask color.
Figure 22 shows the masks obtained from the polygon or Bezier animation with points and curves automatically snapped to edges. The brown masks carry color transforms, and the green masks are arbitrary see-through masks. For a depth project, regions that have received depth assignments may, for example, be one color, and regions awaiting depth assignment another color.
Colorization/depth enhancement of backgrounds in feature films and television episodes: The process of applying mask information to successive frames of a feature film or television episode is known, but is laborious for several reasons. In all cases, these processes involve correcting the mask information from frame to frame to compensate for the motion of the underlying image data. Correcting the mask information includes not only re-masking the actors and other moving targets in a scene or cut, but also correcting the background and foreground information that the moving targets occlude and expose during their motion. This is especially difficult in camera pans, where the camera follows the action in the scene cut to the left, right, up or down. In such cases, the operator must not only correct for the motion of the moving targets; the operator must also correct the occlusion and exposure of the background information, plus correct for the exposure of new background information as the camera motion reveals new portions of the background and foreground. Typically, because of the extreme amount of manual labor involved, such cases substantially increase the time and difficulty of colorizing a scene cut. Embodiments of the invention include methods and processes for automatic colorization/depth enhancement of the multiple frames of scene cuts that include complex camera motion, as well as of scene cuts in which there is camera weave, or drifting camera motion that follows the erratic action of a moving target.
Camera pans: A background composite is formed for the non-moving targets associated with a pan camera sequence, which constitute the major part of the scene. To colorize/depth-enhance the many background targets of a panning sequence at once, a mosaic comprising the background targets of the entire panning sequence is created with the moving targets removed. This task is accomplished with the pan background stitcher tool. Once the background mosaic of the panning sequence has been generated, it can be colorized/depth-enhanced once and applied automatically to the individual frames, without having to manually colorize/depth-assign the background targets in every frame of the sequence.
The pan background stitcher tool generates the background image of a panning sequence using two general operations. First, the motion of the camera is estimated by calculating the transform required to align each frame in the sequence with the previous frame. Because moving targets make up a large part of the film sequence, a technique is used that minimizes the influence of moving targets on the frame registration. Second, the frames are blended into the final background mosaic, using interactively selected two-pass blending regions that effectively remove the moving targets from the final mosaic.
The background synthesis output data comprises: a grayscale (possibly color, for depth projects) image file in a standard digital format such as a TIFF image file (bkg.*.GIF), consisting of the background image of the entire pan shot with the desired moving targets removed, ready for color design/depth assignment using the mask operations already described; and the associated background text data file required for the subsequent background mask extraction once the associated background mask/colorization/depth data components (bkg.*.msk, bkg.*.lut, ...) have been established. The background text data file provides the filename, the frame position within the mosaic, and other frame dimensioning information for each constituent (input) frame associated with the background, with the following per-line (per-frame) content: frame filename, frame x position, frame y position, frame width, frame height, frame left-margin x maximum, frame right-margin x minimum. Except for the first field (the frame filename), which is a character string, every data field is an integer.
Generating the transforms: To generate the background image for a pan camera sequence, the motion of the camera is first computed. The motion of the camera is determined by examining the transform required to bring one frame into alignment with the previous frame. By calculating the motion of each pair of adjacent frames in the sequence, a transform map can be generated that provides the relative position of every frame in the sequence.
Translation between an image pair: Most image registration techniques use some form of intensity correlation. Unfortunately, methods based on pixel intensities are biased by any moving targets in the scene, making it difficult to estimate the motion due to the camera movement. Feature-based methods are also used for image registration. These methods are limited by the fact that most features occur on the boundaries of moving targets, which also gives inaccurate results for the pure camera motion. Manually selecting feature points over a large number of frames is too costly.
The registration method used in the pan stitcher uses properties of the Fourier transform in order to avoid bias toward the moving targets in the scene. The automatic registration of each frame pair is calculated and used for the final background image assembly.
Fourier transform of an image pair: The first step in the image registration process is to take the Fourier transform of each image. The camera motion can be estimated as a translation, the second image being translated by a given amount according to:

I2(x, y) = I1(x - x0, y - y0)    (1)

Taking the Fourier transform of each image in the pair yields the relation:

F2(ξ, η) = e^(-j2π(ξx0 + ηy0)) F1(ξ, η)    (2)
Phase shift calculation: The next step involves computing the phase shift between the images, which leads to an expression for the phase shift in terms of the Fourier transforms of the first and second images:

e^(-j2π(ξx0 + ηy0)) = F1*(ξ, η) F2(ξ, η) / |F1*(ξ, η) F2(ξ, η)|    (3)
Inverse Fourier transform: By taking the inverse Fourier transform of the phase-shift expression given in (3), a delta function is obtained whose peak lies at the translation of the second image:

δ(x - x0, y - y0)    (4)
Peak location: The two-dimensional surface obtained from (4) will have a maximum peak at the point of translation from the first image to the second image. By searching for the maximum value on the surface, finding the transform that represents the camera motion in the scene is simple. Although spurious peaks will appear because of moving targets, the dominant motion of the camera should be represented by the largest peak. This calculation is performed for each successive frame pair in the entire panning sequence.
Handling image noise: Unfortunately, image noise can produce spurious results that may completely alter the outcome of the transform calculation. The pan background stitcher handles these outliers using two methods that detect and correct error cases: nearest peak matching and position interpolation. If these corrections fail for a particular image pair, the stitching application provides the option of manually correcting the position of any frame pair in the sequence.
Nearest peak matching: After the transform for an image pair has been calculated, the percentage difference between this transform and the previous transform is determined. If the difference is above a set threshold, a search for a nearby peak is performed. If a closer match below the difference threshold is found, that value replaces the maximum peak.
This assumes that, for a pan shot, the motion is relatively steady, and the difference between the motions of successive frame pairs will be small. It corrects cases in which image noise produces a peak slightly higher than the true peak corresponding to the camera transform.
Position interpolation: If the nearest-peak calculation fails to produce a valid result as given by the percentage difference threshold, a position is estimated based on the result from the previous image pair. Again, this generally gives good results for a steady panning sequence, since the difference between successive camera motions should be roughly the same. The peak correlation and interpolation results are displayed in the stitching application, so manual corrections can be made if desired.
Generating the background: Once the relative camera motion of each successive frame pair has been calculated, the frames can be composited into a mosaic representing the entire background of the sequence. Since the moving targets in the scene need to be removed, different image blending options are used to effectively remove the dominant moving targets from the sequence.
Assembling the background mosaic: First, a background image buffer is generated, large enough to span the entire sequence. The background can be blended in a single pass or, if moving targets need to be removed, with the two-pass blending explained in detail below. The position and width of the blends can be edited in the stitching application, and can be set globally or separately for each frame pair. Each blend is added into the final mosaic, which is then written out as a single image file.
Two-pass blending: The purpose of two-pass blending is to eliminate moving targets from the final blended mosaic. This can be accomplished by first blending the frames so that the moving target is completely removed from the left side of the background mosaic. An example is shown in Figure 23, where a person has been removed from the scene but can still be seen on the right side of the background mosaic. In the first blending pass shown in Figure 23, the moving figure appears on the steps at the right.
Next, a second background mosaic is generated, using blend positions and widths chosen so that the moving target is removed from the right side of the final background mosaic. Such an example is shown in Figure 24, where the person has been removed from the scene but can still be seen on the left side of the background mosaic. In the second blending pass shown in Figure 24, the moving figure appears at the left.
Finally, the two blending passes are blended together to generate the final blended background mosaic, with the moving target removed from the scene. The final background corresponding to Figures 23 and 24 is shown in Figure 25; in this final blended background, the moving figure has been removed.
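A minimal sketch of merging the two passes, assuming a single scanline and a linear blend ramp (the text does not specify the blend profile):

```python
def merge_two_passes(pass1_row, pass2_row, blend_pos, blend_width):
    """Merge one scanline of the two blend passes.

    pass1_row: mover removed from the LEFT side -> use it left of
    blend_pos.  pass2_row: mover removed from the RIGHT side -> use it
    right of the blend zone.  A linear ramp of blend_width pixels
    hides the seam between the two."""
    out = []
    for x, (a, b) in enumerate(zip(pass1_row, pass2_row)):
        if x < blend_pos:
            out.append(a)
        elif x >= blend_pos + blend_width:
            out.append(b)
        else:
            t = (x - blend_pos) / blend_width
            out.append((1 - t) * a + t * b)
    return out
```

The blend position would be placed where neither pass shows the moving figure, which is why the tool lets the operator edit it per frame pair.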
To facilitate the effective removal of moving targets that may occupy different regions of the frame during the panning sequence, the stitcher application provides the option of setting the blend width and position interactively, per pass and per frame, either individually or globally. A sample screenshot from the blend editing tool, showing the first- and second-pass blend positions, can be seen in Figure 26, a screenshot of the blend editing tool.
Saving the background text data: The output text data file containing the parameter values relevant to the background mask extraction is generated from the initialization phase described above. As mentioned above, each text data record includes: frame filename, frame x position, frame y position, frame width, frame height, frame left-margin x maximum, frame right-margin x minimum.
The output text data filename is formed from the first composited input frame root name by prepending the "bkg." prefix and appending the ".txt" extension.
Example: Representative lines of an output text data file named "bkgA.00233.txt", which may include data for the 300 or more frames making up the blended image:
4.00233.GIF 0 0 1436 1080 0 1435
4.00234.GIF 7 0 1436 1080 0 1435
4.00235.GIF 20 0 1436 1080 0 1435
4.00236.GIF 37 0 1436 1080 0 1435
4.00237.GIF 58 0 1436 1080 0 1435
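A record of this file can be read with a few lines of code. The field names below are descriptive assumptions; only the field order comes from the text:

```python
def parse_background_record(line):
    """Parse one line of the background text data file.

    Field order per the text: frame filename, frame x position, frame
    y position, frame width, frame height, left-margin x maximum,
    right-margin x minimum.  Every field after the filename is an
    integer."""
    fields = line.split()
    return {
        "filename": fields[0],
        "x": int(fields[1]), "y": int(fields[2]),
        "width": int(fields[3]), "height": int(fields[4]),
        "left_x_max": int(fields[5]), "right_x_min": int(fields[6]),
    }
```

The x/y positions are what allow the single mosaic mask to be cut back out and placed onto each constituent frame.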
The image displacement information used to create the composite representation of the frame series is contained in the text file associated with the composite image, and is used to apply the single composite mask to all of the frames used to create the composite image.
In Figure 27, successive frames representing a camera pan are loaded into memory. The moving target (a butler moving left toward a door) is masked with a series of color transform information, leaving the background black and white with no mask or color transform information applied. Alternatively, for a depth project, depths and/or depth shapes may be assigned to the moving target. See Figures 42-70.
In Figure 28, six representative successive frames of the pan above are shown for clarity.
Figure 29 shows the composite or montage image of the entire camera pan, built using the phase correlation techniques. The moving target (the butler) is included as a transparent positive for reference, by retaining the first and last frames and averaging the phase correlations in both directions. The single montage representation of the pan is color designed using the same color transform masking techniques used for the foreground targets.
Figure 30 shows the frame sequence of the camera pan after the background mask color transforms of the montage have been applied to each frame used to create the montage. The masks are applied, with the appropriate offsets, only where no pre-existing mask is present, thereby applying the background information while preserving the moving-target masks and their color transform information. Alternatively, for a depth project, the right-eye and left-eye views of each frame can be shown as a pair, or the view for each eye shown, for example, in a separate window. The images may also be displayed on a three-dimensional viewing screen.
In Figure 31, for clarity, a selection of frames in the pan is shown after the colorized/depth-enhanced background mask has been automatically applied to the frames that carry no pre-existing masks.
Static and drifting camera shots: By contrast, targets in a motion picture scene cut that do not move or change relative to the moving "foreground" targets are considered "background" targets. If the camera does not move during the entire frame sequence, the associated background targets appear static for the duration of the sequence, and can be masked and colorized only once for all of the related frames. This is the "still camera" (or "static background") case, as opposed to the moving (e.g. panning) camera case described above, which requires the stitching tools to generate the background composite.
A cut or frame sequence involving little or no camera motion provides the simple case of generating a single frame-image background "composite" useful for colorizing the background of the cut. However, because even a "still" camera experiences slight vibrations for various reasons, the static background compositing tool cannot assume perfect pixel alignment from frame to frame; it needs to assess the inter-frame motion to an accuracy of 1 pixel so that the frames can be optimally registered pixel-for-pixel before their data contributions are added into the composite (average). The static background compositing tool provides this capability, extracting all of the data needed to subsequently color each registered frame with the background coloring information.
Moving foreground targets, such as actors, are masked, leaving the unmasked background and stationary foreground targets. Wherever the masked moving targets expose background or foreground, the previously occluded background and foreground instances are copied, with priority and with the appropriate offsets to compensate for the motion, into a single image. The offset information is contained in a text file associated with the single representation of the background, so that the resulting mask information can be applied to every frame in the scene cut with the appropriate mask offsets.
The background synthesis output data comprises a grayscale TIFF image file (bkg.*.GIF) containing the average of the input background pixel values, suitable for colorization/depth enhancement, and the associated background text data file required for the background mask extraction once the associated background mask/coloring-data/depth-enhancement components (bkg.*.msk, bkg.*.lut, ...) have been established. The background text data provides the filename, the mask offsets, and other frame dimensioning information for each constituent (input) frame associated with the composite, with the following per-line (per-frame) format: frame filename, frame x offset, frame y offset, frame width, frame height, frame left-margin x maximum, frame right-margin x minimum. Except for the first field (the frame filename), which is a character string, each of these data fields is an integer.
Initialization: Initialization of the static background compositing process involves initializing and gathering the data required to create the composite background image buffer. This requires a loop over all constituent input image frames. Before any composite data generation can occur, the constituent input frames must be identified and loaded, and all foreground targets must be identified/colored (i.e., marked with mask labels, to exclude them from the composite). These steps are not part of the static background compositing process itself; they occur beforehand, when the operator browses the database or directory tree, selects and loads the relevant input frames, and applies masking/depth assignment to the foreground targets, before invoking the compositing tool.
Getting the frame shifts: The background data of adjacent images in a still camera cut can exhibit small mutual vertical and horizontal offsets. Taking the first frame in the sequence as the baseline, the background image of every successive frame is compared against the first frame, matching row by row and column by column, to generate two "measurement" histograms from the horizontal and vertical offsets of all measurable image rows and columns. The modes of these histograms provide the most frequent (and most probable) estimates of the horizontal and vertical shifts of each frame [iframe], which are stored in the per-frame arrays DVx[iframe] and DVy[iframe]. These offset arrays are generated in a loop over all input frames.
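The row-by-row histogram measurement can be sketched as follows, assuming a sum-of-absolute-differences row comparison (the text does not name the row-matching metric) and illustrative function names:

```python
def estimate_row_shift(base_row, frame_row, max_shift=3):
    """Best horizontal offset of one image row relative to the
    baseline frame's row, by minimum sum-of-absolute-differences."""
    n = len(base_row)
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err = sum(abs(base_row[x] - frame_row[x + s])
                  for x in range(max_shift, n - max_shift))
        if err < best_err:
            best_err, best = err, s
    return best

def most_frequent_shift(base, frame, max_shift=3):
    """Histogram the per-row shift estimates and return the mode,
    mirroring one DVx[iframe] entry described in the text."""
    hist = {}
    for base_row, frame_row in zip(base, frame):
        s = estimate_row_shift(base_row, frame_row, max_shift)
        hist[s] = hist.get(s, 0) + 1
    return max(hist, key=hist.get)
```

The same procedure applied column by column would yield the vertical entry DVy[iframe]; taking the histogram mode rather than a mean makes the estimate robust to rows corrupted by foreground motion.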
Getting the maximum frame shifts: While looping over the input frames during initialization to generate the DVx[], DVy[] offset array data, the absolute maximum values DVxMax, DVyMax are found from the DVx[], DVy[] values. These values are needed when appropriately sizing the resulting background composite image so that it accommodates the pixels of all constituent frames without clipping.
Getting the frame margins: While looping over the input frames during initialization, an additional process is invoked to find the right edge of the left image margin and the left edge of the right image margin. Because the pixels in the margins have values of zero or close to zero, the column indices of these edges are found by taking the average image column pixel values and their variation. The edge column indices are stored in the per-frame [iframe] arrays lMarg[iframe] and rMarg[iframe], respectively.
Extending the frame shifts using the maxima: The frame shifts obtained during GetFrameShift() are relative to the "baseline" first frame of the composite frame sequence, whereas the required frame shift values are the shifts/offsets relative to the resulting background composite frame. The dimensions of the background composite frame equal the dimensions of the first constituent frame extended on all sides by vertical and horizontal margins of DVxMax and DVyMax pixels, respectively. The frame shifts must therefore include the margin widths relative to the resulting background frame, so for every iframe the following must be added to the offsets calculated against the first frame:
DVx[iframe] = DVx[iframe] + DVxMax
DVy[iframe] = DVy[iframe] + DVyMax
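The rebasing of the offsets can be sketched as follows; the helper name is an illustrative assumption:

```python
def rebase_offsets(dvx, dvy):
    """Re-express the per-frame shifts relative to the output composite.

    The composite is the first frame padded by the absolute maximum
    shift on every side, so each offset is rebased by that maximum:
    DVx[i] += DVxMax, DVy[i] += DVyMax."""
    dvx_max = max(abs(v) for v in dvx)
    dvy_max = max(abs(v) for v in dvy)
    return ([v + dvx_max for v in dvx],
            [v + dvy_max for v in dvy],
            dvx_max, dvy_max)
```

After rebasing, every offset is non-negative, so each frame lands inside a composite buffer enlarged by 2·DVxMax horizontally and 2·DVyMax vertically without clipping.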
Initializing the composite image: A frame buffer class object instance is created for the resulting background composite. The resulting background composite has the dimensions of the first input frame increased by 2*DVxMax pixels (horizontally) and 2*DVyMax pixels (vertically), respectively. The background image pixels of the first input frame (the unmasked, non-foreground pixels) are copied into the background image buffer with the appropriate offsets. For each pixel receiving an initial value, the associated pixel composite count buffer value is initialized to one (1); otherwise it is initialized to zero (0). For the processing flow for extracting a background by, for example, generating frame masks for all frames of a scene, see Figure 38A. Figure 38B illustrates the determination of the shift and margin amounts of the frames caused by, for example, a camera pan. After the shifted, masked images have been determined for, for example, each desired frame, the composite image is saved.
Figure 39A shows the determination of edge detection and snap points (1.1 and 1.2, respectively), which are detailed in Figures 39B and 39C and which enable one of ordinary skill in the art to implement an image edge detection routine via an averaging filter, a gradient filter, filling of the gradient image, and comparison against a threshold. In addition, the GetSnapPoint routine of Figure 39C shows the determination of a NewPoint based on the BestSnapPoint determined from the RangeImage at less than the indicated MinDistance.
Figures 40A-C show how a bimodal threshold tool may be implemented in one or more embodiments of the invention. The creation of the light/dark marker-shape image is realized with the MakeLightShape routine, with the light/dark values for the shape applied by the corresponding routines shown at the bottom of Figure 40A; these routines are shown in Figures 40C and 40B. Figures 41A-B illustrate the calculation of the FitValue and the gradients used in one or more of the routines above.
Composite frame loop: The input frames are composited (added) into the resulting background via a sequential loop over the frames. The background pixels of each input frame are added into the background image buffer using the relevant offsets (DVx[iframe], DVy[iframe]) for that frame, and for each pixel receiving a composite contribution, the associated pixel composite count value is incremented by one (1) (a separate composite count array/buffer is provided for this). Only background pixels, i.e., those pixels with no associated input mask index, are composited (added) into the background; pixels with non-zero (tagged) mask values are foreground pixels and are therefore excluded from the background composite; they are ignored. A status bar in the GUI is incremented on each pass of the input frame loop.
Composite completion: The final step in generating the output composite image buffer is to take the average of the pixels constituting the composite image. When the composite frame loop completes, each background image pixel value represents the sum of all contributing aligned input frame pixels. Since the resulting output pixel must be the average of these values, a division by the count of contributing input pixels is required. As mentioned, the per-pixel counts are provided by the associated pixel composite count buffer. All pixels with a non-zero composite count are averaged; the remaining pixels stay zero.
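The accumulate-and-average logic of the composite frame loop and the completion step can be sketched together as follows; the data layout and function name are illustrative assumptions:

```python
def composite_background(frames, masks, offsets, out_w, out_h):
    """Accumulate unmasked background pixels into the composite and
    average each pixel by its contribution count.

    frames:  list of 2-D pixel grids.
    masks:   same shape; a non-zero value tags a foreground pixel,
             which is excluded from the composite.
    offsets: per-frame (dx, dy) placement within the output buffer."""
    acc = [[0.0] * out_w for _ in range(out_h)]
    cnt = [[0] * out_w for _ in range(out_h)]
    for frame, mask, (dx, dy) in zip(frames, masks, offsets):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                if mask[y][x] == 0:          # background pixels only
                    acc[y + dy][x + dx] += v
                    cnt[y + dy][x + dx] += 1
    # completion: divide by the per-pixel count; untouched pixels stay 0
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0
             for x in range(out_w)] for y in range(out_h)]
```

Pixels whose count remains zero are exactly the always-occluded regions discussed later, for which no frame ever contributes background data.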
Saving the composite image: A TIFF-format output grayscale image with 16 bits per pixel is generated from the composite average background image buffer. The output filename is formed from the first composited input frame filename by prepending the "bkg." prefix (and, if necessary, appending the usual ".GIF" image extension), and the file is written to the associated background folder at path "../Bckgrnd Frm" (if applicable), otherwise to the default path (the same as the input frames).
Saving the background text data: The output text data file containing the parameter values relevant to the background mask extraction is generated by the initialization phase described in (40A-C). As mentioned in the introduction (see Figure 39A), each text data record includes: frame filename, frame x offset, frame y offset, frame width, frame height, frame left-margin x maximum, frame right-margin x minimum.
The output text data filename is formed from the first composited input frame root name by prepending the "bkg." prefix and appending the ".txt" extension, and the file is written to the associated background folder at path "../Bckgrnd Frm" (if applicable), otherwise to the default path (the same as the input frames).
Example: A complete output text data file named "bkg.02.00.06.02.txt":
C:\New_Folder\Static_Backgrounding_Test\02.00.06.02.GIF 1 4 1920 1080 0 1919
C:\New_Folder\Static_Backgrounding_Test\02.00.06.03.GIF 1 4 1920 1080 0 1919
C:\New_Folder\Static_Backgrounding_Test\02.00.06.04.GIF 1 3 1920 1080 0 1919
C:\New_Folder\Static_Backgrounding_Test\02.00.06.05.GIF 2 3 1920 1080 0 1919
C:\New_Folder\Static_Backgrounding_Test\02.00.06.06.GIF 1 3 1920 1080 0 1919
Data cleanup: The memory allocated for the data objects used by the static background compositing process is released. These include the background compositing GUI session object and its member arrays DVx[], DVy[], lMarg[], rMarg[], and the background composite image buffer object, whose contents have previously been saved to disk and are no longer needed.
Colorization/depth assignment of the composite background:
Once the background has been extracted as described above, it can be masked by an operator as a single frame.
The mask data overlying the background is transferred to the frames used to create the composite, using the background compositing offset data so that the mask for each successive frame is positioned correctly.
The background mask data is applied to each successive frame wherever there is no pre-existing mask (such as the masks of foreground actors).
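The rule above — apply the background mask only where no pre-existing mask is present, using each frame's composite offset — can be sketched as follows (illustrative function name and data layout):

```python
def apply_background_mask(frame_mask, bg_mask, dx, dy):
    """Copy the single background mask into one frame's mask buffer,
    shifted by that frame's composite offset (dx, dy), only where no
    pre-existing (foreground) mask value is present."""
    h, w = len(frame_mask), len(frame_mask[0])
    for y in range(h):
        for x in range(w):
            if frame_mask[y][x] == 0:        # keep foreground masks
                frame_mask[y][x] = bg_mask[y + dy][x + dx]
    return frame_mask
```

Run per frame with the offsets from the background text data file, this preserves the moving-target masks while filling in the background mask and its color transform/depth information everywhere else.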
Figure 32 shows a frame sequence in which all moving targets (actors) are masked with a single color transform/depth enhancement.
Figure 33 shows, for clarity, a selection of frames prior to the application of the background mask information. All moving elements have been fully masked using the automatic mask fitting algorithm.
Figure 34 shows the static background and foreground information deducting the moving target previously sheltered.In this case, with
Shelter the single representation of complete background with moving target similar mode using colour switching.It should be pointed out that before removing
The profile of scape target is seemed to be truncated and not can recognize that due to due to its across incoming frame train interval, i.e. black in frame
Object representation wherein moving target (in this case for performer) never exposes background and prospect, i.e. deleted background view data
3401 region.For the project only colouring, black objects are ignored during masked operation, because the background obtaining
Mask was only applied to all frames of the single representation for background later in the case of not having pre-existing mask.Right
In depth relevant item, can art ground or realistically reproduce the black mesh that wherein there is deleted background view data 3401
Mark, such as so that two dimensional image is converted into the information utilize in 3-D view by filling.Because these regions are that wherein pixel is not
Can borrow from the region of other frames, because they never expose in the scene, thus draw them or otherwise at that
In create credible image it is allowed to all background informations exist and for artifact-free 2 d-to-3 d conversion.For example, in order to from
There is two dimensional image from unexposed region in the scene and create artifact-free 3-D view pairing, can generate have for
The background of all or enough information needed of the background area being always blocked.Can scribble, draw, creating, computer generates
Or otherwise for example from studio obtain disappearance background image data 3401 so that include black region background
Middle have enough information, flatly to translate foreground target, and is that the edge of translation for occlusion area borrows life
The background data becoming.This allows to generate artifact-free three-dimensional image pairing, because the area being always blocked in the scene can be exposed
The horizontal translation of the foreground target in domain leads to the use of the background data newly creating rather than stretches target or make pixel deformation,
This creates the pseudomorphism of the error as human-detectable.Therefore, for the enhanced frame for depth, obtain occlusion area using foot
Enough horizontal photorealism data division ground is filled, or all occlusion areas are rendered in the region seeming true to nature enough
And the background being completely filled with (draw and colour and/or depth assignment) leads to artifact-free edge.Also respectively referring to Figure 70
With Figure 71-76 and the description that associates.The generation of deleted background data can also be utilized to the element generating along computer
Create artifact-free edge.
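The background-composite logic described above can be sketched as follows. This is a minimal illustration, not the patent's implementation (the function name and the frame/mask representation are assumptions): each background pixel is borrowed from the first frame in which no moving-object mask covers it, and pixels covered in every frame are flagged as the "black" missing-data regions 3401 that must be painted or generated.

```python
import numpy as np

def composite_background(frames, fg_masks):
    """Build a single background plate from a scene's frames.

    frames   -- list of HxW grayscale arrays
    fg_masks -- list of HxW boolean arrays, True where a moving
                foreground object covers the background in that frame

    Returns (background, never_exposed): never_exposed marks pixels
    occluded in every frame -- the 'black object' regions of missing
    background data that must be painted or generated."""
    h, w = frames[0].shape
    background = np.zeros((h, w))
    filled = np.zeros((h, w), dtype=bool)
    for frame, mask in zip(frames, fg_masks):
        borrow = ~mask & ~filled      # background newly exposed in this frame
        background[borrow] = frame[borrow]
        filled |= borrow
    return background, ~filled
```

Pixels reported in `never_exposed` are exactly those that cannot be borrowed from any frame and therefore need artistically generated data.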
Figure 35 shows the successive frames of a still-camera scene cut after the background mask information has been applied to each frame with the appropriate offsets and with no pre-existing mask information.
Figure 36 shows a representative sample of frames from the still-camera scene cut after the background information has been applied with the appropriate offsets and with no pre-existing mask information.
Color rendering: After color processing is completed for each scene, the sequential or successive colored motion masks and their associated lookup tables are combined within 24-bit or 48-bit RGB color space and rendered as TIF or TGA files. These uncompressed high-resolution images are then rendered to various media such as HDTV, 35mm negative film (via a digital film scanner), or a variety of other standard and non-standard video and film formats for viewing and exhibition.
Workflow:
Digitization, stabilization and noise reduction:
1. The 35mm film is digitized to 1920x1080x10 in any of several digital formats.
2. Each frame undergoes standard stabilization techniques to minimize the natural weave of the film as it moves through the camera sprockets, as well as any other appropriate digital telecine technique. Frame-differencing techniques are also employed to further stabilize the image stream.
3. Each frame then undergoes noise reduction to minimize random film grain and electronic noise that may have entered during the acquisition process.
Pre-production: decomposition of the film into camera elements and creation of a visual database:
1. Each scene of the film is broken down into background and foreground elements and moving objects using various subtraction, phase-correlation and focal-length estimation algorithms. Background and foreground elements can include computer-generated elements or elements present in the original film material.
2. The backgrounds and foreground elements of camera pans are combined into a single frame using uncompensated (lens) stitching routines.
3. Foregrounds are defined as elements that move in the same direction as the background but that, because of their proximity to the camera lens, may represent objects and/or regions with faster motion vectors. In this method a pan is reduced to a single representative image containing all of the background and foreground information taken from multiple frames.
4. Zooms are sometimes handled as tiled databases in which a matrix is applied to key frames, with reference vector points corresponding to feature points in the image and to feature points on the applied masks overlying the composite mask that covers any distortion.
5. During a pan, a database is created from the frames making up the single representative or composite frame, assigning each common and novel pixel to the frames from which it was derived or which share it.
6. In this way the mask overlays representing the underlying lookup tables are correctly assigned to the corresponding novel and common pixel representations of the backgrounds and foregrounds in the respective frames.
Pre-production design, background design:
1. The entire background is colorized/depth-assigned as a single frame from which all moving objects have been removed. Background masking is accomplished using routines that include standard paint, fill, digital airbrush, transparency, texture-mapping and similar tools. Color selection is accomplished using 24-bit color lookup tables that automatically adjust to match the density and luminance of the underlying gray scale. Depth assignment is accomplished within the single composite frame by assigning depths, assigning geometry, entering numeric values for objects, or in any other manner. In this way, creatively selected colors/depths are applied that are appropriately mapped to the gray-scale/depth range beneath each mask. The standard color wheel used for selecting color ranges detects the dynamic range of the underlying gray scale and determines the corresponding color ranges from which the designer can select (i.e., those color saturations whose luminance matches the gray-scale luminance beneath the mask).
2. Each lookup table permits a multiplicity of colors to be applied to the gray-scale range beneath a mask. The assigned colors automatically adjust according to luminance and/or according to pre-selected color vectors, compensating for changes in the density and luminance of the underlying gray scale.
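The luminance-compensated lookup table described above can be illustrated with a minimal sketch (the function name and the linear luminance model are assumptions made for illustration): each gray level maps to the designer's chosen color vector scaled by luminance, so the applied color automatically tracks density and brightness changes in the underlying gray scale.

```python
import numpy as np

def luminance_lut(hue_rgb, levels=256):
    """Build a lookup table for one mask region: each underlying gray
    level maps to the designer's color vector scaled by luminance, so
    density/brightness changes in the gray scale are compensated
    automatically."""
    gray = np.linspace(0.0, 1.0, levels)[:, None]   # one row per gray level
    hue = np.asarray(hue_rgb)[None, :]              # designer's color vector
    return np.rint(gray * hue * 255).astype(np.uint8)
```

Applying the table per pixel is then a single indexing operation, e.g. `lut[gray_frame]` for an 8-bit grayscale frame under the mask.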
Pre-production design, design of moving elements:
1. A design moving-object frame is created that includes the entire scene background together with a single representative moment of the motion of every person and element moving within the scene. These moving, non-background elements are referred to as design frame objects (DFOs).
2. Each DFO is broken down into design regions of interest (regions of interest), with special attention to elements within the depth of field that exhibit distinct contrast and that can readily be separated using various gray-scale and luminance analyses (e.g., pattern recognition and/or edge-detection routines). Because existing film color can be used for depth enhancement, regions of interest can be selected with color taken into account.
3. The underlying gray-scale and luminance distribution of each masked region, together with other gray-scale analyses including pattern analysis, is graphically displayed from the region's shape with area, perimeter and various weighting parameters.
4. Color selection is determined for each region of interest of each object based on appropriate research into the film type, period, creative intent, etc., using 24-bit color lookup tables that automatically adjust to match the density and luminance of the underlying gray scale, and appropriate, creatively selected colors are applied. The standard color wheel detects the underlying gray-scale range and limits the designer to selecting only those color saturations whose luminance matches the gray-scale luminance beneath the mask. For depth projects, depth assignments are made or adjusted, for example until realistic depth is obtained.
5. This process continues until reference design masks have been created for all moving objects in the scene.
Pre-production design, key-frame objects, assistant designer:
1. Once all of the color selections/depth assignments have been substantially completed for a particular scene, the design moving-object frame serves as the reference for creating the larger number of key-frame objects in the scene.
2. Key-frame objects (all moving elements in the scene excluding background elements, e.g., people, cars, etc.) are selected for masking.
3. The determining factor for each successive key-frame object is the amount of new information between one key frame and the next key-frame object.
Method for colorizing/depth-enhancing moving elements in successive frames:
1. A production colorist (operator) loads multiple frames into a display buffer.
2. One of the frames in the display buffer will be a key frame from which the operator obtains all masking information. The operator makes no creative or color/depth decisions, since all color-transform information is encoded in the key-frame masks.
3. The operator can, however, switch from the colored or applied lookup tables to semi-transparent masks differentiated by arbitrary high-contrast colors.
4. The operator can view the motion of all frames in the display buffer, observing the motion occurring in successive frames, or can step through the motion from one key frame to the next.
5. The operator propagates (copies) the key-frame mask information to all frames in the display buffer.
6. The operator then executes the mask-fitting routine on each successive frame. Figure 37A shows the general mask-fitting process flowchart, which is broken down in the subsequent detailed flowcharts of Figures 37B and 37C. This program makes a best fit based on gray-scale/luminance and edge parameters, and performs pattern recognition based on the gray-scale and luminance patterns of the key frame, or of the previous frame, in the display. For computer-generated elements the mask-fitting routine is skipped, since the mask or alpha defines digitally created (and thus not operator-defined) edges that precisely delineate the boundary of the computer-generated element. The mask-fit operation takes the computer-generated element's mask or alpha into account and stops when it hits the edge of a computer-generated element's mask, since such boundaries are treated as exact per step 3710 of Figure 37C regardless of gray level. This improves the accuracy of mask edges, for example when reshaping an operator-defined mask where a computer-generated element has the same basic luminance. As shown in Figure 37A, mask fitting initializes the region and the fit-grid difference parameters, then calls the calculate-fit-grid routine, and then interpolates the mask over the fit grid; these routines execute on any computer described herein, the computer being specifically configured to calculate the fit grid as defined in Figures 37B and 37C. Figure 37B shows the processing flow from the initialize-region routine, through initialization of the image rows, image columns and reference image, into the CalculateFitValue routine, which in turn calls the fit-gradient routine; the fit-gradient routine computes xx and yy as the differences from xfit and yfit, and the gradients for x and y. If the FitValue for x, y and xx, yy is greater than fit, the xfit and yfit values are stored in the FitGrid. Otherwise, processing returns to continue at the fit-gradient routine with new values of xfit and yfit. When processing of the grid size is complete for x and y, the mask is interpolated per Figure 37C. After initialization, the FitGridCell indices i and j are determined and bilinear interpolation is performed at the fitGridA-D positions, with the mask fit looking, at 3710, for any boundary of a CG element (i.e., a known alpha boundary, or a boundary having, e.g., a depth value defining a digitally rendered element, which is treated as an exact, error-free mask boundary). Mask fitting continues until the mask dimensions defined by xend and yend are reached.
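The fit-grid search and bilinear mask interpolation of Figures 37B-37C might be sketched as follows. This is an illustrative simplification, not the patented routine: an exhaustive local search stands in for the fit-gradient iteration, and the FitValue is a plain sum of absolute gray-level differences.

```python
import numpy as np

def fit_grid_offsets(ref, frame, points, search=4, half=3):
    """For each grid point, search a small window for the (dy, dx)
    offset that maximizes gray-level agreement with the reference
    frame (the FitValue maximization); returns one offset per point."""
    offsets = []
    for (y, x) in points:
        ref_patch = ref[y - half:y + half + 1, x - half:x + half + 1]
        best, best_val = (0, 0), -np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                p = frame[y + dy - half:y + dy + half + 1,
                          x + dx - half:x + dx + half + 1]
                if p.shape != ref_patch.shape:
                    continue                       # window left the image
                val = -np.abs(p - ref_patch).sum() # higher = better fit
                if val > best_val:
                    best_val, best = val, (dy, dx)
        offsets.append(best)
    return offsets

def bilerp(fit_a, fit_b, fit_c, fit_d, u, v):
    """Bilinear interpolation between four fit-grid cell corners
    (the fitGridA-D positions), with u and v in [0, 1]."""
    top = (1 - u) * np.asarray(fit_a) + u * np.asarray(fit_b)
    bot = (1 - u) * np.asarray(fit_c) + u * np.asarray(fit_d)
    return (1 - v) * top + v * bot
```

In use, the sparse grid of best-fit offsets is interpolated with `bilerp` to move every mask pixel, which is far cheaper than searching at every pixel.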
7. If motion creates large displacements of a region from one frame to the next, the operator can elect to fit the mask to a single region. The displaced region is moved to the approximate location of the region of interest, where the program attempts to create a best fit. This routine continues for each region of interest in turn until all mask regions have been applied to the moving objects in all successive frames in display memory.
a. The operator clicks on the single mask in each successive frame over the corresponding region belonging to frame 2. The computer makes a best fit based on gray scale/luminance, edge parameters, gray-scale pattern and other analyses.
b. This routine continues region by region until all regions of interest in frame 2 have been relocated.
c. The operator then indicates completion with a click, and the gray-scale parameters of the masks in frame 2 are compared with those in frame 3.
d. This operation continues until all frames between two or more key frames are completely masked.
8. Where occlusion is present, modified best-fit parameters are used. Once the occlusion has passed, the operator uses the pre-occlusion frames as the reference for the frames following the occlusion.
9. After all motion is complete, the background/set masks are applied to each frame in turn. The application rule is: apply the mask wherever no mask already exists.
10. Masks for moving objects can also be animated using Bezier curves or polygons that enclose a region of interest.
a. Multiple frames are loaded into display memory, and Bezier points or polygon points are applied near the region of interest, the points automatically snapping to edges detected in the image data.
b. Once the polygon or Bezier curve encloses the object in frame 1, the operator adjusts the polygon or Bezier curve in the last frame loaded in display memory.
c. The operator then executes a fitting routine that snaps the polygon or Bezier points, plus their control curves, to all intermediate frames, animating the mask across all frames in display memory.
d. The polygon and Bezier algorithms include control points for rotation, scaling and overall movement so that zooms, pans and complex camera motion can be handled where necessary.
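The polygon/Bezier animation of step 10 can be reduced to its core idea in a short sketch (a plain linear tween under the assumption of straight-line point motion; the patent's edge snapping and control-curve handling are omitted):

```python
def tween_points(p_start, p_end, n_frames):
    """Linearly tween polygon/Bezier control points from the first to
    the last frame loaded in display memory; each intermediate frame
    receives a proportionally interpolated mask outline (snapping the
    points to detected image edges would follow as a refinement)."""
    frames = []
    for k in range(n_frames):
        t = k / (n_frames - 1)
        frames.append([((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                       for (x0, y0), (x1, y1) in zip(p_start, p_end)])
    return frames
```

The operator thus adjusts only the first and last outlines; every in-between frame gets its mask for free.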
Figure 42 shows two image frames, separated in time by several frames, of a person with a floating crystal ball, in which the various objects in these image frames are to be converted from two-dimensional objects into three-dimensional objects. As shown, by the second frame (shown at the bottom) the crystal ball has moved relative to the first frame (shown at the top). Because the frames are related to one another, most of the masking information is usable for both frames even though they are separated in time, e.g., through reshaping using the embodiments of the invention described above. For example, using the mask-reshaping techniques described above for colorization, the underlying gray scale is used to track and reshape the masks, eliminating most of the labor involved in converting a two-dimensional movie into a three-dimensional movie. This follows from the fact that once the key frames have color or depth information applied to them, the mask information can be propagated automatically through the entire frame sequence, eliminating the need to adjust, e.g., wire-frame models. Although only two images are shown for clarity, these images are separated in time by several other images in which the crystal ball slowly moves to the right.
Figure 43 shows the masking of a first object in the first image frame for converting the two-dimensional image into a three-dimensional image. In this figure the first object masked is the crystal ball. Objects need not be masked in any particular order. In this case a simple free-form drawing tool is used to apply a somewhat circular mask to the crystal ball. Alternatively, a circular mask could be dropped onto the image, resized and moved into the correct position to correspond to the circular crystal ball. However, since most objects to be masked are not simple geometric shapes, the alternative method is illustrated here. The gray-scale values of the masked object are thus used to reshape the mask in subsequent frames.
Figure 44 shows the masking of a second object in the first image frame. In this figure, the hair and face of the person behind the crystal ball are masked as the second object using the free-form drawing tool. Edge detection or gray-scale thresholding can be used to set the mask edges precisely, as previously described above with respect to colorization. Objects need not be simple: the person's hair and face can be masked as a single item or not, and depth can accordingly be assigned to both together or separately as desired.
Figure 45 shows the first image frame with two see-through masks, allowing the masked portions of the image to be viewed. This figure shows the masks as colored transparent masks so that the masks can be adjusted if desired.
Figure 46 shows the masking of a third object in the first image frame. In this figure, the hand is selected as the third object. The free-form tool is used to define the shape of the mask.
Figure 47 shows the first image frame with three see-through masks, again allowing the masked portions of the image to be viewed. Again, these masks can be adjusted if desired based on the transparent masks.
Figure 48 shows the masking of a fourth object in the first image frame. As shown, the person's jacket forms the fourth object.
Figure 49 shows the masking of a fifth object in the first image frame. As shown, the person's sleeve forms the fifth object.
Figure 50 shows the control panel for creating three-dimensional images, including the association of masks for layers and three-dimensional objects with image frames; it specifically shows creation of a plane layer for the sleeve of the person in the image. On the right side of the screen dump the "Rotation" button is enabled and a "Translate Z" rotation amount is shown, indicating that the sleeve is rotated forward as shown in the next figure.
Figure 51 shows a three-dimensional view of the various masks shown in Figures 43-49, in which the mask associated with the person's sleeve is shown, on the right side of the page, as a plane layer rotated toward the left and right viewpoints. Likewise, as shown, the masks associated with the jacket and face are assigned a Z-dimension, or depth, in front of the background.
Figure 52 shows a slightly rotated view of Figure 51, showing the rotated plane layer of the sleeve tilting toward the viewpoint. The crystal ball is shown as a flat object still in two dimensions, as no three-dimensional object type has yet been assigned to it.
Figure 53 shows a slightly rotated view of Figure 51 (and Figure 52) in which the sleeve is shown rotated forward; again, no wire-frame model is ever defined for the sleeve. Alternatively, a cylindrical object type could be applied to the sleeve to form an even more realistic three-dimensional shape. Here the planar type is shown for clarity.
Figure 54 shows the control panel, specifically showing creation of a sphere object for the crystal ball in front of the person in the image. In this figure, clicking the "Create and Select" button in the middle of the panel creates the sphere object and drops it into the three-dimensional image, where it is shown (after being translated and resized onto the crystal ball in the next figure).
Figure 55 shows the sphere object applied to the flat mask of the crystal ball, shown within the sphere and projecting to the front and back of the sphere to illustrate the depth assigned to the crystal ball. The sphere object can be translated, i.e., moved along three axes, and resized to fit the object with which it is associated. The projection of the crystal ball onto the sphere shows the sphere object slightly larger than the crystal ball; this, however, ensures that every crystal-ball pixel is assigned a depth. The sphere object can also be resized to the actual size of the sphere, if desired, for finer work.
Figure 56 shows a top view of the three-dimensional representation of the first image frame, showing the Z-dimension assigned to the crystal ball and indicating that the crystal ball lies in front of the person in the scene.
Figure 57 shows the sleeve plane rotated in the X-axis so that the sleeve appears to project farther out of the image. The circle with the line (the X-axis line) projecting through it defines the rotation plane of the three-dimensional object, here the plane associated with the sleeve mask.
Figure 58 shows the control panel, specifically showing creation of a head object to be applied to the face in the image, enabling realistic depth to be given to the face without requiring, e.g., a wire-frame model. The head object is created using the "Create and Select" button in the middle of the screen and is shown in the next figure.
Figure 59 shows the head object in the three-dimensional view, still too large and unaligned with the actual head. After the head object is created per Figure 58, it appears in the three-dimensional view as a generic depth primitive generally applicable to heads. This follows from the fact that exact depth information is not critical to the human eye; hence, generic depth primitives can be used in depth assignment, eliminating the need for three-dimensional wire frames. As detailed below, the head object is translated, rotated and resized in the subsequent figures.
Figure 60 shows the head object in the three-dimensional view resized to fit the face and adjusted, e.g., translated, to the position of the actual head.
Figure 61 shows the head object in the three-dimensional view with the Y-axis rotation indicated by the circle, the Y-axis having its origin at the person's head, thereby allowing the head object to be rotated correctly to correspond to the orientation of the face.
Figure 62 shows the head object also rotated slightly clockwise about the Z-axis to correspond to the slight tilt of the person's head. The mask shows that the fit to the face need not be exact for the resulting three-dimensional image to be believable to the human eye. More exacting rotation and resizing can be used if desired.
Figure 63 shows the masks propagated into the second and final image frame. All of the methods previously disclosed above for moving masks and reshaping them apply not only to colorization but also to depth enhancement. Once the masks have been propagated into another frame, all frames between the two frames can therefore be tweened. Tweening between frames thus applies the depth information (and, if the film is not already in color, the color information) to the non-key frames.
Figure 64 shows the original position of the mask corresponding to the person's hand.
Figure 65 shows the reshaping of the mask, which is performed automatically and, if desired, can be adjusted manually in the key frames; any intermediate frames obtain tweened depth information between the first-image-frame mask and the second-image-frame mask. Motion tracking and reshaping of the masks allows enormous labor savings. Manual refinement of the masks, where desired, allows for precise work.
Figure 66 shows, in the lower image, the missing information for the left viewpoint highlighted in color along the left side of the masked object when the foreground object (here the crystal ball) is moved to the right. In generating the left viewpoint of the three-dimensional image, the highlighted data must be generated to fill the information missing from that viewpoint.
Figure 67 shows, in the lower image, the missing information for the right viewpoint highlighted in color along the right side of the masked object when the foreground object (here the crystal ball) is moved to the left. In generating the right viewpoint of the three-dimensional image, the highlighted data must be generated to fill the information missing from that viewpoint. Alternatively, a single camera viewpoint can be offset from the original camera viewpoint; the missing data for that new viewpoint, however, is large. This approach can be used where a large number of frames exist and, e.g., some of the missing information can be found in neighboring frames.
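The viewpoint-generation step just described, horizontally translating a masked foreground layer and flagging the newly exposed gap, can be sketched as follows (a single-layer, integer-disparity simplification; names are assumptions):

```python
import numpy as np

def shift_layer(image, mask, disparity):
    """Horizontally translate a masked foreground layer by its
    disparity (in pixels) to synthesize one eye's view. Pixels the
    layer uncovers are flagged as missing and must be filled from
    the generated background rather than stretched."""
    h, w = image.shape
    out = image.copy()
    missing = np.zeros((h, w), dtype=bool)
    out[mask] = 0                      # lift the foreground off the plate
    missing[mask] = True               # its old footprint is now a gap
    ys, xs = np.nonzero(mask)
    nx = xs + disparity
    ok = (nx >= 0) & (nx < w)
    out[ys[ok], nx[ok]] = image[ys[ok], xs[ok]]   # paste at the new spot
    missing[ys[ok], nx[ok]] = False
    return out, missing
```

The `missing` map corresponds to the colored highlight regions of Figures 66 and 67: exactly the pixels for which believable data must be generated.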
Figure 68 shows the final depth-enhanced stereoscopic image of the first image frame, viewable with red/blue 3-D glasses. The original two-dimensional image is now shown in three dimensions.
Figure 69 shows the final depth-enhanced stereoscopic image of the second and last image frame, viewable with red/blue 3-D glasses; note the rotation of the person's head, the motion of the hand and the motion of the crystal ball. The original two-dimensional image is now shown in three dimensions, since in this subsequent frame of the image sequence the masks have been tracked/reshaped using the mask tracking/reshaping described above and the depth information has been applied to the moved/reshaped masks. As described above, the operations for applying depth parameters to subsequent frames execute on a general-purpose computer having a central processing unit (CPU), memory and a bus between the CPU and the memory, the general-purpose computer being, for example, specifically programmed to do so; the figures herein showing computer screen displays are intended to represent such a computer.
Figure 70 shows the right side of the crystal ball "smeared" with a fill pattern where information is missing for the left viewpoint; that is, pixels along the right side of the crystal ball are taken from the right edge of the missing image pixels and "smeared" horizontally to cover the missing information. Any other data-fill method for hidden areas is in keeping with the spirit of the invention. Stretching or smearing the pixels where information is missing creates artifacts that a human observer can identify as errors. By obtaining or otherwise creating realistic data for the missing information, for example a generated background in which the missing information has been filled, such missing-data fill methods, and hence the artifacts, can be avoided. For example, synthesized backgrounds or frames in which all of the missing information has been supplied in a plausible manner, drawn or painted by an artist, are one method of obtaining missing information for use in a two-dimensional to three-dimensional conversion project.
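The horizontal "smear" of Figure 70 is simple enough to state in a few lines (a one-scanline sketch; the function name is an assumption): each missing pixel copies its right-hand neighbor, propagating the gap's right edge leftward. It works, but it produces exactly the visible artifacts that the generated-background approach avoids.

```python
import numpy as np

def smear_fill(row, missing):
    """Fill missing pixels in a scanline by smearing the pixel at the
    right edge of each gap leftward across the gap (the stretch or
    'smear' of Figure 70)."""
    out = row.copy()
    for x in range(len(row) - 2, -1, -1):   # sweep right to left
        if missing[x]:
            out[x] = out[x + 1]             # copy the right neighbor
    return out
```

Replacing this fill with realistic generated background data, as described above, is what eliminates the human-detectable errors at occlusion edges.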
Figure 71 shows the mask, or alpha plane, for a given frame of a scene, covering an actor's upper body and head 7101 and translucent wings 7102. Masks can include opaque areas shown in black and partially transparent areas shown in gray. The alpha plane can be generated, for example, as the 8-bit gray-scale "OR" of all of the foreground masks. Any other method of generating a foreground mask with the moving objects, or a foreground-object-related mask definition, is in keeping with the spirit of the invention.
Figure 72 shows the occluded area, i.e., the missing background image data 7201 in the colored sub-region of the actor of Figure 71, which never uncovers the underlying background; this is where the missing information in the background of the scene or frame occurs. This area is the region of the background that is never exposed in any frame of the scene, and thus cannot be borrowed from another frame. When generating a synthesized background, for example, any background pixel not covered by a moving-object mask or foreground mask can be assigned a simple Boolean TRUE value, all other pixels thus being occluded pixels, as also shown in Figure 34.
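The two constructions just described, the alpha plane as the gray-scale "OR" of the foreground masks and the always-occluded region as the pixels covered in every frame, can be sketched directly (function names are assumptions):

```python
import numpy as np

def composite_alpha(masks):
    """8-bit alpha plane as the 'OR' of all foreground masks for a
    frame: per pixel, the maximum opacity of any foreground element
    (opaque 255; semi-transparent mid-gray values survive)."""
    return np.maximum.reduce(masks)

def always_occluded(fg_masks):
    """Pixels covered by a foreground mask in every frame of the
    scene -- the region that can never be borrowed from another frame
    (Figure 72) and must therefore be generated."""
    return np.logical_and.reduce(fg_masks)
```

Using max rather than a binary OR is what lets partially transparent regions, such as the wings of Figure 71, keep their intermediate opacity in the combined plane.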
Figure 73 shows the occluded area with generated data 7201a for the missing background image data, artistically drawn or otherwise rendered to generate a complete and realistic background for use in an artifact-free two-dimensional to three-dimensional conversion. See also Figure 34 and its description. As shown, Figure 73 also has masks drawn over the background objects, shown in colors differing from the source image. This allows, for example, colorization or colorization adjustments as desired.
Figure 73A shows the occluded area with partially drawn or otherwise rendered missing background image data 7201b, generating a background realistic enough for use in an artifact-free two-dimensional to three-dimensional conversion. In this example, an artist can draw a narrower version of the occluded area so that when the second view is projected, i.e., when the foreground object is translated horizontally to expose the occluded area, the offset foreground object has enough realistic background to work against. In other words, the edges of the missing background image data area can be drawn inward horizontally just far enough to allow some, or all, of the generated data to be used in generating the second viewpoint for the three-dimensional image set.
In one or more embodiments of the invention, a number of scenes from a movie can be generated via computer graphics, for example by artists, or sent to artists for background completion. In one or more embodiments, a website can be hosted on a computer system coupled, for example, with the Internet, on which artists can bid for background-completion projects. Any other method of obtaining a background with enough information to render a two-dimensional frame into a 3D viewpoint pair is in keeping with the spirit of the invention, including complete backgrounds rendered with realistic data for all of the occluded areas of Figure 72 (as shown in Figure 73), or only for portions of the edges of the occluded areas of Figure 72 (as shown in Figure 73A). By estimating the background depth and the depth to the foreground objects, and knowing the offset distance desired for the two viewpoints, it is thus possible to obtain less than all of the occluded areas for use in an artifact-free two-dimensional to three-dimensional conversion. In one or more embodiments, a constant offset (e.g., 100 pixels on each edge of each occluded area) or a percentage of the size of the foreground object (e.g., 5%) can be flagged for creation; if more data is needed, the frame can be flagged for update, or smearing or pixel stretching can be used to minimize the artifacts of the missing data.
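The sizing rule just given, generate at least a constant per-edge offset or a fraction of the foreground object's size, whichever is larger, reduces to a one-line calculation (the defaults are the example values from the text, not fixed requirements):

```python
def fill_width(fg_width_px, constant_px=100, fraction=0.05):
    """How far inward (in pixels) the generated background should
    extend past each occlusion edge: the larger of a constant offset
    per edge or a fraction of the foreground object's size."""
    return max(constant_px, int(fg_width_px * fraction))
```

A frame whose actual exposure exceeds this budget would then be flagged for update, or fall back to smearing, as described above.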
Figure 74 shows a highlighted area at the right side of the shoulder of Figure 71 where missing background image data 7201 exists when generating the right viewpoint for the right image of a three-dimensional image pair. The missing background image data 7201 represents the gap created when the foreground object is moved to the left to create the right viewpoint, a gap otherwise filled using stretching (also shown in Figure 70) or other artifact-generating techniques. The dark portion of the figure is taken from background for which data is available in at least one frame of the scene.
Figure 75 shows an example of stretched, or "smeared", pixels 7201c corresponding to the highlighted area (i.e., missing background image data 7201) of Figure 74, whereby, when the generated background is not used, pixels can be created in the areas occluded in all frames of the scene for which no background data is available.
Figure 76 shows the result of a right viewpoint with no artifacts at the edge of the person's shoulder, obtained by using the generated data 7201a (or 7201b) for the missing background image data 7201 of the always-occluded region of the scene.
Figure 77 shows an example of the element (herein for robot 7701) that computer generates, and it is in three dimensions
Model and be projected as two dimensional image.Background is Lycoperdon polymorphum Vitt to show invisible area.As shown in following in figure, such as α, cover
The metadata of mould, depth or its combination in any etc be utilized to accelerate from two dimensional image to for right and left eyes so that three-dimensional see
The transformation process of the two dimensional image pairing seen.With handss or even to shelter this personage in a computer-assisted way by operator be pole
For time-consuming, because exist that depth (and/or color) is correctly rendered to hundreds of needed for this complex target (if not
Thousands of) sub- mask.
Figure 78 shows the computer-generated element (robot 7803, whose depth is configured automatically from imported depth metadata), together with its imported color and depth, and the original image separated into background 7801 and foreground elements 7802 and 7803 (the mountains and sky in the background, and the soldiers at the lower left; see also Figure 79). Although the soldiers are present in the original image, their depths are set by an operator, and masks, generally shaped to the varying depth of the original objects, are applied at those depths to obtain a stereoscopic image pair for left-eye and right-eye viewing (see Figure 79). Any occluded region of the scene, such as the region within background outline 7804 (the projection of a soldier's head onto the background), can be reproduced artistically, for example to provide believable missing data, as shown in Figure 73 based on the missing data of Figure 73A, which yields artifact-free edges such as those shown in Figure 76. Importing data for a computer-generated element can include reading depth information on a per-pixel basis for the computer-generated element 7701 and displaying that information, in perspective, on a computer display as an imported element such as robot 7803. This import process saves a great deal of operator time and makes the conversion of a two-dimensional film into a three-dimensional film economically feasible. One or more embodiments of the invention store the masks and imported data in computer memory and/or on a computer disk drive for use by one or more computers in the conversion process.
Figure 79 shows the mask 7901 associated with the photographed soldier 7802 in the foreground (forming part of the helmet of the rightmost soldier). Mask 7901, together with all of the other operator-defined masks for the soldiers (shown in various artificial colors), is applied at depths in front of the depth of the computer-generated element (i.e., robot 7803) to the different parts of the soldiers appearing in the original image. The dotted lines extending horizontally from mask regions 7902 and 7903 show that a horizontal translation of the foreground objects has occurred, and illustrate that when metadata exists for other elements of the film, the imported metadata can be used to automatically correct places where depth or color has been painted beyond the precisely masked object. For example, when an alpha channel exists for an object appearing in front of a computer-generated element, the edge can be determined precisely. One type of file that can be used to obtain mask edge data is a file containing alpha and/or mask data, such as an RGBA file (see Figure 80). In addition, generating background data for the regions left uncovered by the horizontal translation of mask regions 7902 and 7903 allows an artifact-free two-dimensional to three-dimensional conversion.
Figure 80 shows an imported alpha layer, shown as a light-blue overlay well fitted to the three soldiers 7802 (referred to as soldiers A, B, and C), which can also be used as a mask layer to constrain the edges of operator-defined, and possibly less accurate, masks. Furthermore, optional computer-generated elements such as dust can be inserted into the scene along the line labeled "dust" to increase the realism of the scene, if desired. Any of the background, foreground, or computer-generated elements can be used to fill the final left and right image pair as needed.
Figure 81 shows the result of using an unadjusted, operator-defined mask on a computer-generated element, such as the robot, when a moving element, such as a soldier, passes in front of it. Without the metadata associated with the original image objects, such as a matte or alpha 8001, artifacts appear where the operator-defined mask does not align exactly with the edges of the occluding object. In the uppermost picture, the soldier's lip shows a light border 8101, while the lower picture shows an artifact-free edge, because the alpha of Figure 80 is used to constrain the edge of any operator-defined mask. Applying the alpha metadata of Figure 80 to the operator-defined mask edges of Figure 79 thus allows artifact-free edges in the overlapping regions. Those skilled in the art will understand that successive elements, combined with their alphas, are layered from back to front at their respective depths across all object hierarchies to create the final image pair for left-eye and right-eye viewing.
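The alpha-constrained masking described above can be sketched as follows. This is an illustrative Python fragment, not the patented implementation; the function name and one-dimensional mask representation are invented for clarity:

```python
# Hypothetical sketch: constrain a hand-drawn operator mask with an
# imported alpha channel (e.g. from an RGBA file) so the mask edge
# never extends past the true edge of the computer-generated element.
# Masks and alpha are rows of coverage values in [0.0, 1.0].

def constrain_mask(operator_mask, alpha, threshold=0.5):
    """Zero out mask pixels wherever the imported alpha indicates the
    object is not actually present (alpha below threshold)."""
    return [m if a >= threshold else 0.0
            for m, a in zip(operator_mask, alpha)]

# A sloppy operator mask that overshoots the object on both sides:
mask  = [0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
alpha = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]  # true extent from the alpha file

print(constrain_mask(mask, alpha))  # [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]
```

The overshooting mask pixels are clipped to the alpha edge, which is the effect that removes the light border 8101 of Figure 81.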
Embodiments of the invention allow 3D images to be edited in real time, without re-rendering, by generating translation files that serve as portable per-pixel edit files, for example to change layers/colors/masks and/or to remove artifacts, thereby minimizing or eliminating the iterative workflow paths that return work to different work groups. For example, a masking group takes the source images and creates masks for the items, regions, or human-recognizable objects in every frame of the image sequence that makes up the film. A depth augmentation group applies depth, and shapes for example, to the masks created by the masking group. When the image pair is rendered, left and right viewpoint images and left and right translation files can be generated by one or more embodiments of the invention. The left and right viewpoint images allow 3D viewing of the original 2D image. The translation files, for example in the form of UV or U maps, specify the pixel offset for each source pixel in the original 2D image. These files are generally associated with the alpha mask for each layer, for example a layer for an actress, a layer for a door, a layer for the background, and so on. These translation files or maps are passed from the depth augmentation group that renders the 3D images to a quality assurance work group. This allows the quality assurance work group (or another work group, such as the depth augmentation group) to perform real-time editing of the 3D images without re-rendering, for example to change layers/colors/masks and/or to remove artifacts such as masking errors, without the processing time required to re-render, and without the delay of the iterative workflow associated with sending masks back to the masking group for rework and re-rendering — a masking group that may be located in a third-world country with unskilled labor on the opposite side of the earth. In addition, when the left and right images, i.e., the 3D images, are rendered, the Z-depth of regions of the image, such as an actor, can also be transferred to the quality assurance group together with the alpha masks, and the quality assurance group can then also adjust depth without re-rendering with the original rendering software. This can be performed, for example, using the generated missing background data from any layer, so as to allow "downstream" real-time editing without, for example, re-rendering or ray tracing. Quality assurance can give feedback to individual masking or depth augmentation workers so that those individuals can produce the desired work product for a given project without waiting for, or requiring, an upstream group to rework anything for the current project. This allows feedback, but eliminates the iterative delay of sending work products back and waiting for the reworked product to return. Eliminating such iteration provides an enormous saving in the end-to-end, or wall-clock, time a conversion project takes, thereby increasing profit and minimizing the labor required to implement the workflow.
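The relation between per-pixel depth and the left/right horizontal offsets stored in a translation file can be sketched as follows. This is an illustrative fragment under invented conventions (nearer pixels get larger disparity, split symmetrically between the two eyes), not the patent's actual formula:

```python
# Illustrative sketch: derive left/right horizontal pixel offsets from a
# per-pixel depth row. Depth 0.0 = nearest to the viewer, 1.0 = at the
# screen plane. Nearer pixels receive a larger horizontal shift, which
# is what the left and right translation files record per source pixel.

def translation_rows(depth_row, max_disparity=8.0):
    left, right = [], []
    for d in depth_row:
        disparity = max_disparity * (1.0 - d)   # nearer -> bigger shift
        left.append(+disparity / 2.0)           # left-eye offset, pixels
        right.append(-disparity / 2.0)          # right-eye offset, pixels
    return left, right

left, right = translation_rows([1.0, 0.5, 0.0])
print(left)   # [0.0, 2.0, 4.0]
print(right)  # [-0.0, -2.0, -4.0]
```

Regenerating such rows after an alpha-mask reshape is cheap per-pixel arithmetic, which is why a downstream group can re-derive the offsets without re-rendering.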
Figure 82 shows a source image that is to be depth enhanced and provided together with left and right translation files (for embodiments of translation files, see Figures 85A-D and 86A-D) and alpha masks (for example as shown in Figure 79), to allow real-time editing of the 3D images (for example by a downstream work group) without re-rendering or ray tracing the whole image sequence of the scene, for example so as to change layers/colors/masks and/or remove artifacts and/or adjust depth or otherwise alter the 3D images without the iterative workflow path back to the original work group (per Figure 96, as opposed to Figure 95).
Figure 83 shows the masks generated by the masking work group, to which the depth augmentation group applies depth, where the masks are associated with objects such as the human-recognizable objects in the source image of Figure 82. Generally, unskilled labor is used to mask the human-recognizable objects in the key frames of a scene or image sequence. Unskilled labor is cheap and is usually located offshore. Hundreds of workers can be employed to perform this laborious masking at low cost. Any existing colorization masks may serve as the starting point for the 3D masks; they can be combined and decomposed to form 3D mask outlines whose sub-masks delimit the regions of different depth within a human-recognizable object. Any other method of obtaining masks for image regions is in keeping with the spirit of the invention.
Figure 84 shows the applied depth regions, generally darker for closer objects and brighter for farther objects. This view provides a quick overview of the relative depths of the objects in the frame.
Figure 85A shows the left UV map containing the horizontal translation, or offset, for each source pixel. When the scene is reconstructed with this depth, the translation map graphically encodes the horizontal movement of each individual pixel. Figure 85B shows the right UV map containing the horizontal translation, or offset, for each source pixel. Because each of these images appears the same, the black-level values of the colors are shifted in the following figures to bring out the subtle differences between particular regions of Figures 85A and 85B, making them easier to observe in the two files. Figure 85C shows the black-level-shifted portion of the left UV map of Figure 85A, revealing small elements within it. This region corresponds to the tree branch shown in the upper right corner of Figures 82, 83, and 84, directly above the cement mixer and to the left of the lamp post. Figure 85D shows the black-level-shifted portion of the right UV map of Figure 85B, revealing small elements within it. The slight color differences at the branch show that those pixels will be moved to the corresponding positions of a pure UV map, where the UV map ramps red from darkest to brightest in the horizontal direction and green from darkest to brightest in the vertical direction. In other words, the translation map in the UV embodiment is a description of the movement, relative to the original source image, that occurs when the left and right viewpoints are generated. A UV map can be used; however, any other file type containing the horizontal offsets from the source image on a per-pixel (or finer-grained) basis can be used, including compressed formats that cannot readily be viewed as images. Some software packages for editing are provided with pre-built UV tools, and so UV translation files or maps can be used directly if desired. For example, some compositing programs have pre-built objects that make UV maps easy to use and otherwise manipulate graphically, and so for those implementations a viewable map file can be used, but it is not required.
Because only horizontal movement is employed in creating the viewpoints from the 2D image, it is possible to use monochrome translation files. For example, because each row of the translation file is indexed by its vertical position in memory, it is possible to use a single ramped color, for example red ramping in the horizontal direction, to indicate the original position of each pixel. Any pixel movement in the translation map is then shown as a change of the given pixel value from one horizontal offset to another, which produces a subtle color change when the movement is small, for example in the background. Figure 86A shows the left U map containing the horizontal translation, or offset, for each source pixel. Figure 86B shows the right U map containing the horizontal translation, or offset, for each source pixel. Figure 86C shows the black-level-shifted portion of the left U map of Figure 86A, revealing small elements within it. Figure 86D shows the black-level-shifted portion of the right U map of Figure 86B, revealing small elements within it. Again, a humanly viewable file format is not required, and any format that stores the per-pixel horizontal offsets relative to the source image can be used. Because memory and storage are so cheap, any format, compressed or not, can be used without any significant increase in cost. Generally, creating the right-eye image makes the foreground portions of the U map (or UV map) appear darker, because they are moved to the left, and vice versa. This is easily observed by looking at something in the foreground with only the right eye open and then moving slightly to the right (the foreground object indeed appears to move to the left). Because an unaltered U map (or UV map) is a simple dark-to-bright color ramp, something that moves to the left, i.e., for the right viewpoint, maps to a darker region of the U map (or UV map). Hence, relative to unmoved pixels, the same branch in the same region of each U map (or UV map) is darker for the right eye and brighter for the left eye. Again, a viewable map is not required, but it illustrates the concept of the movement generated for a given viewpoint.
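The way a U row is applied to a source row can be sketched as follows. This is an illustrative fragment under an assumed convention (each U entry holds the absolute source column to sample, as in the "ramp" description above), not the patent's actual file format:

```python
# Minimal sketch of applying one row of a U translation map: each
# output pixel takes the source pixel at the absolute column index
# stored in the U row for that position.

def apply_u_row(source_row, u_row):
    return [source_row[u] for u in u_row]

source = ['a', 'b', 'c', 'd', 'e', 'f']
# An unaltered ramp maps every pixel to itself:
identity = [0, 1, 2, 3, 4, 5]
# Right-eye view: the foreground pixels at columns 2 and 3 sample from
# further left, i.e. their entries point at darker (smaller) ramp values:
right_u = [0, 1, 1, 2, 4, 5]

print(apply_u_row(source, identity))  # ['a', 'b', 'c', 'd', 'e', 'f']
print(apply_u_row(source, right_u))   # ['a', 'b', 'b', 'c', 'e', 'f']
```

The "darker for the right eye" observation in the text corresponds to the smaller entries in `right_u` where the foreground moved.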
Figure 87 illustrates the known use of a UV map, in which a three-dimensional model is unwrapped so that an image in UV space can be painted onto the 3D model using the UV map. The figure shows how UV maps are traditionally used to apply texture maps to 3D shapes. For example, here a texture of the earth, painted or acquired as a flat set of images, is mapped to a U and V coordinate system that is then transformed into the X, Y, and Z coordinates on the 3D model. Traditional animation is performed in this manner: a wireframe model is unwrapped and flattened, which defines the U and V coordinate system in which the texture map is applied.
The embodiments of the invention described herein use UV and U maps in a new way: a pair of maps is used to define the horizontal offset by which each source pixel is translated for two images (left and right), as opposed to a single map used to define the coordinates at which a texture map is placed on a 3D model or wireframe. That is, embodiments of the invention use UV maps and U maps (or any other horizontal-translation file format) to allow objects to be adjusted, i.e., offset, without re-rendering the whole scene. Again, in contrast to the known use of a UV map, which maps two orthogonal coordinates onto a three-dimensional object, embodiments of the invention use two maps — one for the left eye and one for the right eye — that map the horizontal translations for the left and right viewpoints. In other words, because pixels translate only horizontally (for the left and right eyes), embodiments of the invention map within one dimension, on a horizontal, row-by-row basis. That is, whereas the prior art maps 2 dimensions onto 3 dimensions, embodiments of the invention use 2 translation maps within 1 dimension (hence a visible embodiment of the translation map uses a color ramp). For example, if a row of a translation file contains 0, 1, 2, 3 ... 1918, 1919, and the 2nd and 3rd pixels translate 4 pixels to the right, then that row of the file reads 0, 5, 6, 3 ... 1918, 1919. Other formats that express relative offsets cannot be viewed as ramped color regions, but can provide large compression ratios: using relative offsets, the same row of the file reads 0, 0, 0, 0 ... 0, 0 before any movement, and with the 2nd and 3rd pixels moved 4 to the right it reads 0, 4, 4, 0 ... 0, 0. If there are large background portions with zero horizontal offset in the left and right viewpoints, such a file can be compressed to a great extent. The absolute form, by contrast, can be viewed as a standard U file, i.e., it is absolute, as opposed to the relative values of a difference-coded translation file. In embodiments of the invention, any other format capable of storing the horizontal offsets for the left and right viewpoints can be used. UV-like files also have a ramp on the Y, or vertical, axis; the values in such a file would be, for example, (0,0), (0,1), (0,2) ... (0,1918), (0,1919) for each pixel of the bottom row of the image, and (1,0), (1,1), and so on for the second horizontal line or row. Such an offset file allows non-horizontal motion of pixels; however, embodiments of the invention simply move data horizontally for the left and right viewpoints, and therefore need not track which vertical row a source pixel moves to, because by definition horizontal movement stays in the same row.
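The absolute and relative row encodings just described can be sketched side by side, along with a simple run-length encoding that shows why the relative form compresses so well when most of the frame does not move. The helper names are invented for illustration:

```python
# Sketch of the absolute vs. relative row encodings described above,
# following the text's example of a 1920-pixel row whose 2nd and 3rd
# pixels translate 4 pixels to the right.

W = 1920
absolute = list(range(W))          # unaltered ramp: 0, 1, 2, ... 1919
absolute[1] += 4                   # 2nd pixel: 1 -> 5
absolute[2] += 4                   # 3rd pixel: 2 -> 6

relative = [0] * W                 # zero wherever nothing moved
relative[1] = relative[2] = 4

def rle(row):
    """Run-length encode a row as (value, count) pairs."""
    runs, prev, count = [], row[0], 0
    for v in row:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

print(absolute[:4])   # [0, 5, 6, 3]
print(rle(relative))  # [(0, 1), (4, 2), (0, 1917)]
```

A mostly static background collapses to a single long zero run in the relative form, while the absolute form remains a full ramp.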
Figure 88 shows a disparity map, which illustrates the regions of greatest difference between the left and right translation maps. It shows that the objects closest to the viewer have the most pixel movement between the two UV (or U) maps shown in Figures 85A-B (or Figures 86A-B).
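A disparity map of this kind can be sketched as the per-pixel difference between the two translation rows; the function and sample values are illustrative:

```python
# Sketch: a disparity row as the per-pixel absolute difference between
# the left and right translation rows. The largest values mark the
# objects nearest the viewer, as in Figure 88.

def disparity_row(left_u, right_u):
    return [abs(l - r) for l, r in zip(left_u, right_u)]

left_u  = [0, 3, 6, 3]   # left-eye absolute source positions
right_u = [0, 1, 2, 3]   # right-eye absolute source positions

print(disparity_row(left_u, right_u))  # [0, 2, 4, 0] -> pixel 2 is nearest
```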
Figure 89 shows the left-eye rendering of the source image of Figure 82. Figure 90 shows the right-eye rendering of the source image of Figure 82. Figure 91 shows the anaglyph of the images of Figures 89 and 90, for use with red/blue glasses.
Figure 92 shows the image masked and depth enhanced for each of the various layers, including the actress layer, the door layer, and the background layer (showing the missing, occluded background information that can be filled with generated information; see for example Figures 34, 73 and 76). That is, the empty portion of the background behind the actress in Figure 92 can be filled with generated image data (see the outline of the actress's head on the background wall). By using generated image data for each layer, real-time editing becomes possible, for example using a compositing program, as opposed to re-rendering or ray tracing all of the images in the scene. For example, if the hair mask of the actress in Figure 92 is altered to cover the hair more correctly, then any pixels no longer covered by the new mask are obtained from the background and are available for viewing almost immediately (as opposed to the standard re-rendering or ray tracing of all of the images in the scene, which with current processing power may take hours whenever anything in the scene is edited). This can include obtaining generated data for any layer, including the background, for artifact-free 3D image generation.
Figure 93 shows the UV map overlaid on the alpha mask associated with the actress shown in Figure 92, with the left and right UV translation offsets obtained from the depth settings of the individual pixels in the alpha mask. This UV layer can be used together with the other UV layers to give the quality assurance work group (or another work group) the ability to edit the 3D images in real time, for example to correct artifacts or masking errors, without re-rendering the entire image. An iterative workflow, by contrast, may require sending the frames back to a third-world country for mask rework, returning those masks to a different work group, for example in the United States, for re-rendering, and then re-checking the images in the quality assurance work group. Such an iterative workflow is entirely eliminated for small artifacts, because the quality assurance work group can simply reshape the alpha mask and regenerate the pixel offsets from the original source image, editing the 3D images in real time and avoiding, for example, involving the other work groups. From the depth set for the actress, per Figures 42-70 or in any other way, the movement applied to the unaltered UV map is determined and the manipulated UV maps are generated — one for the left eye and one for the right eye per Figures 85A-D (or the U maps of Figures 86A-D). These maps, together with the alpha mask for each layer, can be supplied to any compositing program, for example, where a change to a mask simply lets the compositing program fetch the pixels from the other layers to "sum up" the image in real time. This can include using generated image data for any layer (or, where no generated data exists for a deeper layer, filling the gap with data). Those skilled in the art will appreciate that the set of masked layers is combined in the compositing program to form the output image by arbitrating, or otherwise determining, which layers and respective images are placed on top of one another. Any method that combines the horizontal translation map pair with the source image pixels to form the output pixels, without re-rendering or ray tracing after depth is added, is in keeping with the spirit of the invention.
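The back-to-front layer combination just described can be sketched as follows. This is a hedged illustration of a generic over-compositing pass (here with binary alpha for brevity), not the patent's specific compositing program:

```python
# Sketch of back-to-front layer combination: each output pixel is
# overwritten by any nearer layer whose alpha mask covers it, with no
# re-rendering or ray tracing involved.

def composite(layers):
    """layers: list of (pixels, alpha) ordered background -> foreground."""
    out = list(layers[0][0])               # start from the farthest layer
    for pixels, alpha in layers[1:]:
        for i, a in enumerate(alpha):
            if a:                          # this layer's mask covers pixel i
                out[i] = pixels[i]
    return out

background = (['bg']   * 5, [1, 1, 1, 1, 1])
door       = (['door'] * 5, [0, 1, 1, 0, 0])
actress    = (['act']  * 5, [0, 0, 1, 1, 0])

print(composite([background, door, actress]))
# ['bg', 'door', 'act', 'act', 'bg']
```

Reshaping one alpha mask and re-running this pass is the "almost immediate" update the text contrasts with hours of re-rendering.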
Figure 94 shows the workspace generated by a second depth enhancement program based on the various layers shown in Figure 92, i.e., the left and right UV translation maps for each alpha, where the workspace allows quality assurance personnel (or another work group) to adjust the masks, and thereby change the 3D image pair (or anaglyph), in real time, without re-rendering or ray tracing and/or without iteratively sending anything to any other work group for repair. One or more embodiments of the invention can loop over the number of layers in the source files and create a script that generates the workspace shown in Figure 94. For example, once the masking work group has created the masks for the various layers and generated the mask files, the rendering group can programmatically read in the mask files and generate script code that includes a source icon based on the output rendered by the rendering group, an alpha copy icon for each layer, the left and right UV maps for each layer, and icons that combine the various layers into the left and right viewpoint images. This allows the quality assurance work group to use rendering tools they are familiar with, which may be faster and less complicated than the tools used by the rendering work group. Any method of generating a graphical user interface that allows a worker to edit the 3D images in real time — including creating, for every frame, a source icon connected to the alpha mask icons for each layer, generating the interconnected translation maps for the left and right viewpoints, and looping over each layer until the output viewpoints are combined for 3D viewing — is in keeping with the spirit of the invention. Alternatively, any other method that uses the translation map pair to allow real-time editing of the images without re-rendering is in keeping with the spirit of the invention, even if the translation maps are invisible to, or not shown to, the user.
Figure 95 shows the workflow for an iterative correction workflow. At 9501, the masking work group generates masks for objects, such as the human-recognizable objects, or objects of any other shape, in the image sequence. This can include generating sets of sub-masks and generating layers that delimit regions of different depth. This step is generally performed by unskilled and/or low-wage labor in countries that generally have very low labor costs. The masked objects are checked by highly skilled employees, typically artists, who apply depth and/or color to the masked regions of the scene at 9502. The artists are usually located in industrialized countries with higher labor costs. Another work group, usually the quality assurance group, then checks the resulting images at 9503, and determines, based on the requirements of the specific project, whether there are any artifacts or errors that need repair. If so, the erroneous masks, or the positions in the images where the errors were found, are sent back to the masking work group for rework, i.e., from 9504 to 9501. Once there are no more errors, the process completes at 9505. Even in a smaller work group, errors can be corrected by reworking the masks and re-rendering, or otherwise ray tracing, all of the images in the scene, which may take hours of processing time, for example, for a simple change. Errors in depth judgment generally occur less frequently, because the more highly skilled workers apply depth at a higher skill level, and returns to the rendering group therefore occur less often; that loop is not shown for clarity, although this iterative path can also occur. Masking "returns" can take a great deal of time to work back through the system, because the work product must be re-masked and then re-rendered by the other work groups.
Figure 96 shows an embodiment of the workflow enabled by one or more embodiments of the system, in which each work group can perform real-time editing of the 3D images without re-rendering, for example to change layers/colors/masks and/or to remove artifacts, and can otherwise correct the work product of another work group without re-rendering/ray tracing and without the iterative delay associated with sending work product back through the workflow for correction. The generation of masks occurs at 9501 as in Figure 95, and the application of depth occurs at 9502 as in Figure 95. In addition, at 9601, the rendering group generates the translation maps, which reach the quality assurance group together with the rendered images. The quality assurance group checks the work product at 9503 as in Figure 95, and checks for artifacts at 9504, likewise as in Figure 95. However, those skilled in the art will understand that because the quality assurance work group (or another work group) has the translation maps and the accompanying layers and alpha masks, they can, at 9602, edit the 3D images in real time, for example using a commercially available compositing program, or otherwise locally correct the images. For example, as shown in Figure 94, the quality assurance work group can open a graphics program familiar to them (as opposed to the complicated rendering program used by the artists) and, for example, adjust an alpha mask, whereupon the offsets in the respective translation maps are reshaped by the quality assurance work group as needed and the output image is formed layer by layer (using any generated missing background information per Figures 34, 73 and 76, and any computer-generated element layers per Figure 79). Those skilled in the art will recognize that the two output images can be generated, from the farthest background layer to the foreground layer, without ray tracing, simply by overlaying the pixels from each layer in the final output image almost instantly. This effectively allows the quality assurance work group to perform local, per-pixel image manipulation, instead of the 3D modeling and ray tracing, etc., performed by the rendering work group. This can save hours of processing time and/or the delays associated with waiting for other workers to re-render the images making up the scene.
While the invention disclosed herein has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.
Claims (20)
1. A motion picture project management system, comprising:
a computer;
a database coupled with the computer, wherein the database includes
a project table that includes a project identifier and a description of a project related to a motion picture;
a shot table that includes a shot identifier and references a plurality of images via a start frame value and an end frame value, wherein the plurality of images are associated with the motion picture associated with the project, and that includes
at least one shot having a status related to the progress of work performed on the shot;
a task table that references the project identifier in the project table and that includes
at least one task that includes a task identifier and an assignee, and that further includes a context setting associated with a task type related to motion picture work, wherein the tasks at least include defining a region in the plurality of images, working on the region, and compositing work on the region, and wherein the at least one task includes a time allotted to complete the at least one task;
a time entry table that references the project identifier in the project table and the task identifier in the task table, and that includes
at least one time entry that includes a start time and an end time;
the computer configured to
present a first display configured to be viewed by production personnel, the first display including a search display that includes context, project, shot, status, and artist, and wherein the first display further includes a list of a plurality of artists together with a corresponding status and performance based on the time spent in the at least one time entry relative to the time allotted for the at least one task associated with the at least one shot;
present a second display configured to be viewed by an artist, the second display including at least one daily assignment having a context, project, and shot, and
a status input configured to update the status in the task table, and
a timer input configured to update the start time and the end time in the time entry table;
present a third display configured to be viewed by editorial personnel, the third display including an annotation frame configured to accept a comment, or a drawing, or both a comment and a drawing, regarding at least one image of the plurality of images associated with the at least one shot.
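The table relationships recited in claim 1 can be sketched with an in-memory database. This is a hypothetical illustration using sqlite3; the table and column names are invented for clarity and are not the patent's actual schema:

```python
# Hypothetical sketch of the claim-1 database: project, shot, task and
# time-entry tables, with shots and tasks referencing the project.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE project (project_id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE shot (
    shot_id INTEGER PRIMARY KEY,
    project_id INTEGER REFERENCES project(project_id),
    start_frame INTEGER, end_frame INTEGER, status TEXT);
CREATE TABLE task (
    task_id INTEGER PRIMARY KEY,
    project_id INTEGER REFERENCES project(project_id),
    shot_id INTEGER REFERENCES shot(shot_id),
    assignee TEXT, context TEXT, allotted_hours REAL);
CREATE TABLE time_entry (
    entry_id INTEGER PRIMARY KEY,
    task_id INTEGER REFERENCES task(task_id),
    start_time TEXT, end_time TEXT);
""")

db.execute("INSERT INTO project VALUES (1, 'feature conversion')")
db.execute("INSERT INTO shot VALUES (10, 1, 100, 250, 'in progress')")
db.execute("INSERT INTO task VALUES (7, 1, 10, 'artist A', 'depth', 8.0)")

# The kind of join a production search display would issue:
row = db.execute("""SELECT s.status, t.assignee FROM shot s
                    JOIN task t ON t.shot_id = s.shot_id""").fetchone()
print(row)  # ('in progress', 'artist A')
```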
2. The motion picture project management system of claim 1, further comprising:
a snapshot table that includes a snapshot identifier and a search type, the snapshot table including
a snapshot of the at least one shot that includes at least one location of a resource associated with the at least one shot.
3. The motion picture project management system of claim 1, wherein said context settings further include a region definition type, wherein said subtypes include mask and outsource mask, and wherein work subtypes on said region include depth, keyframing and motion.
4. The motion picture project management system of claim 1, further comprising
an asset request table that includes an asset request identifier and a shot identifier.
5. The motion picture project management system of claim 1, further comprising
a mask request table that includes a mask request identifier and a shot identifier.
6. The motion picture project management system of claim 1, further comprising
a notes table that includes a note identifier, references said project identifier, and includes
at least one note related to at least one of said plurality of images from said motion picture.
7. The motion picture project management system of claim 1, further comprising
a delivery table that includes a delivery identifier, references said project identifier, and includes
information related to delivery of said motion picture.
8. The system of claim 1, wherein said computer is further configured to
present, in said third display configured to be viewed by said editor, an annotation overlaid on
at least one of said plurality of images.
9. The system of claim 1, wherein said computer is configured to
accept, from said production worker or said editor, a rating input based on the work performed by said artist.
10. The system of claim 1, wherein said computer is configured to
accept, from said production worker or said editor, a rating of the work performed by said artist,
wherein, before said computer accepts said rating from said production worker or said editor, said computer
does not display the identity of said artist to said production worker or said editor.
11. The system of claim 1, wherein said computer is configured to
accept a difficulty of said at least one shot, and calculate a rating based on the work performed by said artist,
said difficulty of said shot, and the time spent on said shot.
12. The system of claim 1, wherein said computer is configured to
accept, from an expeditor or said editor, a rating input of the work performed by said artist; or
accept a difficulty of said at least one shot and calculate a rating based on the work performed by said artist,
said difficulty of said shot, and the time spent on said shot; and
display an incentive for said artist based on said rating accepted by said computer or calculated by said computer.
13. The system of claim 1, wherein said computer is configured to
estimate a remaining cost based on said performance, wherein said performance is based on the total time spent relative to
the time allocated to all tasks of said at least one task associated with all shots of said at least one shot in said project.
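The remaining-cost estimate in claim 13 can be sketched by scaling the hours still allocated by the project's performance to date (time spent relative to time allocated on completed work). The function name, parameters, and exact formula below are assumptions for illustration, not the patented calculation:

```python
def estimate_remaining_cost(hours_spent: float,
                            hours_allocated_total: float,
                            hours_allocated_remaining: float,
                            hourly_rate: float) -> float:
    """Scale the remaining allocated hours by performance to date
    (spent / allocated on work done so far), then price at a flat rate."""
    hours_allocated_done = hours_allocated_total - hours_allocated_remaining
    if hours_allocated_done <= 0:
        # No history yet: fall back to the plain allocation.
        return hours_allocated_remaining * hourly_rate
    performance = hours_spent / hours_allocated_done  # > 1.0 means running over
    return hours_allocated_remaining * performance * hourly_rate
```

For example, a project that has burned 120 hours against 100 allocated hours of completed work (performance 1.2) would see its remaining 100 allocated hours priced as 120 effective hours.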
14. The system of claim 1, wherein said computer is configured to
compare said performance associated with a first project with a performance associated with a second project; and
display, based on at least one rating of a first worker assigned to said first project, at least one worker to be
reassigned from said first project to said second project.
15. The system of claim 1, wherein said computer is configured to
analyze a prospective project having a number of shots with an estimated difficulty per shot, and, based on said performance
associated with said project, calculate a forecast cost for said prospective project.
16. The system of claim 1, wherein said computer is configured to
analyze a prospective project having a number of shots with an estimated difficulty per shot; calculate a derivative of said
performance based on the performance associated with a previously executed first project and a previously executed second
project completed after said previously executed first project; and calculate a forecast cost for said prospective project based on said derivative of said performance.
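The "derivative of performance" in claim 16 can be read as the discrete change in performance between two completed projects, extrapolated one project ahead before pricing the prospective work. This sketch is one plausible reading; the function name, the linear extrapolation, and the pricing formula are all assumptions, not the patented method:

```python
def forecast_cost_with_trend(perf_first: float, perf_second: float,
                             shots: int, difficulty_per_shot: float,
                             cost_per_difficulty_unit: float) -> float:
    """Extrapolate performance one step using the change between two
    completed projects, then price the prospective project with it."""
    trend = perf_second - perf_first        # discrete 'derivative' of performance
    projected = perf_second + trend         # extrapolate one project ahead
    projected = max(projected, 0.1)         # guard against runaway extrapolation
    return shots * difficulty_per_shot * cost_per_difficulty_unit * projected
```

A team whose performance improved from 1.5 to 1.25 between projects would be projected at 1.0, lowering the forecast cost accordingly.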
17. The system of claim 1, wherein said computer is configured to
analyze said performance associated with said project, and divide the number of completed shots by the total shots
associated with said project; and
provide a completion date for said project.
18. The system of claim 1, wherein said computer is configured to
analyze said performance associated with said project, and divide the number of completed shots by the total shots associated with said project;
provide a completion date for said project;
accept input of at least one additional artist with a rating;
accept a number of shots on which said additional artist is to be utilized;
calculate a time savings based on said at least one additional artist and said number of shots;
subtract said time savings from said completion date of said project; and
provide an updated completion date for said project.
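The updated-completion-date step in claim 18 amounts to subtracting the time an additional artist lifts off the existing team from the projected schedule. A minimal sketch under that assumption (the function name, parameters, and the flat days-per-shot model are illustrative; the claim's rating-weighted savings is not specified here):

```python
def updated_completion_days(completion_days: float,
                            shots_reassigned: int,
                            team_days_per_shot: float) -> float:
    """Subtract the schedule time freed by handing shots_reassigned
    shots to an additional artist, clamped so the result is not negative."""
    time_saved = shots_reassigned * team_days_per_shot
    return max(completion_days - time_saved, 0.0)
```

For instance, reassigning 10 shots that each cost the team 1.5 days cuts a 30-day projection to 15 days.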
19. The system of claim 1, wherein said computer is configured to
calculate an amount of disk space that can be utilized to archive said project, and display at least one asset that can be
rebuilt from other assets, in order to avoid archiving said at least one asset.
20. The system of claim 1, wherein said computer is configured to
display an error message in the case that said artist works on a frame number that is not currently in said at least one shot.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/366,899 US9031383B2 (en) | 2001-05-04 | 2012-02-06 | Motion picture project management system |
PCT/US2013/035506 WO2013120115A2 (en) | 2012-02-06 | 2013-04-05 | Motion picture project management system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104272377A CN104272377A (en) | 2015-01-07 |
CN104272377B true CN104272377B (en) | 2017-03-01 |
Family
ID=48948173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201380018690.7A Expired - Fee Related CN104272377B (en) | 2012-02-06 | 2013-04-05 | Moving picture project management system |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP2812894A4 (en) |
CN (1) | CN104272377B (en) |
AU (1) | AU2013216732B2 (en) |
CA (1) | CA2866672A1 (en) |
WO (1) | WO2013120115A2 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017020077A1 (en) * | 2015-08-03 | 2017-02-09 | Commonwealth Scientific And Industrial Research Organisation | Monitoring systems and methods |
US10122996B2 (en) * | 2016-03-09 | 2018-11-06 | Sony Corporation | Method for 3D multiview reconstruction by feature tracking and model registration |
US10165258B2 (en) * | 2016-04-06 | 2018-12-25 | Facebook, Inc. | Efficient determination of optical flow between images |
CN107330410B (en) * | 2017-07-03 | 2020-06-30 | 南京工程学院 | Anomaly detection method based on deep learning in complex environment |
CN109218750B (en) * | 2018-10-30 | 2022-01-04 | 百度在线网络技术(北京)有限公司 | Video content retrieval method, device, storage medium and terminal equipment |
CN111526422B (en) * | 2019-02-01 | 2021-08-27 | 网宿科技股份有限公司 | Method, system and equipment for fitting target object in video frame |
CN110060213B (en) * | 2019-04-09 | 2021-06-15 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN112183629B (en) * | 2020-09-28 | 2024-05-28 | 海尔优家智能科技(北京)有限公司 | Image identification method and device, storage medium and electronic equipment |
TWI783718B (en) * | 2021-10-07 | 2022-11-11 | 瑞昱半導體股份有限公司 | Display control integrated circuit applicable to performing real-time video content text detection and speech automatic generation in display device |
CN115188091B (en) * | 2022-07-13 | 2023-10-13 | 国网江苏省电力有限公司泰州供电分公司 | Unmanned aerial vehicle gridding inspection system and method integrating power transmission and transformation equipment |
CN115131409B (en) * | 2022-08-26 | 2023-01-24 | 深圳深知未来智能有限公司 | Intimacy matrix viewpoint synthesis method, application and system based on deep learning |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3736790A1 (en) * | 1987-10-30 | 1989-05-11 | Broadcast Television Syst | METHOD FOR AUTOMATICALLY CORRECTING IMAGE ERRORS IN FILM SCANNING |
US5328073A (en) * | 1992-06-24 | 1994-07-12 | Eastman Kodak Company | Film registration and ironing gate assembly |
US5835163A (en) * | 1995-12-21 | 1998-11-10 | Siemens Corporate Research, Inc. | Apparatus for detecting a cut in a video |
US5841512A (en) * | 1996-02-27 | 1998-11-24 | Goodhill; Dean Kenneth | Methods of previewing and editing motion pictures |
US5959697A (en) * | 1996-06-07 | 1999-09-28 | Electronic Data Systems Corporation | Method and system for detecting dissolve transitions in a video signal |
US5778108A (en) * | 1996-06-07 | 1998-07-07 | Electronic Data Systems Corporation | Method and system for detecting transitional markers such as uniform fields in a video signal |
US5920360A (en) * | 1996-06-07 | 1999-07-06 | Electronic Data Systems Corporation | Method and system for detecting fade transitions in a video signal |
US5767923A (en) * | 1996-06-07 | 1998-06-16 | Electronic Data Systems Corporation | Method and system for detecting cuts in a video signal |
US6067125A (en) * | 1997-05-15 | 2000-05-23 | Minerva Systems | Structure and method for film grain noise reduction |
US6031564A (en) * | 1997-07-07 | 2000-02-29 | Reveo, Inc. | Method and apparatus for monoscopic to stereoscopic image conversion |
US8401336B2 (en) * | 2001-05-04 | 2013-03-19 | Legend3D, Inc. | System and method for rapid image sequence depth enhancement with augmented computer-generated elements |
EP1483909B1 (en) * | 2002-03-13 | 2010-04-28 | Imax Corporation | Systems and methods for digitally re-mastering or otherwise modifying motion pictures or other image sequences data |
EP1536357A4 (en) * | 2002-07-15 | 2007-11-21 | Sony Corp | Video program creation system, table providing device, terminal device, terminal processing method, program, recording medium |
EP1420407B1 (en) * | 2002-11-15 | 2012-01-04 | Sony Corporation | Method and apparatus for controlling editing image display |
US9047915B2 (en) * | 2004-04-09 | 2015-06-02 | Sony Corporation | Asset revision management in media production |
US7110605B2 (en) * | 2005-02-04 | 2006-09-19 | Dts Az Research, Llc | Digital intermediate (DI) processing and distribution with scalable compression in the post-production of motion pictures |
WO2007082171A2 (en) * | 2006-01-05 | 2007-07-19 | Eyespot Corporation | System and methods for online collaborative video creation |
US8443284B2 (en) * | 2007-07-19 | 2013-05-14 | Apple Inc. | Script-integrated storyboards |
US8225228B2 (en) * | 2008-07-10 | 2012-07-17 | Apple Inc. | Collaborative media production |
2013
- 2013-04-05 CN CN201380018690.7A patent/CN104272377B/en not_active Expired - Fee Related
- 2013-04-05 CA CA2866672A patent/CA2866672A1/en not_active Abandoned
- 2013-04-05 WO PCT/US2013/035506 patent/WO2013120115A2/en active Application Filing
- 2013-04-05 EP EP13746718.9A patent/EP2812894A4/en not_active Withdrawn
- 2013-04-05 AU AU2013216732A patent/AU2013216732B2/en not_active Ceased
Also Published As
Publication number | Publication date |
---|---|
CN104272377A (en) | 2015-01-07 |
WO2013120115A3 (en) | 2013-10-24 |
EP2812894A4 (en) | 2016-04-06 |
CA2866672A1 (en) | 2013-08-15 |
WO2013120115A2 (en) | 2013-08-15 |
AU2013216732B2 (en) | 2014-10-02 |
EP2812894A2 (en) | 2014-12-17 |
AU2013216732A1 (en) | 2014-09-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104272377B (en) | Moving picture project management system | |
US9595296B2 (en) | Multi-stage production pipeline system | |
US9615082B2 (en) | Image sequence enhancement and motion picture project management system and method | |
CN101479765B (en) | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition | |
US8385684B2 (en) | System and method for minimal iteration workflow for image sequence depth enhancement | |
US9031383B2 (en) | Motion picture project management system | |
Wu et al. | Content‐based colour transfer | |
US8897596B1 (en) | System and method for rapid image sequence depth enhancement with translucent elements | |
US8078006B1 (en) | Minimal artifact image sequence depth enhancement system and method | |
US8730232B2 (en) | Director-style based 2D to 3D movie conversion system and method | |
US20120063681A1 (en) | Minimal artifact image sequence depth enhancement system and method | |
AU2015213286B2 (en) | System and method for minimal iteration workflow for image sequence depth enhancement | |
Wang et al. | People as scene probes | |
Willment et al. | What is virtual production? An explainer and research agenda | |
Herling | Advanced real-time manipulation of video streams | |
Chuang | New models and methods for matting and compositing | |
Knowles | The Temporal Image Mosaic and its Artistic Applications in Filmmaking | |
Hasic et al. | Movement bias in visual attention for perceptually-guided selective rendering of animations | |
Steckel | Integration of Z-Depth in compositing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2017-03-01; Termination date: 2019-04-05 |