GB2482140A - Automated video production - Google Patents

Automated video production

Info

Publication number
GB2482140A
GB2482140A GB1012174.7A GB201012174A GB2482140A GB 2482140 A GB2482140 A GB 2482140A GB 201012174 A GB201012174 A GB 201012174A GB 2482140 A GB2482140 A GB 2482140A
Authority
GB
United Kingdom
Prior art keywords
view
target object
video
camera
skin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1012174.7A
Other versions
GB201012174D0 (en)
Inventor
Damien Kelly
Frank Boland
Francois Pitie
Anil Kokaram
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
College of the Holy and Undivided Trinity of Queen Elizabeth near Dublin
Original Assignee
College of the Holy and Undivided Trinity of Queen Elizabeth near Dublin
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by College of the Holy and Undivided Trinity of Queen Elizabeth near Dublin filed Critical College of the Holy and Undivided Trinity of Queen Elizabeth near Dublin
Priority to GB1012174.7A priority Critical patent/GB2482140A/en
Publication of GB201012174D0 publication Critical patent/GB201012174D0/en
Publication of GB2482140A publication Critical patent/GB2482140A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06K9/00234
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A single video file of a participative lecture/seminar is made from a multi-view video capture in an automated post-production or editing procedure by gathering data from plural camera and microphone sources, locating and tracking a target (e.g. a person) using this data and composing a single video sequence comprising an optimum (e.g. most frontal) view of the target. This may be done in a lecture theatre, seminar room or auditorium and the target may be the current active speaker, e.g. an audience member. An automated offline tracking algorithm is used to track sound emitted by a moving target object in a 3D space, providing localisation data of the target object to identify an optimum camera source supplying video data of said target object. A composite video sequence is composed from a user-defined main view and a plurality of identified optimum camera sources to form a single video presentation. The algorithm may use both video data from multiple camera views and audio data from multiple microphone arrays to infer the 3D position of the active speaker over the duration of the captured presentation. A skin colour mask may be used to indicate likely regions of person occupancy.

Description

Automated Video Production System and Method
Field of Invention
The invention relates generally to a video production system and method, and specifically to automated video production.
Background to the Invention
The communication of information through lectures is fundamental for learning and teaching in academic institutions. Until recently, universities have only been able to offer lectures to attending students, severely restricting their reach to the confines of their campuses.
However, with advancements in technologies for transmitting multi-media over the Internet, some universities now facilitate students with live lecture participation, or facilities to view lecture recordings over the Internet. As well as on-line lectures, academic institutions have recognised the greater opportunities of the Internet for content delivery and on-line video seminar and video conference proceedings are becoming popular. Universities have embraced technology in this way not only to broaden their reach but also to meet the growing demands of students and academics who wish for greater flexibility to learning.
The efforts of universities to provide students with on-line lecture content fit into the domain of eLearning. Within eLearning, the ways in which universities are currently offering video content over the Internet fall into two categories: synchronous and asynchronous. In a synchronous manner some universities offer live video lectures to remote participants. In many cases where lecture videos are provided, students are often given the opportunity to view content in an asynchronous manner, or on-demand.
Choosing to offer lectures online is a significant and costly undertaking for any academic institution. Not least of the difficulties associated with this task is the capturing and editing of video lectures into a suitable form for presentation over the Internet. The modern student has regular exposure through the Internet and television to professionally edited video content, which sets a high level of expectation in relation to video lectures.
There is a recent move away from traditional single-camera lecture videos towards more dynamic video presentations including shots from multiple cameras. Such productions, which aim to capture all visually interesting aspects of lectures, are generally agreed to be much more engaging for viewers.
A key component of any lecture or seminar is the conversational interaction among participants, such as that which often occurs between a presenter and an audience.
Capturing this information for inclusion in a video lecture production presently requires significant manual editing. In the case where the lecture is to be transmitted live this editing must be performed at the time of capture, usually by large production teams. In the off-line case such editing can be performed as a post-production step but in most cases also requires skilled manual editing.
Automatic systems for editing multi-camera lecture captures do exist, such as that proposed by Rui et al. (Patent No. US 7,349,005). This system incorporates expert video production rules for editing multi-view video data of a lecture and also enables the capture of conversational interactions. The limitation of this system is that active speakers are only tracked in a single view at any given time. Although the system uses multiple cameras, each camera is dedicated to a specific capture task such as capturing the audience or the presenter. The problem with such a configuration is that the ability of the system to capture a facial view of speakers requires audience members to face a designated camera. This means that speakers are restricted to a defined seating zone, which is undesirable. Furthermore, the system can only provide frontal facial views of speakers if they are orientated towards the camera assigned to track them.
It is an object of the invention to provide a system and method for the automated production of a single-view video presentation from a multi-camera capture of a lecture.
Summary of the Invention
The aim of the invention is to provide an automated video editing system that tracks conversational interactions but overcomes the above mentioned limitations of existing techniques, as described in detail below. Instead of designating a single camera to track specific speakers in a lecture room, the invention uses multiple cameras to completely observe the lecture room. The system then tracks conversational interactions between speakers and extracts the most frontal view of the active speaker from the available cameras.
According to the invention there is provided, as set out in the appended claims, a method for the automated production of a single video file from a multi-view video capture, the method comprising the steps of: i) gathering video data from a plurality of camera sources; ii) gathering audio data from a plurality of microphone sources; iii) using audio and video information to automatically locate and track a moving target object in a 3D space, so as to determine the region occupied by the said target object in each available camera; iv) determining from the identified regions in each camera view, the most optimum view of said target object; and v) composing a single view video sequence consisting of a user defined main view and an automatically inserted optimum view of said target object over the duration of the video capture.
In one embodiment the target object is a person. In one embodiment the 3D space is a lecture theatre, seminar room or auditorium.
In one embodiment there is provided the step of voxelization to spatially sample the 3D space of the tracking environment in order to determine hypothesised target object positions.
In one embodiment, each voxel represents a hypothesised target object position which is confirmed or rejected dependent on a predefined criteria.
In one embodiment, where the target object is a person, the predefined criteria comprises of a skin colour mask which is used to indicate likely regions of person occupancy.
In one embodiment there is provided the step of analysing a 3D foreground denoting possible target object occupancy, from which individual regions of the foreground can be determined through a 3D connected component analysis and shape analysis.
In one embodiment there is provided the step of using a 2D connected component analysis on each skin colour mask to enable individual connected 3D foreground regions to be associated with connected skin colour regions in each camera view.
In one embodiment, where the target object is a person, there is provided the step of defining an ellipsoidal head model and constraining the fitting of the ellipsoid to the 3D foreground as well as its corresponding connected skin region in each view.
In one embodiment, where the target object is a person, there is provided the step of resolving the location of the active speaker from the plurality of identified head positions using a plurality of time-delay estimates extracted from multiple pairs of microphones.
In one embodiment there is provided the step of modelling said skin colour under varying illumination.
In one embodiment said skin colour modelling step is performed for skin colour detection under conditions of low illumination.
In one embodiment there is provided the step of examining target object activity over a window of a pre-defined number of time steps centred at the current time instance to assign a high probability to target object positions which correspond to significant target object activity.
In one embodiment the said target object activity corresponds to speech activity where the target object is a person.
In one embodiment there is provided the method of using a Viterbi algorithm to obtain a Maximum a Posteriori (MAP) estimate of the path of target object activity through the identified plurality of target object positions over the duration of the audio and video capture.
In one embodiment there is provided the further step of segmenting said target object in each available view and using a best-view selection criteria to determine the most optimum segmented view displaying the target object.
In one embodiment, where the target object is a person, the best-view selection criteria is determined as the segmented head view corresponding to that in which the largest area of detected skin is visible.
In a further embodiment of the invention there is provided a system for the automated production of a single video file from a multi-view video capture, the system comprising: i) means for gathering video data from a plurality of camera sources; ii) means for gathering audio data from a plurality of microphone sources; iii) means for using audio and video information to automatically locate and track a moving target object in a 3D space, so as to determine the region occupied by the said target object in each available camera; iv) means for determining from the identified regions in each camera view, the most optimum view of said target object; and v) means for composing a single view video sequence consisting of a user defined main view and an automatically inserted optimum view of said target object over the duration of the video capture.
There is also provided a computer program comprising program instructions for causing a computer to carry out the above method, which may be embodied on a record medium, carrier signal or read-only memory.
Brief Description of the Drawings
The invention will be more clearly understood from the following description of an embodiment thereof, given by way of example only, with reference to the accompanying drawings, in which:
Figure 1 illustrates a block diagram representation of the system to provide automated post production of a single video file according to one embodiment of the invention;
Figure 2 is a visual illustration of the eight possible states of speech activity which the system considers for speakers tracked by the system; and
Figure 3 illustrates an example of an estimated path of speaker activity through the set of speaker locations and speaker activity states using the Viterbi algorithm.
Detailed Description of the Drawings
The following sections with reference to the included drawings present a detailed description of the invention. This description refers to one implementation of the invention detailing core aspects of the overall system. It is not the purpose of this description to limit the scope of the invention. The overall scope of the invention is specified only by the accompanying claims.
The invention comprises a system for automatically editing multiple camera views of a lecture/seminar into a single-view video presentation. The single-view video presentation produced by the system consists of a user-defined main view and an automatically inserted view of the current active speaker. As such, the system includes a method for automatically locating and tracking the current active speaker over the duration of the captured footage. In addition to this, the invention incorporates a technique for extracting the best view of the tracked speaker, ensuring that the most frontal facial view of the speaker is included in the single-view video output.
The single-view video output of the system consists of a user-defined main view and an automatically inserted best view of the current active speaker. The inserted view of the active speaker acts as a virtual camera whereby it simulates the active tracking of conversational interactions between people.
For example, in a lecture scenario, the system will automatically determine the best view of the lecturer's face while they are talking. However, if an audience member asks a question, the system will determine a best facial view of the audience member for the duration of the question. Once the audience member has finished speaking, the camera will then return to tracking the lecturer. The best view composed by the system is determined by analyzing all views in which the tracked speaker is visible.
This level of video editing, which includes people tracking and best-view selection, currently requires significant manual intervention. This is a time consuming process which is costly since it requires skilled video editing teams. The invention is aimed towards alleviating the overhead required to generate effective video lectures for distribution over the Internet either for on-line or off-line viewing. For the off-line case, the system can perform the editing task as a post-production step. In the online case the system can be employed for automated editing as the footage is being captured. When the system is used for on-line editing however, the output is generated with a small time-delay relative to the time of capture.
To track people who are speaking, the system uses audio data from multiple microphones and video information from multiple cameras. The use of multiple camera views is important to the invention since it allows potential speaker locations to be identified in the 3D space of the lecture/seminar room. In order to track speakers in 3D space the system requires the use of at least two cameras with over-lapping fields of view.
Using the tracked 3D location of the active speaker, the system identifies from the available cameras, multiple views of the speaker's face. The system chooses from these multiple views, the best facial view of the tracked speaker for inclusion in the video output. The best view is selected using visual appearance based rules and ensures that the most frontal facial view is selected.
Referring now to the Figures and initially Figure 1, Figure 1 illustrates a block diagram of the invention. The following text describes the blocks which form one embodiment of the system.
(1) Multi-camera Data Processing
Block (1) in the diagram of Figure 1, referred to as Multi-camera Data Processing, is responsible for the task of preparing the multi-view video data for use by the system.
This component handles the capturing of the current frame from each of the available cameras. The algorithm requires that the tracking area is captured by at least two cameras with overlapping fields of view. A typical configuration for the system employed in a rectangular room consists of four cameras, one in each corner, orientated towards the room's centre.
There is no restriction on the maximum number of cameras to which the algorithm can be applied. Additional cameras can be added or removed from the system depending on specific requirements and available hardware resources. For instance, more than four cameras may be necessary to completely service a lecture room which is very large or irregularly shaped.
For the purpose of description it is useful to define a video frame captured at time t from camera n in the configuration as I_n(t). The total set of video frames available to the system at time t is therefore I(t) = {I_1(t), ..., I_Nc(t)}, where N_c is the total number of cameras.
It is within block (1) that any necessary pre-processes may be applied to the video data I(t), such as gamma correction, colour correction or brightness/contrast adjustments. The algorithm does not rely on any video pre-processing, but some pre-processes may be necessary to compensate for hardware related capture quality issues. In normal circumstances the raw video data is used. The only requirement of the algorithm is that the video data is in an RGB colour format. Conversion to RGB is necessary where the video data is captured in some other colour format.
The system also requires that the cameras are fully calibrated within the tracking environment such that the intrinsic and extrinsic calibration parameters for each camera are known.
Any existing automatic, semi-automatic or manual technique for camera calibration can be used. However, the accuracy of the estimated calibration parameters will have a limiting effect on the accuracy of the speaker tracking system. Once this information is determined, a projection operator P_n(·) can be defined for camera view n to map any 3D point X in the lecture room to a pixel location p_n in that view such that,

p_n = P_n(X).   (i)

Based on the calibration information, the system maintains a projection operator for each camera in a set P = {P_1(·), ..., P_Nc(·)}. Two outputs are provided by block (1) within the system. These outputs are: (a) A set of video frames I(t) = {I_1(t), ..., I_Nc(t)} at the current time instance t from N_c available camera views. The total number of available views must be N_c ≥ 2.
(b) A set of projection operators P = {P_1(·), ..., P_Nc(·)} defining a projection operator for each of the camera views.
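By way of illustration only, the sketch below shows one way a projection operator of the form of Equation (i) could be realised for a calibrated camera, assuming a standard pinhole model with an intrinsic matrix K and extrinsic parameters (R, t); the function names, matrix values and the Python/NumPy formulation are illustrative assumptions and not part of the disclosure.

```python
import numpy as np

def make_projection_operator(K, R, t):
    """Return a function P(X) mapping a 3D point (metres, room frame)
    to a pixel location (u, v) for one calibrated camera.

    K : 3x3 intrinsic matrix, R : 3x3 rotation, t : length-3 translation.
    """
    Rt = np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])  # 3x4 extrinsics
    M = K @ Rt                                                     # 3x4 projection matrix

    def project(X):
        Xh = np.append(np.asarray(X, dtype=float), 1.0)   # homogeneous 3D point
        u, v, w = M @ Xh
        return np.array([u / w, v / w])                    # dehomogenise to pixels
    return project

# Example: a camera at the room origin looking along z, point 2 m away.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = make_projection_operator(K, np.eye(3), np.zeros(3))
print(P1([0.2, 0.1, 2.0]))   # pixel location of the 3D point in this view
```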
(2) Skin Colour Detection
The system requires a skin mask to be determined for each video frame generated in output (a). Skin colour modeling is used to generate a binary mask indicating regions of skin and non-skin in each of the video frames contained in the set of video frames I(t). The lecture room setting presents a difficult scenario for skin detection since it is often the case that skin regions such as faces are captured under low illumination. This is normally the case particularly over audience regions. A suitable skin-colour model must therefore account for the variation of skin colour over varying levels of luminance.
Most existing skin colour modeling techniques transform pixel data into chrominance colour spaces to decouple chromatic colour from that of luminance. Once the chromatic colour information is obtained the luminance component is usually discarded. Skin colour is then modeled using only the chromatic colour information. Such methods assume that in a chrominance colour space, skin-tone is independent of the luminance component.
One of the difficulties with skin colour modeling is that skin colour varies non-linearly with respect to luminance. Typically, the transformation of skin colour into chrominance colour spaces does not adequately account for this non-linear relation. As a result the accuracy of a skin colour model using chrominance information can be inconsistent over the luminance range.
Using such skin colour models for skin detection can result in poor performance in the low luminance range.
The particular skin detection method employed by the system utilises a novel technique for modeling skin colour over varying luminance. The new method aims to capture the non-linear dependence of skin-tone on luminance using RGB colour information only and does not require any colour space transformations. This new model of skin colour is formed by learning a suitable model using a training set of RGB skin colour pixels corresponding to values of low-to-high luminance. In this way an estimate of skin-tone can be made for any observed level of illumination.
The modeling technique is defined as follows. Consider a pixel p in frame n with red, green and blue intensity values of R, G and B respectively. The R component is non-linearly related to the B and G components using two polynomials f_G(R) and f_B(R) whereby,

f_G(R) = a_0 + a_1 R + ... + a_k R^k,   (ii)

f_B(R) = b_0 + b_1 R + ... + b_k R^k,   (iii)

and k is the order of the polynomial. The order of the polynomials can be altered depending on the amount of training data available to the system or as additional training data is supplied to the system.
With the polynomial relations defined in Equations (ii) and (iii), the classification of a pixel p as skin is defined by two conditions C_1(p) and C_2(p) where,

C_1(p) = (|G − f_G(R)| < t) ∩ (|B − f_B(R)| < t_1),   (iv)

and a second condition C_2(p) given as Equation (v), with t, t_1 and the remaining threshold of Equation (v) being pre-defined threshold values. Using the conditions of Equations (iv) and (v), a binary skin colour mask for view n is defined as,

S_n(p) = 1 for C_1(p) = true and C_2(p) = true; 0 otherwise.   (vi)

Equivalently, the binary skin colour mask of Equation (vi) defines a set of pixels,

P_n = {p : S_n(p) = 1},   (vii)

in view n which are classified as skin.
The resulting output of skin detection in block (2) is; (c) Binary skin colour masks S_n(p) and sets of skin colour pixels P_n indicating skin and non-skin colour pixels for each of the video frames contained in output (a).
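As a hedged illustration of the modeling idea in Equations (ii)-(vi), the sketch below fits the two polynomials to a training set of skin pixels and applies a threshold test of the form of condition C_1(p); the polynomial order, the threshold values tG and tB, the assumed RGB channel order, and the omission of the second condition C_2(p) of Equation (v) are all simplifications made for brevity.

```python
import numpy as np

def fit_skin_model(train_rgb, k=3):
    """Fit the two polynomials f_G(R) and f_B(R) of Equations (ii)-(iii)
    to a training set of skin pixels (N x 3 array of R, G, B values)."""
    R, G, B = train_rgb[:, 0], train_rgb[:, 1], train_rgb[:, 2]
    fG = np.poly1d(np.polyfit(R, G, k))   # predicts expected G from R
    fB = np.poly1d(np.polyfit(R, B, k))   # predicts expected B from R
    return fG, fB

def skin_mask(frame, fG, fB, tG=25.0, tB=25.0):
    """Binary mask S_n(p): 1 where G and B lie close to the values the
    model predicts from R (a condition C_1(p) style test), 0 otherwise.
    Assumes the frame is an H x W x 3 array in R, G, B channel order."""
    R = frame[..., 0].astype(float)
    G = frame[..., 1].astype(float)
    B = frame[..., 2].astype(float)
    c1 = (np.abs(G - fG(R)) < tG) & (np.abs(B - fB(R)) < tB)
    return c1.astype(np.uint8)
```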
(3) Voxel-based Scene Analysis
Voxel-based analysis refers to the method of sampling the 3D space of the lecture room and relating these 3D locations to the video data to determine occupied regions in space. The concerned invention uses voxelization in block (3) to determine regions occupied by skin in the 3D space of the lecture room. In order to apply voxelization, the system must be supplied with a pre-defined sampling resolution for the x, y and z dimensions of the lecture room. The default installation of the system uses a 0.5m sampling interval in each dimension. Under this configuration, a single voxel represents a volume of 0.125m³ in space.
The tracking region can be pre-defined by the user of the system if necessary and it is possible to restrict the system to only track within certain zones. This enables the user to define a custom region-of-interest within the lecture room representing a tracking zone. Without any user intervention, the default tracking zone corresponds to the volume of space where the field-of-view of at least two of the available cameras overlap. The facility to pre-define a tracking zone can also be useful to reduce the computational requirements of the system. For instance, in a normal lecture setting the best deployment of the system is to define a tracking zone to only cover the height range of a standing or seated person.
Typically, regions above 2m and below 0m can be omitted from analysis by the system in this case since a person's head is not likely to be detected outside of this range.
The result of the process of voxel analysis in block (3) provides the system with a set of known 3D locations X_r, r = 1, ..., R, defining the centroids of R voxels in space. The locations X_r define a tracking zone in which speakers are to be tracked.
Using the projection operators defined in output (b) of block (1), the centroid of each voxel is projected into the 2D view of each camera. In order to ease the computational burden of this analysis, the system performs a once-off projection of each voxel centroid to its corresponding pixel location in each camera view. The system then maintains in memory look-up tables defining mappings of voxel locations to pixel locations for each camera view. Since the configuration of cameras is un-restricted, it is possible some voxel regions will not be visible in all camera views. The look-up tables which the system maintains also record additional information for each voxel such as its visibility in each view. A voxel is deemed visible in a camera view only if its corresponding pixel location is within the bounds of the camera's resolution. For instance, for a video camera with a resolution of 640x480, if the projection of a voxel into this view results in a pixel location outside of the known pixel resolution, then it is classified as not visible in that view.
With the voxel-to-pixel mappings defined by the system, a binary decision of occupancy is made for each voxel using the skin colour masks S_n(p) obtained through output (c). If the pixel location of a voxel in two or more video frames is found to occupy a location classified as skin, then that voxel is classified as occupied. Otherwise, the voxel location is deemed to be unoccupied. For example, a voxel with associated pixel locations p_n, for n = 1, ..., N_c, is classified as occupied if,

Σ_n S_n(p_n) ≥ 2.   (viii)

The result of this analysis is a set X = {X_1, ..., X_K} of K occupied voxels within the space of the lecture room. These occupied voxels represent 3D volumes in space occupied by skin, thus indicating the likely location of faces. This set of occupied voxels is later used by the system to infer likely head locations within the lecture room. The output of the voxel-based analysis in block (3) is; (d) A set X of occupied voxels defining regions of skin in the tracking zone of the lecture room.
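The occupancy test of Equation (viii) could be sketched as follows; this naive per-frame loop stands in for the pre-computed voxel-to-pixel look-up tables described above, and the function and parameter names are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def occupied_voxels(voxel_centroids, projectors, skin_masks, min_views=2):
    """Return the subset of voxel centroids whose projections land on
    skin pixels in at least `min_views` camera views (Equation (viii)).

    voxel_centroids : (R, 3) array of 3D points
    projectors      : list of P_n(X) functions, one per camera
    skin_masks      : list of 2D binary arrays S_n(p), one per camera
    """
    occupied = []
    for X in voxel_centroids:
        hits = 0
        for P, S in zip(projectors, skin_masks):
            u, v = np.round(P(X)).astype(int)
            h, w = S.shape
            if 0 <= v < h and 0 <= u < w:      # voxel visible in this view
                hits += int(S[v, u])
        if hits >= min_views:
            occupied.append(X)
    return np.array(occupied)
```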
(4) Connected Component Analysis
Since a single voxel only represents a small volume of space, it is likely that multiple closely positioned voxels will be occupied by skin at locations corresponding to faces. As a consequence, it can be assumed that single isolated occupied voxels are unlikely to represent head locations and are more likely due to hands, arms or inaccurate skin colour detection.
Such small skin regions are discarded by the system before further analysis. To perform this task, it is necessary to sub-divide the set X of occupied voxels from output (d) into separate groups of connected voxels representing more compact occupied regions. The system employs a connected component analysis to group occupied voxels based on their relative proximity to other occupied voxels. Once this analysis has been completed the system ranks each connected voxel region based on its size and removes the smallest connected regions containing only one voxel.
The above steps are necessary so as to filter skin regions corresponding to non-faces from the voxel data. In brief, a 3D connected component analysis on the voxel data X is employed to define a set X_c = {X_c,1, ..., X_c,Kc} of K_c connected occupied voxel groups.
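A minimal sketch of this 3D connected component grouping is given below, assuming SciPy's ndimage.label is available and the occupied voxels have first been rasterised into a boolean grid; 26-connectivity and the single-voxel size cut-off are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def connected_voxel_groups(occupancy_grid, min_size=2):
    """Group occupied voxels into 3D connected components and drop
    isolated single-voxel regions, as described above.

    occupancy_grid : 3D boolean array indexed by (ix, iy, iz)
    returns        : list of arrays of voxel indices, one per kept group
    """
    # 26-connectivity: voxels touching by face, edge or corner are connected
    structure = np.ones((3, 3, 3), dtype=int)
    labels, n = ndimage.label(occupancy_grid, structure=structure)
    groups = []
    for lab in range(1, n + 1):
        idx = np.argwhere(labels == lab)
        if len(idx) >= min_size:               # discard lone voxels
            groups.append(idx)
    return groups
```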
In a similar manner to the above, the system uses connected component analysis to identify possible face locations from the skin colour masks of each view. This acts to sub-divide the output (c) into groups of connected skin colour pixels.
This results in transforming the set P_n of view n from Equation (vii) into K_n groups of connected skin pixel regions, P_n = {P_n,1, ..., P_n,Kn}.
Two outputs are generated by the connected component analysis block in the system. These are; (e) K_c groups of connected occupied voxels defined in the set X_c.
(f) K_n groups of connected skin colour pixels in view n defined in the set P_n.

(5) 3D Person Localisation
Using the set of connected voxel groups and connected pixel groups it is necessary to identify head locations within these occupied regions. An ellipsoidal head model is defined based on the average size of a person's head. The system assumes a default ellipsoidal head model with axes in the x, y and z dimensions of 0.194m, 0.145m and 0.241m respectively. The head model has four degrees of freedom; three degrees of freedom corresponding to a 3D translation and one degree of freedom corresponding to a rotation in the x-y plane.
In block (5) of the system, connected voxel regions in X_c are associated with their corresponding connected 2D skin colour regions within the sets P_n. Once this is determined, the head model is fitted to each connected voxel region in X_c as well as its corresponding connected 2D skin colour region in P_n. When fitting the head model to a group of connected voxels, the fitting process ensures that the estimated location and rotation of the head best describes the observed shape of both the voxel and skin mask data. Once the head model has been fitted to each group of occupied voxels in X_c, block (5) outputs; (g) A set x = {x_1, ..., x_Kc} of head centroid positions and a set of corresponding values of head rotation in the x-y plane.
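As a rough sketch only: the disclosed procedure constrains the ellipsoid fit jointly to the 3D voxel group and to its associated 2D skin regions, whereas the fragment below estimates just the head centroid and the in-plane rotation from the voxel points via their covariance. It is an assumed simplification, not the actual fitting procedure.

```python
import numpy as np

def fit_head(voxel_points):
    """Crude fit of an ellipsoidal head model to one connected voxel
    group: the centroid gives the 3D translation, and the principal
    direction of the points in the x-y plane gives the rotation angle.

    voxel_points : (N, 3) array of 3D voxel centroid coordinates
    returns      : (centroid, theta) with theta the x-y plane rotation
    """
    pts = np.asarray(voxel_points, dtype=float)
    c = pts.mean(axis=0)                               # head centroid estimate
    xy = pts[:, :2] - c[:2]
    cov = np.cov(xy.T) if len(xy) > 1 else np.eye(2)   # in-plane scatter
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]             # dominant in-plane direction
    theta = np.arctan2(major[1], major[0])             # rotation in the x-y plane
    return c, theta
```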
(6) Microphone Array Data Processing
Microphone array data processing relates to pre-processing tasks such as filtering which are applied to the available audio streams before being utilised by the system. These pre-processes can be applied to remove channel noise, or background noise sources. Additional band-pass filtering (in the speech frequency range of approximately 400Hz-5000Hz) can be applied to accentuate the speech content of the signals.
The default configuration of the system uses the raw multi-channel audio data captured by the multiple microphones.
However, the necessary audio pre-filtering is hardware specific and also dependent on the noise conditions of the tracking environment or the noise floor associated with the audio hardware. In general, in noisy tracking environments, noise filtering and speech band-pass filtering will help to improve the task of time-delay estimation later described in block (7). The system also requires knowledge of the positions of the microphones within the lecture room. This information can be obtained by manual measurement or existing automatic or semi-automated microphone calibration techniques. Using the positions of the microphones, the system determines a projection operator M_m(·), m = 1, ..., N_m, for each of the N_m microphone pairs utilized by the system. The projection operator M_m(·) enables the expected time-delay τ_m observed at microphone pair m to be determined for a speech source signal emitted at any 3D location X, i.e.,

τ_m = M_m(X).   (ix)

The system maintains these projection operators in a set M, one for each available pair of microphones.
The pre-processing of the audio data results in the generation of two outputs; (h) Pre-filtered, multi-channel audio data.
(i) The set of projection operators M = {M_1(·), ..., M_Nm(·)} relating points in 3D space to expected time-delay values at all pairs of microphones.
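A sketch of how a microphone-pair projection operator of the form of Equation (ix) could be formed from geometry alone is shown below, assuming free-field propagation at a nominal speed of sound; the microphone coordinates and names are illustrative.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, room-temperature assumption

def make_tdoa_operator(mic_a, mic_b):
    """Return M_m(X): the expected time-delay (seconds) between one
    microphone pair for a source at 3D position X (Equation (ix))."""
    mic_a = np.asarray(mic_a, dtype=float)
    mic_b = np.asarray(mic_b, dtype=float)

    def expected_delay(X):
        X = np.asarray(X, dtype=float)
        # difference in propagation path lengths divided by sound speed
        return (np.linalg.norm(X - mic_a) - np.linalg.norm(X - mic_b)) / SPEED_OF_SOUND
    return expected_delay

# One operator per microphone pair, e.g. two microphones on one wall:
M1 = make_tdoa_operator([0.0, 0.0, 2.0], [4.0, 0.0, 2.0])
print(M1([2.0, 3.0, 1.2]))   # expected delay for a speaker at this position
```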
(7) Time-delay Estimation
The algorithm for tracking the current active speaker relies on obtaining estimates of time-delays between speech signals received at multiple spatially separated microphones.
Mathematically, a signal s(t) at time t received at two spatially separated microphones can be represented as,

x_1(t) = a_1 s(t) + v_1(t),   (x)

x_2(t) = a_2 s(t + τ) + v_2(t),   (xi)

where a_1 and a_2 represent the source signal attenuation factors at the microphones and v_1(t) and v_2(t) denote noise sources.
Using this representation, the time-delay τ between the received signals can be estimated using Generalized Cross-Correlation. The Generalized Cross-Correlation method defines the time-delay estimate as,

τ̂ = argmax_τ R_x1x2(τ),   (xii)
where R_x1x2(τ) is the generalized cross-correlation function defined as,

R_x1x2(τ) = F⁻¹{ W(ω) G_x1x2(ω) },   (xiii)

with F⁻¹{·} defining the inverse Fourier transform, G_x1x2(ω) the cross power spectrum of x_1(t) and x_2(t), and

W(ω) = 1 / |G_x1x2(ω)|.   (xiv)

This is known as the phase transform generalized cross-correlation approach to time-delay estimation. Using Equation (xii), the relative time-delay between each pair of microphones available to the system can be determined to generate a set of time-delay estimates τ̂(t) at time t. The algorithm also estimates time-delays at the previous time step t−1 and the next time step t+1. Therefore time-delay data is analysed by the system over a window of three time steps. In order to estimate delays at time t+1, the system operates at a delay of one time step relative to the current time t. Each of the sets τ̂(t−1), τ̂(t) and τ̂(t+1) contains one time-delay estimate for each pair of microphones.
The output of block (7) is: (j) Three sets of time-delay estimates τ̂(t−1), τ̂(t) and τ̂(t+1) at time steps t−1, t and t+1 respectively.
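The following is a compact sketch of the phase-transform GCC estimator of Equations (xii)-(xiv), not the system's actual implementation; the sampling rate, the zero-padding scheme and the small regularising constant added to |G| are assumptions.

```python
import numpy as np

def gcc_phat(x1, x2, fs, max_tau=None):
    """Phase-transform generalized cross-correlation (Equations (xii)-(xiv)):
    returns the estimated delay of x2 relative to x1 in seconds."""
    n = len(x1) + len(x2)                      # zero-pad to avoid circular wrap
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    G = np.conj(X1) * X2                       # cross power spectrum
    G /= np.abs(G) + 1e-12                     # PHAT weighting 1/|G| (Eq. (xiv))
    r = np.fft.irfft(G, n=n)                   # generalized cross-correlation
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    r = np.concatenate((r[-max_shift:], r[:max_shift + 1]))
    shift = np.argmax(np.abs(r)) - max_shift   # lag of the correlation peak
    return shift / fs

# Example: x2 is x1 delayed by 25 samples at 16 kHz (broadband test signal).
fs = 16000
rng = np.random.default_rng(0)
x1 = rng.standard_normal(fs)
x2 = np.roll(x1, 25)
print(gcc_phat(x1, x2, fs))   # ~ +25/16000 s, i.e. x2 lags x1 by about 1.56 ms
```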
(8) 3D Active Speaker Localisation
The task of 3D active speaker localisation builds a probabilistic likelihood function over the estimated head positions x from output (g) based on the time-delay estimates τ̂(t−1), τ̂(t) and τ̂(t+1). In building the likelihood function the system uses the projection operators of Equation (ix) to evaluate the expected set of time-delays observed at the microphones due to a speaker at every head position in the set x. The likelihood function is then formed over the set of head positions based on how closely the expected time-delays match the time-delay estimates. Since the system analyses time-delays over three time instances, eight possible states of speaker activity are possible for each head location. These states of speaker activity s(t) are summarised in Figure 2. A location is deemed to relate to a specific state of speaker activity based on how the expected time-delays at the location correspond to the time-delay estimates. For example, if for some head position the expected time-delays match the time-delay estimates over the three time steps t−1, t and t+1, then the speaker activity state for that location is s(t) = [1,1,1].
The likelihood of a head location corresponding to a particular state of speaker activity is defined by the system as being proportional to the number of microphone pairs where a match for that state of speaker activity is observed.
By this, a probabilistic likelihood function

p(τ̂(t−1), τ̂(t), τ̂(t+1) | x(t), s(t)),   (xv)

is defined. In this definition, the notation x(t) and s(t) is used to show the dependence of both the speaker location x and the speaker activity state s on time.
The likelihood function is then output through; (k) A probabilistic likelihood function over each estimated head location in x for each possible state of speaker activity s.
(9) Speaker Activity Path Tracking
The tracking of the active speaker through the set of estimated head positions requires the definition of two priors, p(x(t) | x(t−1)) and p(x(t) | s(t)). The first prior p(x(t) | x(t−1)) defines a motion model for the active speaker. This is set to best reflect the expected motion of the active speaker being tracked by the system. The second prior p(x(t) | s(t)) defines a prior probabilistic weighting on a head position x(t) being the location of the active speaker given the speaker's state of speech activity. This prior is used to place a low weighting on states of speaker activity containing silence. For instance, the system ensures that speech activity states such as s(t) = [0,0,0] are given a low prior weighting in the tracking algorithm. Given that the set of possible speaker positions x and speaker activity states s are both discrete, the tracking space can be represented as a 3D trellis. This is illustrated in Figure 3. Using the two priors and the likelihood obtained through output (k), the system employs the Viterbi algorithm to jointly estimate the speaker's location and state of speech activity at each time step. This joint estimate at each time step is referred to as the speaker activity path. An example speaker activity path through the 3D trellis structure of the tracking space is illustrated in Figure 3. The Viterbi algorithm operates over a temporal window of the observed audio and video data. In the offline case, the temporal window used by the system consists of the complete duration of the captured data. When the system is implemented on-line however, a smaller temporal window about the current time is used. The estimation of speaker locations over a temporal window means that speakers are tracked at a time slightly delayed relative to the current time.
The output of block (9) in the system is; (l) A joint estimate of speaker location and speech activity over a temporal window.
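A generic Viterbi decoder is sketched below to illustrate the MAP path estimation over the discrete trellis of (head position, speech-activity state) pairs; the log-probability interface and array shapes are assumptions, and in this setting the motion and activity priors of block (9) would be folded into the prior and transition terms, with Equation (xv) supplying the per-step log-likelihoods.

```python
import numpy as np

def viterbi(log_prior, log_trans, log_lik):
    """MAP state path through a discrete trellis (generic Viterbi).

    log_prior : (S,)   log probability of each state at the first step
    log_trans : (S, S) log p(state j at t | state i at t-1)
    log_lik   : (T, S) log-likelihood of the observations given each state
    returns   : length-T array of state indices (the MAP path)
    """
    T, S = log_lik.shape
    delta = log_prior + log_lik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # all transitions i -> j
        back[t] = np.argmax(scores, axis=0)          # best predecessor per state
        delta = scores[back[t], np.arange(S)] + log_lik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):                   # backtrack the stored pointers
        path[t] = back[t + 1, path[t + 1]]
    return path
```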
(10) Visual Segmentation
With the location of the active speaker defined in output (l), the system uses the camera projection operators of Equation (i) to determine the head location and outline of the speaker in each camera view. Since heads are defined as ellipsoids by the system, they appear as ellipses when projected into the camera views. The purpose of the visual segmentation step is to evaluate the ellipse regions in each camera view where the active speaker's head is located. The visual segmentation component of the system then outputs; (m) A set of 2D ellipse locations in each camera view corresponding to the head location of the tracked active speaker.
(11) Best-view Selection
The elliptical regions from output (m) enable the speaker's head region to be segmented in the skin colour masks S_n(p) of each view. Using this information, the system evaluates which of the ellipses encompasses the largest number of skin colour pixels. The view corresponding to this ellipse is then classified as the best view of the speaker. The premise in this classification scheme is that the most frontal view of a speaker's face will contain the most visible skin. This view is then cropped about the location of the active speaker's head, which forms output; (n) Segmented frontal view of the active speaker's face.
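As an illustrative sketch of the best-view rule described above (largest skin-pixel count inside the projected head ellipse); axis-aligned ellipses are assumed for simplicity, so the rotation of the projected head outline is ignored, and the ellipse parameterisation and names are assumptions.

```python
import numpy as np

def best_view(skin_masks, head_ellipses):
    """Pick the camera view whose head ellipse contains the most skin
    pixels (the 'most frontal face shows most skin' premise).

    skin_masks    : list of 2D binary arrays S_n(p), one per view
    head_ellipses : list of (cx, cy, ax, ay) per view, or None if the
                    head is not visible in that view
    """
    best_n, best_count = None, -1
    for n, (S, ell) in enumerate(zip(skin_masks, head_ellipses)):
        if ell is None:
            continue
        cx, cy, ax, ay = ell
        ys, xs = np.mgrid[0:S.shape[0], 0:S.shape[1]]
        inside = ((xs - cx) / ax) ** 2 + ((ys - cy) / ay) ** 2 <= 1.0
        count = int(np.sum(S[inside]))                 # skin pixels inside the ellipse
        if count > best_count:
            best_n, best_count = n, count
    return best_n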
(12) Compose Composite View
Block (12) retrieves the segmented view of the active speaker from output (n) and embeds this view into a pre-defined main lecture view. The main lecture view is specified by the user and can change over time or remain constant for the duration of the capture. The final output of the system is; (o) A single-view video presentation consisting of a user-defined main view and an inserted frontal facial view of the current active speaker.
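Finally, a minimal sketch of the picture-in-picture composition performed by block (12); the inset position, its scale and the plain nearest-neighbour resize are arbitrary assumptions, and the real system's layout rules are not specified here.

```python
import numpy as np

def compose_frame(main_frame, speaker_crop, corner=(20, 20), scale=0.3):
    """Paste a resized crop of the active speaker into the user-defined
    main view (picture-in-picture composite), returning a new frame.
    Assumes the resized inset fits inside the main frame at `corner`."""
    out = main_frame.copy()
    H, W = main_frame.shape[:2]
    h = max(1, int(H * scale))
    w = max(1, int(h * speaker_crop.shape[1] / speaker_crop.shape[0]))
    # nearest-neighbour resize with plain NumPy indexing (no OpenCV dependency)
    ys = (np.arange(h) * speaker_crop.shape[0] / h).astype(int)
    xs = (np.arange(w) * speaker_crop.shape[1] / w).astype(int)
    small = speaker_crop[ys][:, xs]
    y0, x0 = corner
    out[y0:y0 + h, x0:x0 + w] = small
    return out
```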
The embodiments in the invention described with reference to the drawings comprise a computer apparatus and/or processes performed in a computer apparatus. However, the invention also extends to computer programs, particularly computer programs stored on or in a carrier adapted to bring the invention into practice. The program may be in the form of source code, object code, or a code intermediate source and object code, such as in partially compiled form or in any other form suitable for use in the implementation of the method according to the invention. The carrier may comprise a storage medium such as ROM, e.g. CD ROM, or magnetic recording medium, e.g. a floppy disk or hard disk. The carrier may be an electrical or optical signal which may be transmitted via an electrical or an optical cable or by radio or other means.
In the specification the terms "comprise, comprises, comprised and comprising" or any variation thereof and the terms "include, includes, included and including" or any variation thereof are considered to be totally interchangeable and they should all be afforded the widest possible interpretation and vice versa.
The invention is not limited to the embodiments hereinbefore described but may be varied in both construction and detail.

Claims (19)

  1. 1. A method for the automated production of a single video file from a multi-view video capture, the method comprising the steps of: i) gathering video data from a plurality of camera sources; ii) gathering audio data from a plurality of microphone sources; iii) using audio and video information to automatically locate and track a moving target object in a 3D space, so as to determine the region occupied by the said target object in each available camera; iv) determining from the identified regions in each camera view, the most optimum view of said target object; and v) composing a single view video sequence consisting of a user defined main view and an automatically inserted optimum view of said target object over the duration of the video capture.
  2. 2. The method as claimed in claim 1 wherein the target object is a person.
  3. 3. The method as claimed in claims 1 or 2 wherein the 3D space is a lecture theatre, seminar room or auditorium.
  4. 4. The method as claimed in any preceding claim comprising the step of employing voxelization to spatially sample the 3D space of the tracking environment in order to determine hypothesised target object positions.
  5. 5. The method as claimed in claim 4 wherein each voxel represents a hypothesised target object position which is confirmed or rejected dependent on a predefined criteria.
  6. 6. The method as claimed in claim 5 wherein said predefined criteria comprises a skin colour mask obtained from multiple camera views to indicate likely regions of person occupancy.
  7. 7. The method as claimed in claims 5 or 6 comprising the step of analysing 3D foreground denoting possible object target occupancy from which individual regions of the foreground can be determined through a 3D connect component and shape analysis.
  8. 8. The method as claimed in claim 7 comprising the step of using a 2D connected component analysis on each skin colour mask to enable individual connected 3D foreground regions to be associated with connected skin colour regions in each view.
  9. 9. The method as claimed in any preceding claim comprising the step of defining an ellipsoidal head model and constraining the fitting of the ellipsoid to the 3D foreground corresponding said target object as well as its corresponding connected skin region in each view.
  10. 10. The method as claimed in any preceding claim where the target object is a person, comprising the step of resolving the location of the current active speaker from a plurality of identified head positions using a plurality of time-delay estimates obtained from multiple pairs of microphones.
  11. 11. The method as claimed in any of claim 7 to 10 comprising the step of modelling said skin colour under varying illumination.
  12. 12. The method as claimed in claim 11 wherein said modelling step is performed for skin colour detection under conditions of low illumination.
  13. 13. The method as claimed in any preceding claim comprising the step of examining object activity over a window of a pre-defined number of time steps centred at the current time instance to assign a high probability to target object positions which correspond to significant target object activity.
  14. 14. The method as claimed in 13 where said target object is a person and activity corresponds to speech activity.
  15. 15. The method as claimed in any preceding claim comprising the step of using the Viterbi algorithm to obtain a Maximum a Posteriori (MAP) estimate of the path of the target object activity through the identified plurality of target object positions over the duration of audio and video capture.
  16. 16. The method as claimed in any preceding claim comprising the further step of segmenting said target object in each available view and using a best-view selection criteria to determine the most optimum segmented view displaying the target object.
  17. 17. The method as claimed in 16 where the target object is a person and the best-view selection criteria is determined as the segmented head view corresponding to that in which the largest area of detected skin is visible.
  18. 18. A computer program comprising program instructions for causing a computer to perform the method of any one of claims 1 to 17.
  19. 19. A system for the automated production of a single video file from a multi-view video capture, the system comprising: i) means for gathering video data from a plurality of camera sources; ii) means for gathering audio data from a plurality of microphone sources; iii) means for using audio and video information to automatically locate and track a moving target object in a 3D space, so as to determine the region occupied by the said target object in each available camera; iv) means for determining from the identified regions in each camera view, the most optimum view of said target object; and v) means for composing a single view video sequence consisting of a user defined main view and an automatically inserted optimum view of said target object over the duration of the video capture.
GB1012174.7A 2010-07-20 2010-07-20 Automated video production Withdrawn GB2482140A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1012174.7A GB2482140A (en) 2010-07-20 2010-07-20 Automated video production

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1012174.7A GB2482140A (en) 2010-07-20 2010-07-20 Automated video production

Publications (2)

Publication Number Publication Date
GB201012174D0 GB201012174D0 (en) 2010-09-01
GB2482140A true GB2482140A (en) 2012-01-25

Family

ID=42735207

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1012174.7A Withdrawn GB2482140A (en) 2010-07-20 2010-07-20 Automated video production

Country Status (1)

Country Link
GB (1) GB2482140A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2528060A (en) * 2014-07-08 2016-01-13 Ibm Peer to peer audio video device communication
US9305600B2 (en) 2013-01-24 2016-04-05 Provost Fellows And Scholars Of The College Of The Holy And Undivided Trinity Of Queen Elizabeth, Near Dublin Automated video production system and method
US9742976B2 (en) 2014-07-08 2017-08-22 International Business Machines Corporation Peer to peer camera communication
US9781320B2 (en) 2014-07-08 2017-10-03 International Business Machines Corporation Peer to peer lighting communication
US10735882B2 (en) 2018-05-31 2020-08-04 At&T Intellectual Property I, L.P. Method of audio-assisted field of view prediction for spherical video streaming

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2351628A (en) * 1999-04-14 2001-01-03 Canon Kk Image and sound processing apparatus
US20020196327A1 (en) * 2001-06-14 2002-12-26 Yong Rui Automated video production system and method using expert video production rules for online publishing of lectures
KR20050013461A (en) * 2003-07-28 2005-02-04 이솔정보통신(주) System for editing an educational video
GB2409030A (en) * 2003-12-11 2005-06-15 Sony Uk Ltd Face detection
US20060005136A1 (en) * 2004-06-30 2006-01-05 Microsoft Corporation Portable solution for automatic camera management
WO2010016059A1 (en) * 2008-08-04 2010-02-11 Lior Friedman System for automatic production of lectures and presentations for live or on-demand publishing and sharing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2351628A (en) * 1999-04-14 2001-01-03 Canon Kk Image and sound processing apparatus
US20020196327A1 (en) * 2001-06-14 2002-12-26 Yong Rui Automated video production system and method using expert video production rules for online publishing of lectures
KR20050013461A (en) * 2003-07-28 2005-02-04 이솔정보통신(주) System for editing an educational video
GB2409030A (en) * 2003-12-11 2005-06-15 Sony Uk Ltd Face detection
US20060005136A1 (en) * 2004-06-30 2006-01-05 Microsoft Corporation Portable solution for automatic camera management
WO2010016059A1 (en) * 2008-08-04 2010-02-11 Lior Friedman System for automatic production of lectures and presentations for live or on-demand publishing and sharing

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9305600B2 (en) 2013-01-24 2016-04-05 Provost Fellows And Scholars Of The College Of The Holy And Undivided Trinity Of Queen Elizabeth, Near Dublin Automated video production system and method
GB2528060A (en) * 2014-07-08 2016-01-13 Ibm Peer to peer audio video device communication
GB2528060B (en) * 2014-07-08 2016-08-03 Ibm Peer to peer audio video device communication
US9742976B2 (en) 2014-07-08 2017-08-22 International Business Machines Corporation Peer to peer camera communication
US9781320B2 (en) 2014-07-08 2017-10-03 International Business Machines Corporation Peer to peer lighting communication
US9948846B2 (en) 2014-07-08 2018-04-17 International Business Machines Corporation Peer to peer audio video device communication
US9955062B2 (en) 2014-07-08 2018-04-24 International Business Machines Corporation Peer to peer audio video device communication
US10257404B2 (en) 2014-07-08 2019-04-09 International Business Machines Corporation Peer to peer audio video device communication
US10270955B2 (en) 2014-07-08 2019-04-23 International Business Machines Corporation Peer to peer audio video device communication
US10735882B2 (en) 2018-05-31 2020-08-04 At&T Intellectual Property I, L.P. Method of audio-assisted field of view prediction for spherical video streaming
US11463835B2 (en) 2018-05-31 2022-10-04 At&T Intellectual Property I, L.P. Method of audio-assisted field of view prediction for spherical video streaming
US12010504B2 (en) 2018-05-31 2024-06-11 At&T Intellectual Property I, L.P. Method of audio-assisted field of view prediction for spherical video streaming

Also Published As

Publication number Publication date
GB201012174D0 (en) 2010-09-01

Similar Documents

Publication Publication Date Title
US10074012B2 (en) Sound and video object tracking
US9305600B2 (en) Automated video production system and method
CN109345556B (en) Neural network foreground separation for mixed reality
US7512883B2 (en) Portable solution for automatic camera management
US8922628B2 (en) System and process for transforming two-dimensional images into three-dimensional images
JP7034666B2 (en) Virtual viewpoint image generator, generation method and program
CN107820037B (en) Audio signal, image processing method, device and system
US20060251384A1 (en) Automatic video editing for real-time multi-point video conferencing
KR20030036747A (en) Method and apparatus for superimposing a user image in an original image
CN104169842B (en) For controlling method, the method for operating video clip, face orientation detector and the videoconference server of video clip
GB2482140A (en) Automated video production
US20160360150A1 (en) Method an apparatus for isolating an active participant in a group of participants
CN109814718A (en) A kind of multi-modal information acquisition system based on Kinect V2
US20180082716A1 (en) Auto-directing media construction
Stoll et al. Automatic camera selection, shot size and video editing in theater multi-camera recordings
KR102299565B1 (en) Method for real time person object cognition and real time image processing on real time broadcasting image and apparatus for performing the method
KR20130001635A (en) Method and apparatus for generating depth map
Lampi et al. An automatic cameraman in a lecture recording system
Gandhi et al. A computational framework for vertical video editing
Takacs et al. Hyper 360—towards a unified tool set supporting next generation VR film and TV productions
US11308586B2 (en) Method for applying a vignette effect to rendered images
CN108540867B (en) Film correction method and system
Kyriakakis et al. Video-based head tracking for improvements in multichannel loudspeaker audio
EP3101839A1 (en) Method and apparatus for isolating an active participant in a group of participants using light field information
Spors et al. Joint audio-video object tracking

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)