WO2010059481A1 - Multiple video camera processing for teleconferencing - Google Patents

Multiple video camera processing for teleconferencing Download PDF

Info

Publication number
WO2010059481A1
Authority
WO
WIPO (PCT)
Prior art keywords
view
people
views
camera
participant
Prior art date
Application number
PCT/US2009/064061
Other languages
English (en)
French (fr)
Inventor
Joseph T. Friel
J. William Mauchly
Original Assignee
Cisco Technology, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology, Inc. filed Critical Cisco Technology, Inc.
Priority to CN200980155006.3A priority Critical patent/CN102282847B/zh
Priority to EP09752672.7A priority patent/EP2368364B1/en
Publication of WO2010059481A1 publication Critical patent/WO2010059481A1/en

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/15: Conference systems

Definitions

  • the present disclosure relates generally to videoconferencing systems.
  • One example of a telepresence system is the CISCO CTS3000 Telepresence system, by Cisco Systems, Inc.
  • Seat locations are fixed.
  • Cameras have a fixed focus, zoom, and angle to reproduce each member in a life-size "close-up" on the matched video display.
  • Some known systems use pan-tilt-zoom (PTZ) and/or electronic PTZ (EPTZ) cameras.
  • the cameras must be manually steered by a person to achieve a good view. While this is bothersome with one camera, it becomes untenable in a multi-camera situation.
  • FIG. 1A shows a top view of a first example arrangement of a conference room in which three cameras are used for videoconferencing according to an embodiment of the present invention.
  • FIG. 1B shows a top view of a second example arrangement of a conference room in which two cameras are used for videoconferencing according to an embodiment of the present invention.
  • FIG. 1C shows a top view of a third example in which three video cameras 121, 123, and 125 are used for videoconferencing according to an embodiment of the present invention.
  • FIG. 2 shows a simplified functional block diagram of one embodiment of the invention, applicable, for example, to the arrangement of participants shown in FIG. 1A.
  • FIG. 3 shows a simplified functional block diagram of one embodiment of the invention, applicable, for example, to the arrangements of participants shown in FIGS. 1B and 1C.
  • FIG. 4 shows a flowchart of a method embodiment of operating a processing system according to an embodiment of the present invention.
  • FIG. 5 shows a flowchart of another method embodiment of operating a processing system according to an embodiment of the present invention.
  • FIG. 6 shows a line drawing from a photograph of an example of a wide angle camera view in a typical conference room for a video teleconference.
  • FIG. 7 shows a line drawing from a photograph of an example wide angle camera view from a camera on one side of a display screen, according to an embodiment of the present invention.
  • FIG. 8 shows a line drawing from a photograph of an example wide angle camera view from a camera on the opposite side of a display screen to that shown in FIG. 7, according to an embodiment of the present invention.
  • FIG. 9 shows a line drawing from a photograph of a people view that would be transmitted to a remote endpoint in the example shown in FIGS. 6 and 7, according to an embodiment of the present invention.
  • FIG. 10 shows a simplified block diagram of a teleconferencing system that includes teleconference terminal that includes an embodiment of the present invention, and that is coupled to a network to which at least one endpoint is also coupled.
  • Described herein is a teleconference system with video cameras that adapts to the seating positions of a number of people in a room.
  • One or more wide-angle cameras capture wide angle camera views of the participants, who, e.g., are around a table.
  • each face is located by a combination of audio and video information.
  • People shots are composed or selected as if there is a set of "virtual" close-up cameras each producing a people view.
  • the people views generated by the virtual cameras are then used in a teleconference, e.g. a teleconference using multiple display screens.
  • the system does not require a fixed seating arrangement, because it automatically analyzes the scene and positions the virtual electronic pan-tilt-zoom cameras to capture a correct "head and shoulder" people view.
  • Embodiments of the system can produce one or multiple video output streams each containing one or multiple people without requiring a fixed seating arrangement.
  • a feature of some embodiments is that the system can be dynamically deployed. That is, it is not necessary to permanently mount it in a specific location, but rather it may be moved to whatever room is convenient.
  • Embodiments of the present invention include an apparatus and a method that can add an electronic pan-tilt-zoom function and multiple-view capability to a simple telepresence system.
  • Particular embodiments include an apparatus comprising a plurality of video cameras, each configured to capture a respective camera view of at least some participants of a conference, with the camera views together including at least one view of each participant.
  • the apparatus further includes a plurality of microphones and an audio processing module coupled to the plurality of microphones and configured to generate audio data and direction information indicative of the direction of sound received at the microphones.
  • the apparatus also includes a composition element coupled to the video cameras and configured to generate one or more candidate people views, each people view being of an area enclosing a head and shoulders view of at least one participant.
  • the apparatus also has a video director element coupled to the composition module and to the audio processing module and configured to make a selection, according to the direction information, of which at least one of the candidate people views are to be transmitted to one or more remote endpoints.
  • the cameras are set to each generate a candidate people view.
  • the composition element is configured to make a selection of which at least one of the camera views is to be transmitted to the one or more remote endpoints according to the direction information.
  • the apparatus in such a version also includes a video selector element coupled to the video director and to the video cameras and configured to switch in, according to the selection by the video director, at least one of the camera views for compression and transmission to one or more remote endpoints.
  • Other versions of the apparatus further include a face detection element coupled to the cameras and configured to determine the location of each participant's face in each camera view and to output the determined location(s) to the composition element.
  • the camera views in these versions are not necessarily people views.
  • the composition module is coupled to cameras via the face detection element, and further configured to generate according to the determined face locations, one or more candidate people views, each candidate people view being of an area enclosing a head and shoulders view of at least one participant, and to output to the video director candidate view information.
  • the video director is further configured to output selected view information according to the selection by the video director.
  • the apparatus further includes an electronic pan-tilt-zoom element coupled to the video director and to the video cameras and configured to generate, according to the selected view information, video corresponding to the selected at least one of the candidate views for compression and transmission to one or more remote endpoints.
  • the composition element includes a first composition element configured to compose people views, and a second composition element configured to select the candidate people views from the composed people views, such that each participant appears in only one candidate people view.
  • Particular embodiments include a method of operating a processing system.
  • the method includes accepting a plurality of camera views of at least some participants of a conference. Each camera view is from a corresponding video camera, with the camera views together including at least one view of each participant.
  • the method includes accepting audio from a plurality of microphones, and processing the audio from the plurality of microphones to generate audio data and direction information indicative of the direction of sound received at the microphones.
  • the method further includes generating one or more candidate people views, with each people view being of an area enclosing a head and shoulders view of at least one participant.
  • the method also includes making a selection, according to the direction information, of which at least one of the candidate people views are to be transmitted to one or more remote endpoints.
  • the accepted camera views are each a candidate people view
  • the method further includes, in response to the made selection, switching in at least one of the accepted camera views for compression and transmission to one or more remote endpoints.
  • Other versions include detecting any faces in the camera views and determining the location of each detected face in each camera view.
  • the camera views are not necessarily people views, and the generating of the one or more candidate people views is according to the determined face locations, such that each candidate people view is of an area enclosing a head and shoulders view of at least one participant, the generating determining candidate view information.
  • making the selection according to the direction information includes providing selected view information according to the made selection.
  • Such versions include generating according to the selected view information, video corresponding to the selected at least one of the candidate views for compression and transmission to one or more remote endpoints.
  • each participant appears in only one people view.
  • each participant may appear in more than one people view, and the method for such versions further includes composing possible people views, and selecting the candidate people views from the composed possible people views, such that each participant appears in only one candidate people view.
  • Particular embodiments include a method of operating a processing system.
  • the method includes, for a plurality of camera views from corresponding video cameras in a room, detecting any faces in the camera view, determining the location of participants in the room, determining which face or faces is or are in more than one camera view, and, for each subgroup of one or more adjacent faces, composing a people view, selecting respective people views for each respective participant, mapping each people view to one or more determined voice directions, such that each determined voice direction is associated with one of the people views; and selecting one or more people views for transmission to remote endpoints, such that video for the people views selected for transmission can be formed.
  • when a voice direction changes, the method includes switching between people views according to the sound direction.
  • Particular embodiments include a computer-readable medium having encoded thereon executable instructions that when executed by at least one processor of a processing system cause carrying out a method.
  • the method includes, for a plurality of camera views from corresponding video cameras in a room, detecting any faces in the camera view, determining the location of participants in the room, determining which face or faces is or are in more than one camera view, and, for each subgroup of one or more adjacent faces, composing a people view, selecting respective people views for each respective participant, mapping each people view to one or more determined voice directions, such that each determined voice direction is associated with one of the people views; and selecting one or more people views for transmission to remote endpoints, such that video for the people views selected for transmission can be formed.
  • Particular embodiments may provide all, some, or none of these aspects, features, or advantages. Particular embodiments may provide one or more other aspects, features, or advantages, one or more of which may be readily apparent to a person skilled in the art from the figures, descriptions, and claims herein.
  • Embodiments of the present invention use two or more wide-angle cameras, e.g., high definition video cameras. Some embodiments apply electronic pan-tilt-zoom to one or more of the camera views, with face detection used to determine one or more close-up views, each of one or more, e.g., two or three, of the participants.
  • FIG. 1A shows a top view of a first example arrangement of a conference room in which three cameras 121, 123, and 125 are used for videoconferencing according to a first embodiment of the present invention. At least one display screen 127 is located at one end of the conference room in which a table 111 is positioned.
  • FIG. 1B shows a top view of a second example arrangement of a conference room in which two cameras 121, 123 are used for videoconferencing according to an embodiment of the present invention.
  • FIG. 1C shows a top view of a third example in which three video cameras 121, 123, and 125 are used.
  • the display is usually in landscape orientation, showing one or two people side-by-side and life-size, vertically positioned so that the images of their eyes are at the same elevation as the eyes of the people in the room.
  • the table is a typical conference room table, which might be an elongated table, e.g., a rectangular table as shown in FIG. 1A, or, as shown in FIGS. 1B and 1C, an oval table. Participants 101, 102, 103, 104, 105, 106, and 107 in FIG. 1A, and 101, 102, 103, 104, 105, 106, 107, 108, and 109 in each of FIGS. 1B and 1C are around the table.
  • a plurality of cameras is used in a cross-fire arrangement to provide wide angle camera views that in some arrangements, e.g., those of FIGS. 1B and 1C, overlap so that each participant is in at least one view.
  • in the arrangement of FIG. 1A, each participant is in exactly one camera view, while in the arrangements of FIGS. 1B or 1C there may be at least one participant who is in more than one view.
  • the cameras are angled so that each participant's face is in at least one wide-angle view. Thus, for example, if there are participants on opposite sides of the table, by angling the cameras, each such participant's face is in at least one view.
  • Modern videoconferencing systems that use high-definition video cameras in especially configured rooms are often called telepresence systems because they provide for the participants around the table life size images of remote participants on the at least one display screen, as if the remote participants are present.
  • the display is usually in landscape orientation, showing one or two people side-by-side and life-size, vertically positioned so that the images of their eyes are at the same elevation as the eyes of the people in the room.
  • One mechanism is to set up a video conferencing room with a plurality of cameras fixed and located around the room in a radial manner, or spaced apart and pointed out parallel to each other and perpendicular to the display(s), such that when the participants sit around a conference table, a people view of the head and shoulders of each participant is obtained suitable for displaying on a remote screen to give the impression that the participant or participants is/are present at the remote location.
  • One feature of embodiments of the present invention is providing the same effect with a less expensive arrangement of the plurality of cameras set up near the display screen(s) at angles arranged to capture wide-angle views, as shown in the example arrangements of FIGS. 1A-1C.
  • the cameras are near the display; with two cameras near the two sides of the display and if there is a third camera (or only one camera), it is centered directly over the display.
  • the cameras are approximately at eye level of the participants, and may be, in one example, 18 inches from either side of the display.
  • FIG. 2 shows a simplified functional block diagram of one embodiment of the invention, applicable, for example, to the arrangement of participants shown in FIG. 1A.
  • a plurality of cameras 203, e.g., high definition video cameras that each provide a resolution of at least 600 lines of video, e.g., 1920x1080 at 60 frames per second, is arranged such that each camera view shows two or at most three people side by side and close up.
  • each camera has a fixed wide-angle view.
  • the depth of field is arranged for the participants sitting at the table 111 such that for each participant, there is at least one camera that has the participant's face view in focus.
  • the framing is adjusted per camera such that each frame is suitable for a people view of the head and shoulders of the participants suitable for displaying on a remote screen to give the impression that the participant or participants is/are present at the remote location.
  • Each camera view has one, two or possibly three participants. In such an embodiment, every participant appears in one and only one camera view.
  • the cameras are arranged such that the two or three participants that appear in a camera's people view do not significantly obscure each other.
  • a particular participant is captured by the camera position that is farthest away from him or her, which is also the position closest to a "frontal" people view of that participant.
  • in one version, the framing is adjusted per camera such that each camera view has one, two, or possibly three participants, already framed to be suitable for a people view.
  • in another version, the framing is not necessarily adjusted per camera such that each camera view is a people view. Some additional composition may be needed.
  • the cameras are again arranged such that the two or three participants that appear in a camera view's people view(s) do not significantly obscure each other.
  • the people views are such that each person appears in only one people view. A particular participant is captured by the camera position that is farthest away from him or her, which is also the position closest to a "frontal" people view of that participant.
  • where the framing may not necessarily be a people view of the head and shoulders of the participants suitable for displaying on a remote screen to give the impression that the participant or participants is/are present at the remote location, electronic composition is carried out to achieve such a view.
  • a directional microphone subsystem includes two or more microphones 113, arranged, for example, as a microphone array, and an audio processing module 209 coupled to the microphones and configured to generate audio data and direction information indicative of the direction of sound received at the microphones.
  • the direction information is in the form of the angle of sound.
  • One aspect of the invention is applicable to such arrangements, and includes a method of determining which camera view shows the current speaker, in cases where there is not a one-to-one correspondence between microphones and camera views.
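  • The disclosure does not prescribe a particular algorithm for deriving the direction information. Purely as an illustration, the direction of a voice relative to a two-microphone array is commonly estimated from the time difference of arrival (TDOA) between the channels; the sketch below, with hypothetical function and parameter names, shows one such estimate via cross-correlation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate, in room-temperature air

def estimate_direction(left: np.ndarray, right: np.ndarray,
                       mic_spacing_m: float, sample_rate: int) -> float:
    """Estimate the azimuth (radians) of a sound source from a pair of
    microphone signals via the time difference of arrival (TDOA).
    0 rad is broadside (straight ahead); +/- pi/2 is along the mic axis."""
    # The lag of the cross-correlation peak is the inter-channel delay.
    corr = np.correlate(left, right, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(right) - 1)
    tdoa = lag_samples / sample_rate
    # Far-field model: sin(theta) = tdoa * c / d; clamp for safety.
    ratio = np.clip(tdoa * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.arcsin(ratio))
```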
  • the orientation, framing, and scale of each camera, e.g., the location of each person relative to that camera, are arranged with respect to the participants' eye levels such that the people view for such a camera shows two or at most three people in a head and shoulders view that would scale to be life size on a typical teleconference room display screen.
  • the composition module 223 generates information as to which direction is associated with which camera view (a people view in this case).
  • a video director element 225 is coupled to the composition module 223 and to the audio processing module and configured to make a selection, according to the direction information, of which at least one of the candidate people views are to be transmitted to one or more remote endpoints.
  • the video director outputs information to a video selector element 227 to select, according to the selection by the video director, at least one of the camera views for compression and transmission together with a processed version of the audio data to one or more remote endpoints.
  • the selected camera view(s) correspond(s) to the selected candidate people view(s) and become(s) the active people view(s) sent to remote endpoints of the teleconference.
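  • As a concrete sketch of this first embodiment (the mapping from sound angle to camera view is an assumed, setup-time input; the class and method names are hypothetical), the video director can be as simple as a table of angular sectors, one per pre-framed camera view, with the selector switching in the matching camera feed:

```python
from typing import List, Tuple

class VideoDirector:
    """Selects which pre-framed camera view to transmit, given the sound
    direction reported by the audio processing module."""

    def __init__(self, sectors: List[Tuple[float, float]]):
        self.sectors = sectors  # sectors[i] = (min, max) angle for view i
        self.active = 0         # index of the currently selected view

    def on_sound_direction(self, angle: float) -> int:
        """Update and return the selected view for a new sound angle.
        If no sector matches, the current view is kept unchanged."""
        for i, (lo, hi) in enumerate(self.sectors):
            if lo <= angle <= hi:
                self.active = i
                break
        return self.active

# Example: three cameras covering the left, centre, and right of a table.
director = VideoDirector([(-1.6, -0.5), (-0.5, 0.5), (0.5, 1.6)])
selected = director.on_sound_direction(0.9)  # selects view index 2
```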
  • a face detection element 221 accepts the camera views and locates the faces in each camera view.
  • a composition module 223 is coupled to the face detection element 221 and configured to generate candidate people views, with one person in only one candidate people view, and typically, one per camera, each people view being of an area enclosing a head and shoulders view of at least one participant, typically two or three participants.
  • the composition module is arranged such that each people view provides images of a size and layout such that when displayed remotely on a remote display screen, each participant is displayed life size and facing the expected audience in the remote location where the remote display screen is situated.
  • the composition element composes, using information on the frame border locations and on the location and sizes of the heads, the candidate people views, and outputs candidate view information, e.g., in the form of people view size and positions relative to the corresponding camera view frame. These are the possible candidate people views.
  • the video director element 225 is coupled to the composition module 223 and to the audio processing module and configured to make a selection, according to the direction information, of which at least one of the candidate people views are to be transmitted to one or more remote endpoints. As soon as a participant speaks, any change in directional information causes the video director to switch its selection to include the people view that contains the participant who is speaking.
  • One method uses a two-dimensional overhead mapping of the location of the participants in the room for making the selection.
  • the video director element 225 outputs selected candidate view information, e.g., in the form of the selected people view size(s) and position(s) relative to the corresponding camera view frame, such that a real-time electronic pan-tilt-zoom (EPTZ) element 227 can form high definition video frame(s) from the corresponding camera view(s) according to the selection by the video director element.
  • the real time electronic pan-tilt-zoom element 227 is configured to form, e.g., using video rate interpolation, a high definition video frame for each selected people view to be the active people view(s) sent to remote endpoints of the teleconference.
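  • A minimal sketch of what such an EPTZ element does per frame, assuming OpenCV is available for the interpolation (the patent does not mandate a particular interpolation method, and the function name is hypothetical):

```python
import cv2
import numpy as np

def eptz_frame(camera_frame: np.ndarray, view_rect: tuple,
               out_size: tuple = (1920, 1080)) -> np.ndarray:
    """Form one output frame for a selected people view: crop the view
    rectangle (x, y, w, h, in camera-frame pixels) and interpolate it
    to the output resolution (width, height)."""
    x, y, w, h = view_rect
    crop = camera_frame[y:y + h, x:x + w]
    # Video-rate interpolation; Lanczos keeps upscaled faces sharp.
    return cv2.resize(crop, out_size, interpolation=cv2.INTER_LANCZOS4)
```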
  • a video codec and audio codec subsystem 231 is configured to accept the audio and the selected one or more active people video views and, in some embodiments, any other views, and to compress the video and audio for transmission to the other endpoints of the video teleconference.
  • the invention is not limited to any particular architecture for the codecs.
  • the codec subsystem 231 encodes the video in high definition at 60 frames per second.
  • a second set of embodiments is applicable for the case wherein each camera view is a wide angle view that need not be restricted to be a people view, or that need not be limited such that each participant appears in one and only one camera view.
  • the arrangements shown in FIGS. IB and 1C have overlapping camera views that might have the same participant in more than one camera view.
  • Electronic pan-tilt-zoom (EPTZ) is used to create the people views by processing of the video signals in real time, with each people view displaying one, or more typically two or three, e.g., not more than three, participants, suitable for transmission to the remote endpoints.
  • Face detection is used to detect the participants in each camera view.
  • the plurality of microphones is arranged as a microphone array 113 together with an audio processing module configured to associate particular people views with the sensed sounds, such that when a particular participant speaks, the constructed people view that includes the best view of that participant is selected as one of the at least one people view that is transmitted to the other endpoints in the teleconference.
  • FIG. 3 shows a simplified functional block diagram of one embodiment of the invention, applicable, for example, to the arrangements of participants shown in FIGS. 1B and 1C.
  • a plurality of cameras 303, e.g., high definition video cameras, is arranged such that the camera views overlap so that, together, the camera views show all participants.
  • the camera views are wide-angle, and it is possible and likely that one or more participants appear in more than one camera view.
  • a view selection/composition element 305 includes a face detection element 321 to locate the human faces within each of the camera views, and a first composition element 323 ("composition 1") that is coupled to the face detection element 321 and configured to accept face sizes and positions from the camera views, and to compose from the camera views people views of one, two, or three faces.
  • the composition module 323 is arranged such that each people view provides images of a size and layout such that when displayed remotely on a remote display screen, each participant is displayed life size and facing the expected audience in the remote location where the remote display screen is situated.
  • the output of the composition element 323 in one embodiment includes people view information, e.g., in the form of the sizes and locations of the people view(s) relative to the framing of the corresponding camera view(s).
  • the view selection/composition element 305 further includes a second composition element 325 ("composition 2"), a people view selection element configured to accept people view information, e.g., people view size(s) and position(s) relative to the framing of the corresponding camera view(s), from the composition element 323 and to select the people view for each participant to form candidate people views.
  • the output of the people view selection element 325 is in the form of candidate people view information for each candidate people view, e.g., candidate people view size(s) and position(s) relative to the framing of the corresponding camera view(s).
  • the first and second composition elements 323 and 325 together form a composition element that is configured to generate candidate people views.
  • a directional microphone subsystem includes two or more microphones 113, arranged, for example, as a microphone array, and an audio processing module 209 coupled to the microphones and configured to generate audio data and direction information indicative of the direction of sound received at the microphones.
  • the direction information is in the form of the angle of sound.
  • One aspect of the invention is applicable to such an arrangement, and includes a method of mapping, e.g., in the people selection element 325, which of the selected people views to use for which sound direction.
  • a video director element 327 is coupled to the second composition element (the people view selection element) 325 and to the audio processing module and configured to make a selection, according to the direction information, of which at least one of the candidate people views is to be transmitted, the selection in the form of information for real-time video composition in an electronic pan-tilt-zoom (EPTZ) element 329, and for compression and transmission with a processed version of the audio data to one or more remote endpoints.
  • the output of the video director is in the form of the people view information for the one or more, typically one people view that is to be transmitted, e.g., as people view size(s) and position(s) relative to the framing of the corresponding camera view(s).
  • An electronic pan tilt zoom (EPTZ) element 329 is coupled to the view selection/composition module 305, in particular to the video director 327 and to the video outputs of the video cameras 303, and forms, at video rate, the video frames of the people views according to the people view information. This forms the video signal(s) for the active video view(s).
  • a video codec and audio codec subsystem 231 is configured to accept the audio and the video signal(s) for the active video view(s) and, in some embodiments, any other views, and to compress the video and audio for transmission to the other endpoints of the video teleconference.
  • the invention is not limited to any particular architecture for the codecs.
  • the codec subsystem 231 encodes the video in high definition at 60 frames per second.
  • some existing telepresence systems also use a face detection mechanism.
  • the face detection system determines the size and position of a detected face within the view of the camera, and this information is used to steer the camera.
  • Older systems might use a separate wide angle camera and close up pan-tilt-zoom (PTZ) camera.
  • Some systems might simulate this with electronic pan-tilt-zoom that is used to track the location of the speaker and direct the pan-tilt-zoom view to that person.
  • Such tracking approaches differ from those of the present invention in at least that, in embodiments of the present invention, for a "telepresence" experience, the people views are constrained and kept fixed during the duration of a teleconference session. That is, every time a particular participant shows up, that participant is in the same place, to simulate the use of fixed cameras.
  • the direction of sound does not steer an actual or virtual camera, but rather chooses between several fixed virtual (EPTZ) camera views obtained by the composition module and selected by the people selection module such that each person appears in one and only one selected composed people view.
  • Face detection does not directly steer the PTZ, which would only produce simple close-ups of a face in the center of the picture.
  • Each face is ultimately located by a combination of audio and video information.
  • the system is capable of producing multiple video output streams containing multiple people, and yet it does not require a fixed seating arrangement.
  • the high definition video cameras provide at least 1280 by 720 pixels at 60 frames per second, and in some embodiments, 1920x1080 at 60 frames per second.
  • the cameras are arranged to provide fixed, wide-angle views to maintain reasonable image quality even if only a portion of the image is selected.
  • each camera has a relatively large depth-of-field so as to keep all participants in its camera view in focus.
  • the cameras are placed slightly above eye level.
  • FIG. 4 shows a flowchart of one method embodiment of operating a processing system.
  • the method includes in 401 accepting a plurality of camera views of at least some participants of a conference. Each camera view is from a corresponding video camera, with the camera views together including at least one view of each participant.
  • the method also includes in 403 accepting audio from a plurality of microphones and in 405 processing the audio from the plurality of microphones to generate audio data and direction information indicative of the direction of sound received at the microphones.
  • the method includes in 407 generating one or more candidate people views, each people view being of an area enclosing a head and shoulders view of at least one participant.
  • the accepted camera views are each a candidate people view. That is, the cameras are pre-framed to provide people views. 407 in such a case is a trivial step.
  • the camera views are not necessarily pre-set to be people views
  • the method further includes, in 407, detecting any faces in the camera views and determining the location of each detected face in each camera view.
  • the generating of the one or more candidate people views in 407 is according to the determined face locations, such that each candidate people view is of an area enclosing a head and shoulders view of at least one participant, the generating determining candidate view information.
  • the method includes in 409, making a selection, according to the direction information, of which at least one of the candidate people views are to be transmitted to one or more remote endpoints.
  • making the selection according to the direction information includes providing selected view information according to the made selection
  • the method further includes in a 411, in response to the made selection, switching in at least one of the accepted camera views for compression and transmission to one or more remote endpoints.
  • the method includes generating according to the selected view information, video corresponding to the selected at least one of the candidate views for compression and transmission to one or more remote endpoint. The generating uses EPTZ.
  • the method further includes in a step 413, compressing the switched in video, and the audio data, and transmitting the compressed data to one or more endpoints
  • each participant appears in only one people view.
  • each participant may appear in more than one people view.
  • 407 further includes composing possible people views, and selecting the candidate people views from the composed possible people views, such that each participant appears in only one candidate people view.
  • FIG. 5 shows a flowchart of another method embodiment of operating a processing system.
  • the method includes in a face detection step 501, for each camera view from a corresponding view camera in a room, detecting any faces in the camera view.
  • the method further includes, in step 503, determining the location of the participants in the room, e.g., creating a map of the location of faces in the room to locate each participant.
  • the method further includes, in step 505, for composition, determining which face or faces is or are in more than one camera view. That is, detecting the image of each participant who is in more than one camera view.
  • the method further includes, in step 507, again for composition, determining a zoom factor, e.g., for each face, based on face size and/or distance from camera.
  • the method further includes, for each subgroup of one or more adjacent faces, e.g., for each pair of faces, or subgroup of three faces, composing a people view.
  • the zoom for the people view is the average of the zoom factors for the two individual faces.
  • the composition of the people view contains the subgroup of faces inside the people view, e.g., without touching a perimeter band.
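  • A sketch of such a composition step, under stated assumptions: a 16:9 output, a hypothetical target face size in output pixels, and faces reported as (x, y, size) tuples from the face detection step. The function name and defaults are illustrative, not part of the disclosure.

```python
def compose_people_view(faces, aspect=16 / 9, margin=0.15,
                        target_face_px=60.0, out_width_px=1920):
    """Compose a people-view rectangle (x, y, w, h) around a subgroup of
    faces, each given as (x, y, size) in camera-view pixels. The zoom is
    the average of the per-face zoom factors, and the crop is widened so
    no face centre falls inside the perimeter margin band."""
    zoom = sum(target_face_px / s for (_, _, s) in faces) / len(faces)
    xs = [x for (x, _, _) in faces]
    ys = [y for (_, y, _) in faces]
    cx, cy = (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0
    w = out_width_px / zoom               # crop width for the desired zoom
    span = max(max(xs) - min(xs), 1.0)
    w = max(w, span / (1.0 - 2.0 * margin))  # keep faces off the band
    h = w / aspect
    # Clamping the crop to the camera frame is omitted for brevity.
    return (int(cx - w / 2), int(cy - h / 2), int(w), int(h))
```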
  • the method includes selecting respective people views for each respective participant by choosing a subset of the composed people views such that each face is presented in only one of the composed people views in the subset, and such that the subset includes the face of each participant.
  • These candidate views can be considered “virtual camera” views, as if each pair of participants had its own fixed “virtual” camera.
  • Step 513 includes mapping each people view to one or more voice directions, each voice direction determined by an audio process performed in audio processing element 209, which is coupled to two or more microphones and determines from which direction a voice comes, such that each determined voice direction is associated with one of the people views of the subset of people views.
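  • One illustrative way (not prescribed by the disclosure) to realize this mapping is to convert each mapped participant position into a bearing from the microphone array and resolve a reported voice direction to the people view of the nearest participant; the names below are hypothetical.

```python
import math

def map_direction_to_view(angle: float, participants: dict,
                          view_of: dict):
    """Resolve a reported voice direction to a people view.
    participants: {pid: (x, y)} room-map positions with the microphone
    array at the origin; view_of: {pid: view_id}, each participant's
    unique people view. Assumes the audio module and the room map use
    the same angular convention."""
    def bearing(pos):
        return math.atan2(pos[0], pos[1])  # angle seen from the mics
    nearest = min(participants,
                  key=lambda p: abs(bearing(participants[p]) - angle))
    return view_of[nearest]
```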
  • Step 515 includes selecting one or more people views for transmission to remote endpoints, including, when the sound changes, e.g., a voice direction changes, switching between people views according to the sound direction.
  • Step 517 includes forming the video for the people views selected for transmission.
  • the video output is made of cuts or possibly cross fades between the candidate views — the virtual camera views.
  • multiple streams of such virtual camera views — the active people views — are formed for simultaneous transmission and viewing on multiple display screens at an endpoint.
  • the method includes switching automatically between a group shot, showing most or all of the local participants of the conference, and a people view, showing just one or two participants.
  • Step 519 includes encoding or transmitting the audio and those one or more people views selected in 515 and formed in step 517 for transmission to the endpoints of the teleconference.
  • the people view composition of steps 503 to 513 of the method of FIG. 5 occurs at the beginning of a teleconference session.
  • the method uses camera views and constructs people views, each a rectangular region-of-interest within one of the camera views.
  • a people view is essentially a close-up of a subset of the participants, e.g., two of the participants.
  • the view construction occurs at the beginning of the session.
  • the face detection step 501 includes a face detection method reporting, for each view, the position, as an x,y coordinate, of each face within the camera view, and a measure of the size of the face.
  • many face detection methods are known. The invention does not depend on any particular type of face detection method being used.
  • One embodiment of face detection includes eye detection, and includes determining a face size measure according to the distance between the eyes of a face.
  • Another method includes fitting elliptical shapes, e.g., half-ellipses, to edges detected in the camera views to detect the face.
  • one method is as described in commonly assigned U.S. Patent Application No.
  • the face detecting includes at least one of eye detection and/or fitting of elliptical shapes to edges detected in the camera views corresponding to a face.
  • the measure of size of the face is determined by the distance between the detected eyes of the face.
  • the measure of the size of the face is determined from properties of the elliptical shape fitted to the edges of a face.
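  • As one concrete possibility, and purely illustrative since the invention does not depend on any particular face detection method, OpenCV's stock Haar cascades can report a face position and an inter-eye size measure; the function name and fallback fraction are assumptions.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_faces(view_bgr):
    """Return a list of (x, y, size) per detected face: the face centre
    and a size measure (inter-eye distance when both eyes are found)."""
    gray = cv2.cvtColor(view_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[fy:fy + fh, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(roi)
        if len(eyes) >= 2:
            (x1, _, w1, _), (x2, _, w2, _) = eyes[:2]
            size = abs((x1 + w1 / 2.0) - (x2 + w2 / 2.0))
        else:
            size = 0.4 * fw  # fallback: a fraction of the box width
        results.append((fx + fw / 2.0, fy + fh / 2.0, size))
    return results
```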
  • the participant mapping step 503 includes, given the known location and angle of the cameras for each camera view, creating a map of the location of the faces in the room, using the (x,y) location of each face and the multiple views.
  • the method includes converting the determined face size to a depth, that is, a distance from the camera, using the zoom factor of the camera that is known a priori. Thus, each face's approximate distance from the known camera position is determined. Since two or more cameras are used, the faces are matched and triangulation is used to determine their physical position in the room. The method thus locates each participant's face in the room.
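  • A sketch of the depth and triangulation computation under a pinhole-camera assumption; the assumed real inter-eye distance (e.g., 0.065 m) and the function names are hypothetical, not taken from the disclosure.

```python
import math

def face_distance_m(face_size_px: float, real_size_m: float,
                    focal_px: float) -> float:
    """Pinhole-model depth: distance = focal * real_size / image_size.
    real_size_m might be an assumed 0.065 m inter-eye distance."""
    return focal_px * real_size_m / face_size_px

def triangulate(cam_a, bearing_a, cam_b, bearing_b):
    """Intersect two bearings (radians, room frame) from known camera
    positions (x, y) to place a matched face on the room map."""
    (ax, ay), (bx, by) = cam_a, cam_b
    dax, day = math.sin(bearing_a), math.cos(bearing_a)
    dbx, dby = math.sin(bearing_b), math.cos(bearing_b)
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:
        return None  # near-parallel rays: fall back to size-based depth
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)
```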
  • the method includes unique face view selection.
  • Step 505 includes identifying redundant views, including determining which face or faces appear(s) in more than one camera view but are co-located on the map.
  • One embodiment includes verification, including approximate image comparison.
  • the method includes choosing one preferred camera view of each participant from among redundant camera views for any participant. For a particular participant, the best camera view is either the only one, if there is only one camera view for the participant, or, if there is more than one, the one in which the face is more head-on or a full-face view, as opposed to a profile view. For this, information from the face detection stage is used. For example, for methods that fit an ellipse or half-ellipse to each face, the widths of the ellipses or half-ellipses fitted for the same participant are compared. In another embodiment, the location map of 503 is used and the camera view of the camera that is most opposite a participant's face is selected.
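  • For the ellipse-width variant just described, the comparison reduces to picking the redundant view with the widest fitted face, as in this small sketch (the values and function name are hypothetical):

```python
def preferred_view(ellipse_widths: dict) -> int:
    """ellipse_widths: {camera_id: fitted_face_width} for one
    participant across the redundant camera views that contain him or
    her. A wider fitted ellipse indicates a more nearly frontal face."""
    return max(ellipse_widths, key=ellipse_widths.get)

# Example: a participant seen by cameras 121 and 123.
best = preferred_view({121: 38.0, 123: 52.5})  # chooses camera 123
```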
  • a desired composition is pre-determined. For example, one embodiment selects a 16:9 screen containing two participants side-by-side, with the two faces centered at certain positions, e.g., with the eyes nearest predefined locations on the screen and the faces being of a preselected size.
  • the composition element of determining candidate people views includes steps 507 and 509. Two (or more) faces that are adjacent in some camera view are candidates for a people view. A scaling factor (magnification or zoom) is chosen/determined for the group that optimizes face size for all. The faces are framed within the rectangle of the pre-determined desired composition. Thus, a candidate people view is composed for each pair (or more) of participants in a camera view.
  • One method includes evaluating candidate group views.
  • One method includes computing a merit score based on the distance of the faces from the optimal position of the faces according to the pre-determined desired composition. The rectangle of the desired composition is moved to optimize the view, equivalent to carrying out electronic panning.
  • Step 511 includes selecting the composed people view for each participant, such that the selected composed people views include all the participants just once and have the highest total score.
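  • Since a conference room holds few participants, step 511 can be sketched as an exhaustive search for the exact cover with the highest total merit score. The input format below is an assumption; a real system might prune the search.

```python
from itertools import combinations

def select_people_views(candidates, all_faces):
    """candidates: list of (faces, score) pairs, where faces is a
    frozenset of face identifiers. Returns the subset that covers every
    face exactly once with the highest total score, or None if no such
    exact cover exists."""
    best, best_score = None, float("-inf")
    for r in range(1, len(candidates) + 1):
        for combo in combinations(candidates, r):
            covered = [f for faces, _ in combo for f in faces]
            # Exact cover: every face included, none included twice.
            if len(covered) == len(all_faces) and \
                    set(covered) == set(all_faces):
                score = sum(s for _, s in combo)
                if score > best_score:
                    best, best_score = combo, score
    return best
```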
  • the set of group views remains fixed.
  • the views do not actively pan or tilt or zoom to follow movements.
  • the view selection method re-computes a new set of views.
  • a re-computation of the set of people views, i.e., steps 501-513, includes re-computing in the case that the number of faces in one of the people views changes.
  • FIGS. 6-9 show line drawings produced from actual photographs.
  • FIG. 6 shows an example of a wide angle camera view in a typical conference room for a video teleconference from a camera that is positioned approximately at the center of a display screen of the room. This is what is typically seen with a conventional prior art video teleconference system.
  • This camera view also corresponds to what the camera view from camera 125 might be in an arrangement similar to that of FIG. 1C.
  • FIG. 7 shows a wide angle camera view from a camera on one side of the display screen, and corresponds to what the camera view from camera 121 might be in arrangements similar to those of FIGS. 1B and 1C. Also shown in FIG. 7 are the locations of two composed people views, each of two participants. The participant closest to the camera on the left of FIG. 7 obscures a participant behind him.
  • FIG. 8 shows a wide angle camera view from a camera on the other side of the display screen, and corresponds to what the camera view from camera 123 might be in arrangements similar to those of FIGS. 1B and 1C. Also shown in FIG. 8 are the locations of two composed people views, each of two participants.
  • FIG. 9 shows the video people view that would be transmitted to remote endpoints for the two participants that are furthest from the camera in the camera view of FIG. 7, i.e., the two rightmost participants shown in FIG. 6.
  • the result is a set of virtual close-up cameras. These virtual cameras are then used in a multi-screen teleconference.
  • the effective "life-size" images are very similar to those provided by existing "telepresence" teleconferencing systems, such as the CISCO CTS3000 Telepresence System, made by Cisco Systems, Inc., related to the assignee of the present invention.
  • using an embodiment of the present invention does not require a fixed seating arrangement, because it automatically analyzes the scene and positions the virtual cameras to capture the correct "head and shoulder" people view.
  • a teleconference camera system that adapts to the seating positions of a number of participants in a room.
  • One or more, typically two or more, wide-angle cameras capture a group shot of the people, e.g., around a table; the system uses the captured video and audio information to automatically compose people views for "virtual cameras" and chooses between them to generate the life-size, close-up experience of a multi-camera "telepresence" system with fewer cameras, with the cameras located on one side of the room.
  • An embodiment of the invention thus provides the benefits of current telepresence systems, e.g., close-up life-size images, from a conference room that was not specifically designed for telepresence.
  • embodiments of the present invention use two or more cameras that are located in the front near the screens, and that may be portable, to generate positions of multiple virtual cameras that adapt to the seating arrangement.
  • a system such as described herein can be dynamically deployed; it is not necessary to permanently mount the system in a specific location, but rather it may be moved to whatever room is convenient.
  • While processing to select the people view is relatively simple, in another embodiment, processing is carried out, e.g., in the EPTZ element and the composition element, to correct for at least some of the distortions that might be caused by the cameras 303 being at different locations from the "virtual camera" locations being simulated. That is, the electronic pan-tilt-zoom element, jointly with the composition element, is further configured to construct head-on views and correct for at least some of the distortions that occur because the cameras 303 do not take head-on views of the participants.
  • One embodiment uses perspective correction.
  • Such an embodiment uses a perspective model of straight lines that converge at a distant point and assumes that each face is planar. Using the distances of each fitted face, e.g., the distance between eyes, or the width of a fitted half-ellipse, and the known locations of the cameras, geometric transformations are applied to the camera views to correct for the distortion. More sophisticated methods also are possible that correct for any lens distortion caused by the wide angle camera lens. See for example Steve Mann and Rosalind Picard, "Virtual bellows: constructing high quality stills from video," Proceedings, First IEEE International Conference on Image Processing (ICIP-94), Vol. 1, pp. 363-367, Austin, Texas, 13-16 November 1994.
  • Those embodiments of the invention that include correction for distortion are not limited to any particular method of carrying out such correction, and many such methods are known. See for example Shum, H.-Y., and Sing Bing Kang, "A review of image-based rendering techniques," in SPIE Proceedings Vol. 4067, pp. 2-13, Proceedings of the Conference on Visual Communications and Image Processing 2000, Perth, Australia, 20-23 June 2000, for a survey of a few such methods. Many more have been developed since that paper was written.
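  • Under the planar-face perspective model described above, the geometric correction can be sketched as a homography warp, e.g., with OpenCV. The quadrilateral correspondences are assumed to come from the fitted face model and the known camera geometry; the function name is hypothetical.

```python
import cv2
import numpy as np

def correct_perspective(view: np.ndarray, src_quad, dst_quad,
                        out_size: tuple) -> np.ndarray:
    """Warp a people view so an (assumed planar) face appears head-on.
    src_quad: four corners of the face plane as seen in the camera view;
    dst_quad: where those corners would fall for the virtual frontal
    camera. Both are 4x2 point arrays; out_size is (width, height)."""
    H = cv2.getPerspectiveTransform(np.float32(src_quad),
                                    np.float32(dst_quad))
    return cv2.warpPerspective(view, H, out_size)
```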
  • FIG. 10 shows a simplified block diagram of a teleconferencing system that includes teleconference terminal 1001 coupled to a network 1007 to which at least one endpoint 1009 is also coupled so that a video teleconference can take place between the terminal 1001 and the at least one endpoint 1009.
  • Terminal 1001 includes an embodiment of the present invention, e.g., that of FIG. 3.
  • the terminal 1001 includes a plurality of video cameras 303, and a plurality of microphones 113.
  • a different version implements the apparatus shown in FIG. 2, in which case the cameras are cameras 203.
  • a set of one or more display screens 921 also is included.
  • a processing system 1003 includes at least one programmable processor 1011 and a storage subsystem 1013.
  • the storage subsystem includes at least memory, and is encoded with software, shown as program 1015. Different versions of the program 1015, when executed by the at least one processor 1011, cause the processing system 1003 to carry out the method embodiments described in this description.
  • the processing system includes a coder/decoder subsystem 1017 that in one embodiment includes, for the video coding/decoding, a plurality of processors and memory, the memory including program code that causes the processors to execute a method such that the coder/decoder subsystem codes and/or decodes high definition video.
  • the processing system further includes a communication subsystem 1019 that, together with the at least one programmable processor 1011, takes care of communication aspects of operation of the terminal, and that includes an interface to the network 1007.
  • processing system 1003 is shown in simplified form only, without many of the inner workings shown, in order not to obscure the inventive aspects of the present invention.
  • a computer-readable storage medium is encoded with instructions that when executed by one or more processors of a processing system, e.g., in a virtual camera people view composition apparatus of a teleconferencing terminal, cause carrying out any of the methods described herein.
  • Terms such as “processing” or “computing” refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.
  • A “processor” or “machine” may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory, to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory.
  • a "computer” or a “computing machine” or a “computing platform” may include one or more processors.
  • the methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) logic encoded on one or more computer-readable tangible media in which are encoded a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein.
  • Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included.
  • One example is a typical processing system that includes one or more processors.
  • Each processor may include one or more of a CPU, a graphics processing unit, and a programmable DSP unit.
  • the processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM.
  • a bus subsystem may be included for communicating between the components.
  • the processing system further may be a distributed processing system with processors coupled by a network. If the processing system requires a display, such a display may be included, e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT) display. If manual data entry is required, the processing system also includes an input device such as one or more of an alphanumeric input unit such as a keyboard, a pointing control device such as a mouse, and so forth.
  • the term memory unit as used herein, if clear from the context and unless explicitly stated otherwise, also encompasses a storage system such as a disk drive unit.
  • the processing system in some configurations may include a sound output device, and a network interface device.
  • the memory subsystem thus includes a computer-readable medium that carries logic (e.g., software) including a set of instructions to cause performing, when executed by one or more processors, one of more of the methods described herein.
  • the software may reside in the hard disk, or may also reside, completely or at least partially, within the RAM and/or within the processor during execution thereof by the computer system.
  • the memory and the processor also constitute a computer-readable medium on which is encoded logic, e.g., in the form of instructions.
  • a computer-readable medium may form, or be included in, a computer program product.
  • the one or more processors may operate as a standalone device or may be connected, e.g., networked, to other processor(s); in a networked deployment, the one or more processors may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer or distributed network environment.
  • the one or more processors may form a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • each of the methods described herein is in the form of a medium in which is encoded a set of instructions, e.g., a computer program, that is for execution on one or more processors, e.g., one or more processors that are part of an encoding system.
  • embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a medium, e.g., a computer program product.
  • the computer-readable medium carries logic including a set of instructions that when executed on one or more processors cause the apparatus that includes the processor or processors to implement a method.
  • aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
  • the present invention may take the form of medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium.
  • While a medium is shown in an example embodiment to be a single medium, the term “medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by one or more of the processors and that causes the carrying out of any one or more of the methodologies of the present invention.
  • a medium may take many forms, including tangible storage media.
  • Non-volatile media includes, for example, optical, magnetic disks, and magneto-optical disks.
  • Volatile media includes dynamic memory, such as main memory.
  • the term “medium” shall accordingly be taken to include, but not be limited to, solid-state memories and a computer product embodied in optical and magnetic media.
  • any one of the terms “comprising,” “comprised of,” or “which comprises” is an open term that means including at least the elements/features that follow, but not excluding others.
  • the term comprising, when used in the claims should not be interpreted as being limitative to the means or elements or steps listed thereafter.
  • the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B.
  • Any one of the terms “including,” “which includes,” or “that includes” as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, “including” is synonymous with and means “comprising.”
  • “Coupled,” when used in the claims, should not be interpreted as being limitative to direct connections only.
  • the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other.
  • the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means.
  • Coupled may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
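To make the software-embodiment language above concrete, the following is a minimal, purely hypothetical Python sketch of what "a set of instructions that, when executed on one or more processors, cause an apparatus to implement a method" could look like for this disclosure's subject matter: scoring several simultaneous camera views and selecting the one that best frames the detected participants. Every name and the scoring heuristic here are invented for illustration; the patent does not prescribe this code.

```python
# Hypothetical sketch only: illustrates "a set of instructions encoded on a
# medium, executed by one or more processors" for multi-camera view selection.
# All identifiers (CameraView, framing_score, select_best_view) are invented.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CameraView:
    camera_id: int
    faces: List[Tuple[int, int, int, int]]  # (x, y, w, h) boxes from a face detector

def framing_score(view: CameraView, frame_width: int) -> float:
    """Score a view: prefer views that see more people, roughly centered."""
    if not view.faces:
        return 0.0
    centers = [x + w / 2.0 for (x, y, w, h) in view.faces]
    # Penalize views whose detected people sit far from the horizontal center.
    mean_offset = abs(sum(centers) / len(centers) - frame_width / 2.0)
    return len(view.faces) - mean_offset / frame_width

def select_best_view(views: List[CameraView], frame_width: int = 1920) -> int:
    """Return the camera_id of the view with the highest framing score."""
    best = max(views, key=lambda v: framing_score(v, frame_width))
    return best.camera_id

if __name__ == "__main__":
    views = [
        CameraView(camera_id=0, faces=[(100, 200, 80, 80)]),
        CameraView(camera_id=1, faces=[(800, 220, 90, 90), (1000, 240, 85, 85)]),
    ]
    print(select_best_view(views))  # -> 1: that view frames more participants
```

Encoded on any of the media discussed above (solid-state memory, optical or magnetic disk) and executed by one or more processors, instructions of this general shape would constitute one software embodiment of a view-selection method; the heuristic shown (more faces, better centered) is only one of many possible scoring choices.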

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)
  • Studio Devices (AREA)
PCT/US2009/064061 2008-11-20 2009-11-11 Multiple video camera processing for teleconferencing WO2010059481A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN200980155006.3A CN102282847B (zh) 2008-11-20 2009-11-11 Multiple video camera processing for teleconferencing
EP09752672.7A EP2368364B1 (en) 2008-11-20 2009-11-11 Multiple video camera processing for teleconferencing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/275,119 2008-11-20
US12/275,119 US8358328B2 (en) 2008-11-20 2008-11-20 Multiple video camera processing for teleconferencing

Publications (1)

Publication Number Publication Date
WO2010059481A1 (en) 2010-05-27

Family

ID=41647043

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/064061 WO2010059481A1 (en) 2008-11-20 2009-11-11 Multiple video camera processing for teleconferencing

Country Status (4)

Country Link
US (1) US8358328B2 (zh)
EP (1) EP2368364B1 (zh)
CN (1) CN102282847B (zh)
WO (1) WO2010059481A1 (zh)


Families Citing this family (123)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101496387B (zh) 2006-03-06 2012-09-05 Cisco Technology, Inc. System and method for access authentication in a mobile wireless network
US8570373B2 (en) * 2007-06-08 2013-10-29 Cisco Technology, Inc. Tracking an object utilizing location information associated with a wireless device
US8797377B2 (en) 2008-02-14 2014-08-05 Cisco Technology, Inc. Method and system for videoconference configuration
US8355041B2 (en) * 2008-02-14 2013-01-15 Cisco Technology, Inc. Telepresence system for 360 degree video conferencing
US8319819B2 (en) 2008-03-26 2012-11-27 Cisco Technology, Inc. Virtual round-table videoconference
US8390667B2 (en) 2008-04-15 2013-03-05 Cisco Technology, Inc. Pop-up PIP for people not in picture
KR20100070146A (ko) * 2008-12-17 2010-06-25 Samsung Electronics Co., Ltd. Display method, and photographing apparatus and display apparatus using the same
US8659637B2 (en) 2009-03-09 2014-02-25 Cisco Technology, Inc. System and method for providing three dimensional video conferencing in a network environment
US20110169832A1 (en) * 2010-01-11 2011-07-14 Roy-G-Biv Corporation 3D Motion Interface Systems and Methods
USD628968S1 (en) 2010-03-21 2010-12-14 Cisco Technology, Inc. Free-standing video unit
USD626102S1 (en) 2010-03-21 2010-10-26 Cisco Technology, Inc. Video unit with integrated features
USD628175S1 (en) 2010-03-21 2010-11-30 Cisco Technology, Inc. Mounted video unit
USD626103S1 (en) 2010-03-21 2010-10-26 Cisco Technology, Inc. Video unit with integrated features
US9723260B2 (en) 2010-05-18 2017-08-01 Polycom, Inc. Voice tracking camera with speaker identification
US8395653B2 (en) * 2010-05-18 2013-03-12 Polycom, Inc. Videoconferencing endpoint having multiple voice-tracking cameras
US8248448B2 (en) 2010-05-18 2012-08-21 Polycom, Inc. Automatic camera framing for videoconferencing
US8842161B2 (en) 2010-05-18 2014-09-23 Polycom, Inc. Videoconferencing system having adjunct camera for auto-framing and tracking
US8599934B2 (en) 2010-09-08 2013-12-03 Cisco Technology, Inc. System and method for skip coding during video conferencing in a network environment
US9628755B2 (en) 2010-10-14 2017-04-18 Microsoft Technology Licensing, Llc Automatically tracking user movement in a video chat application
US8699457B2 (en) 2010-11-03 2014-04-15 Cisco Technology, Inc. System and method for managing flows in a mobile network environment
US8730297B2 (en) 2010-11-15 2014-05-20 Cisco Technology, Inc. System and method for providing camera functions in a video environment
US8542264B2 (en) 2010-11-18 2013-09-24 Cisco Technology, Inc. System and method for managing optics in a video environment
US8723914B2 (en) 2010-11-19 2014-05-13 Cisco Technology, Inc. System and method for providing enhanced video processing in a network environment
USD682864S1 (en) 2010-12-16 2013-05-21 Cisco Technology, Inc. Display screen with graphical user interface
USD682854S1 (en) 2010-12-16 2013-05-21 Cisco Technology, Inc. Display screen for graphical user interface
USD678307S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD678308S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD678894S1 (en) 2010-12-16 2013-03-26 Cisco Technology, Inc. Display screen with graphical user interface
USD682294S1 (en) 2010-12-16 2013-05-14 Cisco Technology, Inc. Display screen with graphical user interface
USD678320S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD682293S1 (en) 2010-12-16 2013-05-14 Cisco Technology, Inc. Display screen with graphical user interface
US8537195B2 (en) * 2011-02-09 2013-09-17 Polycom, Inc. Automatic video layouts for multi-stream multi-site telepresence conferencing system
US20120206568A1 (en) * 2011-02-10 2012-08-16 Google Inc. Computing device having multiple image capture devices and image modes
US20120259638A1 (en) * 2011-04-08 2012-10-11 Sony Computer Entertainment Inc. Apparatus and method for determining relevance of input speech
US8670019B2 (en) 2011-04-28 2014-03-11 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
US8786631B1 (en) 2011-04-30 2014-07-22 Cisco Technology, Inc. System and method for transferring transparency information in a video environment
US8705812B2 (en) * 2011-06-10 2014-04-22 Amazon Technologies, Inc. Enhanced face recognition in video
US8872878B2 (en) * 2011-07-20 2014-10-28 Cisco Technology, Inc. Adaptation of video for use with different number of cameras and displays at endpoints
US9288331B2 (en) 2011-08-16 2016-03-15 Cisco Technology, Inc. System and method for muting audio associated with a source
US10048933B2 (en) * 2011-11-30 2018-08-14 Nokia Technologies Oy Apparatus and method for audio reactive UI information and display
US8682087B2 (en) 2011-12-19 2014-03-25 Cisco Technology, Inc. System and method for depth-guided image filtering in a video conference environment
US9386276B2 (en) * 2012-03-23 2016-07-05 Polycom, Inc. Method and system for determining reference points in video image frames
US20130321564A1 (en) * 2012-05-31 2013-12-05 Microsoft Corporation Perspective-correct communication window with motion parallax
US9007465B1 (en) 2012-08-31 2015-04-14 VCE Company, LLC Obtaining customer support for electronic system using first and second cameras
CN102843542B (zh) * 2012-09-07 2015-12-02 Huawei Technologies Co., Ltd. Media negotiation method, device, and system for a multi-stream conference
US9076028B2 (en) * 2012-10-08 2015-07-07 Citrix Systems, Inc. Facial recognition and transmission of facial images in a videoconference
US9154731B2 (en) * 2012-11-16 2015-10-06 Citrix Systems, Inc. Systems and methods for modifying an image in a video feed
US8957940B2 (en) * 2013-03-11 2015-02-17 Cisco Technology, Inc. Utilizing a smart camera system for immersive telepresence
US20140320592A1 (en) * 2013-04-30 2014-10-30 Microsoft Corporation Virtual Video Camera
JP6201440B2 (ja) * 2013-06-11 2017-09-27 Ricoh Company, Ltd. Arrangement calculation method and program
CN104283857A (zh) * 2013-07-08 2015-01-14 Huawei Technologies Co., Ltd. Method, apparatus, and system for establishing a multimedia conference
US9363476B2 (en) 2013-09-20 2016-06-07 Microsoft Technology Licensing, Llc Configuration of a touch screen display with conferencing
US20150085060A1 (en) 2013-09-20 2015-03-26 Microsoft Corporation User experience for conferencing with a touch screen display
US20150146078A1 (en) * 2013-11-27 2015-05-28 Cisco Technology, Inc. Shift camera focus based on speaker position
US10325591B1 (en) * 2014-09-05 2019-06-18 Amazon Technologies, Inc. Identifying and suppressing interfering audio content
US11099465B2 (en) 2014-09-25 2021-08-24 Steve H. McNelley Communication stage and display systems
US9819907B2 (en) * 2014-09-25 2017-11-14 Steve H. McNelley Communication stage and related systems
US11750772B2 (en) 2014-09-25 2023-09-05 Steve H. McNelley Rear illuminated transparent communication terminals
US10129506B2 (en) 2014-09-25 2018-11-13 Steve H. McNelley Advanced transparent projection communication terminals
US9930290B2 (en) * 2014-09-25 2018-03-27 Steve H. McNelley Communication stage and integrated systems
US11258983B2 (en) 2014-09-25 2022-02-22 Steve H. McNelley Immersive communication terminals
US10298877B2 (en) 2014-09-25 2019-05-21 Steve H. McNelley Communication stage and display systems
US9848169B2 (en) 2014-09-25 2017-12-19 Steve H. McNelley Transparent projection communication terminals
US10841535B2 (en) 2014-09-25 2020-11-17 Steve H. McNelley Configured transparent communication terminals
CN104469320A (zh) * 2014-12-22 2015-03-25 龚文基 Single-operator multi-camera miniature television broadcast system
US9270941B1 (en) * 2015-03-16 2016-02-23 Logitech Europe S.A. Smart video conferencing system
WO2016159938A1 (en) * 2015-03-27 2016-10-06 Hewlett-Packard Development Company, L.P. Locating individuals using microphone arrays and voice pattern matching
CA3239163A1 (en) * 2015-04-01 2016-10-06 Owl Labs, Inc. Compositing and scaling angularly separated sub-scenes
JP6528574B2 (ja) 2015-07-14 2019-06-12 Ricoh Company, Ltd. Information processing apparatus, information processing method, and information processing program
JP2017028375A (ja) 2015-07-16 2017-02-02 Ricoh Company, Ltd. Video processing apparatus and program
JP2017028633A (ja) 2015-07-27 2017-02-02 Ricoh Company, Ltd. Video distribution terminal, program, and video distribution method
EP3335418A1 (en) * 2015-08-14 2018-06-20 PCMS Holdings, Inc. System and method for augmented reality multi-view telepresence
US9769419B2 (en) * 2015-09-30 2017-09-19 Cisco Technology, Inc. Camera system for video conference endpoints
US10397546B2 (en) 2015-09-30 2019-08-27 Microsoft Technology Licensing, Llc Range imaging
US9930270B2 (en) 2015-10-15 2018-03-27 Microsoft Technology Licensing, Llc Methods and apparatuses for controlling video content displayed to a viewer
US9888174B2 (en) 2015-10-15 2018-02-06 Microsoft Technology Licensing, Llc Omnidirectional camera with movement detection
US10277858B2 (en) * 2015-10-29 2019-04-30 Microsoft Technology Licensing, Llc Tracking object of interest in an omnidirectional video
US10523923B2 (en) * 2015-12-28 2019-12-31 Microsoft Technology Licensing, Llc Synchronizing active illumination cameras
US10462452B2 (en) 2016-03-16 2019-10-29 Microsoft Technology Licensing, Llc Synchronizing active illumination cameras
US10762712B2 (en) 2016-04-01 2020-09-01 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
USD838129S1 (en) 2016-04-15 2019-01-15 Steelcase Inc. Worksurface for a conference table
US10219614B2 (en) 2016-04-15 2019-03-05 Steelcase Inc. Reconfigurable conference table
US10887628B1 (en) * 2016-04-27 2021-01-05 United Services Automobile Association (USAA) Systems and methods for adaptive livestreaming
US9549153B1 (en) 2016-05-26 2017-01-17 Logitech Europe, S.A. Method and apparatus for facilitating setup, discovery of capabilites and interaction of electronic devices
US10637933B2 (en) 2016-05-26 2020-04-28 Logitech Europe S.A. Method and apparatus for transferring information between electronic devices
US9798933B1 (en) 2016-12-12 2017-10-24 Logitech Europe, S.A. Video conferencing system and related methods
US10762653B2 (en) * 2016-12-27 2020-09-01 Canon Kabushiki Kaisha Generation apparatus of virtual viewpoint image, generation method, and storage medium
US10115396B2 (en) 2017-01-03 2018-10-30 Logitech Europe, S.A. Content streaming system
US9942518B1 (en) 2017-02-28 2018-04-10 Cisco Technology, Inc. Group and conversational framing for speaker tracking in a video conference system
US10231051B2 (en) * 2017-04-17 2019-03-12 International Business Machines Corporation Integration of a smartphone and smart conference system
US10433051B2 (en) * 2017-05-29 2019-10-01 Staton Techiya, LLC Method and system to determine a sound source direction using small microphone arrays
WO2018226508A1 (en) 2017-06-09 2018-12-13 Pcms Holdings, Inc. Spatially faithful telepresence supporting varying geometries and moving users
CN107277459A (zh) * 2017-07-29 2017-10-20 安徽博威康信息技术有限公司 Camera picture switching method based on human-body feature recognition and target tracking
CN109413359B (zh) * 2017-08-16 2020-07-28 Huawei Technologies Co., Ltd. Camera tracking method, apparatus, and device
US10356362B1 (en) * 2018-01-16 2019-07-16 Google Llc Controlling focus of audio signals on speaker during videoconference
CN108391057B (zh) * 2018-04-04 2020-10-16 深圳市冠旭电子股份有限公司 Camera shooting control method and apparatus, smart device, and computer storage medium
US10516852B2 (en) * 2018-05-16 2019-12-24 Cisco Technology, Inc. Multiple simultaneous framing alternatives using speaker tracking
US10951859B2 (en) * 2018-05-30 2021-03-16 Microsoft Technology Licensing, Llc Videoconferencing device and method
GB201809960D0 (en) 2018-06-18 2018-08-01 Eyecon As Video conferencing system
US10642573B2 (en) 2018-07-20 2020-05-05 Logitech Europe S.A. Content streaming apparatus and method
JP7204421B2 (ja) * 2018-10-25 2023-01-16 Canon Kabushiki Kaisha Detection apparatus and control method therefor
CN113016002A (zh) * 2018-11-23 2021-06-22 Polycom, Inc. Selective distortion or deformation correction in images from a camera with a wide-angle lens
US11258982B2 (en) 2019-08-16 2022-02-22 Logitech Europe S.A. Video conference system
US11095467B2 (en) 2019-08-16 2021-08-17 Logitech Europe S.A. Video conference system
US11088861B2 (en) 2019-08-16 2021-08-10 Logitech Europe S.A. Video conference system
US11038704B2 (en) * 2019-08-16 2021-06-15 Logitech Europe S.A. Video conference system
US20240077941A1 (en) * 2019-11-15 2024-03-07 Sony Group Corporation Information processing system, information processing method, and program
US10904446B1 (en) 2020-03-30 2021-01-26 Logitech Europe S.A. Advanced video conferencing systems and methods
US10965908B1 (en) 2020-03-30 2021-03-30 Logitech Europe S.A. Advanced video conferencing systems and methods
US10972655B1 (en) 2020-03-30 2021-04-06 Logitech Europe S.A. Advanced video conferencing systems and methods
US10951858B1 (en) 2020-03-30 2021-03-16 Logitech Europe S.A. Advanced video conferencing systems and methods
US11729342B2 (en) 2020-08-04 2023-08-15 Owl Labs Inc. Designated view within a multi-view composited webcam signal
US11562638B2 (en) 2020-08-24 2023-01-24 Logitech Europe S.A. Electronic system and method for improving human interaction and activities
WO2022046810A2 (en) * 2020-08-24 2022-03-03 Owl Labs Inc. Merging webcam signals from multiple cameras
US11418559B2 (en) 2020-09-21 2022-08-16 Logitech Europe S.A. Content distribution system
US11445457B2 (en) 2020-09-21 2022-09-13 Logitech Europe S.A. Content distribution system
US10979672B1 (en) 2020-10-20 2021-04-13 Katmai Tech Holdings LLC Web-based videoconference virtual environment with navigable avatars, and applications thereof
US11350029B1 (en) 2021-03-29 2022-05-31 Logitech Europe S.A. Apparatus and method of detecting and displaying video conferencing groups
US12068872B2 (en) 2021-04-28 2024-08-20 Zoom Video Communications, Inc. Conference gallery view intelligence system
US11736660B2 (en) * 2021-04-28 2023-08-22 Zoom Video Communications, Inc. Conference gallery view intelligence system
US11843898B2 (en) 2021-09-10 2023-12-12 Zoom Video Communications, Inc. User interface tile arrangement based on relative locations of conference participants
US11882383B2 (en) 2022-01-26 2024-01-23 Zoom Video Communications, Inc. Multi-camera video stream selection for in-person conference participants
US20240257553A1 (en) * 2023-01-27 2024-08-01 Huddly AS Systems and methods for correlating individuals across outputs of a multi-camera system and framing interactions between meeting participants


Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3541339B2 (ja) * 1997-06-26 2004-07-07 Fujitsu Limited Microphone array apparatus
US6704048B1 (en) 1998-08-27 2004-03-09 Polycom, Inc. Adaptive electronic zoom control
US6614474B1 (en) 1998-08-27 2003-09-02 Polycom, Inc. Electronic pan tilt zoom video camera with adaptive edge sharpening filter
JP2002007294A (ja) 2000-06-22 2002-01-11 Canon Inc Image distribution system and method, and storage medium
US6577333B2 (en) * 2000-12-12 2003-06-10 Intel Corporation Automatic multi-camera video composition
US20020140804A1 (en) 2001-03-30 2002-10-03 Koninklijke Philips Electronics N.V. Method and apparatus for audio/image speaker detection and locator
US6583808B2 (en) 2001-10-04 2003-06-24 National Research Council Of Canada Method and system for stereo videoconferencing
US6611281B2 (en) * 2001-11-13 2003-08-26 Koninklijke Philips Electronics N.V. System and method for providing an awareness of remote people in the room during a videoconference
US20040008423A1 (en) * 2002-01-28 2004-01-15 Driscoll Edward C. Visual teleconferencing apparatus
JP2003345379A (ja) * 2002-03-20 2003-12-03 Japan Science & Technology Corp Audio-video conversion apparatus and method, and audio-video conversion program
EP1589758A1 (en) 2004-04-22 2005-10-26 Alcatel Video conference system and method
EP1613082A1 (en) * 2004-06-30 2006-01-04 Sony Ericsson Mobile Communications AB Face image correction
US7864210B2 (en) * 2005-11-18 2011-01-04 International Business Machines Corporation System and methods for video conferencing
US8223186B2 (en) * 2006-05-31 2012-07-17 Hewlett-Packard Development Company, L.P. User interface for a video teleconference
JP2008259000A (ja) * 2007-04-06 2008-10-23 Sony Corp Video conference apparatus, control method, and program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994016517A1 (en) * 1993-01-12 1994-07-21 Bell Communications Research, Inc. Sound localization system for teleconferencing using self-steering microphone arrays
US20020149672A1 (en) * 2001-04-13 2002-10-17 Clapp Craig S.K. Modular video conferencing system
US20040263636A1 (en) * 2003-06-26 2004-12-30 Microsoft Corporation System and method for distributed meetings
WO2008101117A1 (en) * 2007-02-14 2008-08-21 Teliris, Inc. Telepresence conference room layout, dynamic scenario manager, diagnostics and control system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEI-CHAO WEN ET AL.: "Toward a compelling sensation of telepresence: demonstrating a portal to a distant (static) office", Proceedings Visualization 2000 (VIS 2000), Salt Lake City, UT, 8-13 October 2000, IEEE Computer Society, Los Alamitos, CA, pages 327-333, XP031172708, ISBN: 978-0-7803-6478-3 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8694658B2 (en) 2008-09-19 2014-04-08 Cisco Technology, Inc. System and method for enabling communication sessions in a network environment
US8477175B2 (en) 2009-03-09 2013-07-02 Cisco Technology, Inc. System and method for providing three dimensional imaging in a network environment
US9204096B2 (en) 2009-05-29 2015-12-01 Cisco Technology, Inc. System and method for extending communications between participants in a conferencing environment
US9082297B2 (en) 2009-08-11 2015-07-14 Cisco Technology, Inc. System and method for verifying parameters in an audiovisual environment
US9225916B2 (en) 2010-03-18 2015-12-29 Cisco Technology, Inc. System and method for enhancing video images in a conferencing environment
US9313452B2 (en) 2010-05-17 2016-04-12 Cisco Technology, Inc. System and method for providing retracting optics in a video conferencing environment
US8896655B2 (en) 2010-08-31 2014-11-25 Cisco Technology, Inc. System and method for providing depth adaptive video conferencing
US9331948B2 (en) 2010-10-26 2016-05-03 Cisco Technology, Inc. System and method for provisioning flows in a mobile network environment
US9055332B2 (en) 2010-10-26 2015-06-09 Google Inc. Lip synchronization in a video conference
US8902244B2 (en) 2010-11-15 2014-12-02 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US9338394B2 (en) 2010-11-15 2016-05-10 Cisco Technology, Inc. System and method for providing enhanced audio in a video environment
US9143725B2 (en) 2010-11-15 2015-09-22 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US9111138B2 (en) 2010-11-30 2015-08-18 Cisco Technology, Inc. System and method for gesture interface control
US8692862B2 (en) 2011-02-28 2014-04-08 Cisco Technology, Inc. System and method for selection of video data in a video conference environment
US8934026B2 (en) 2011-05-12 2015-01-13 Cisco Technology, Inc. System and method for video coding in a dynamic environment
US9210302B1 (en) 2011-08-10 2015-12-08 Google Inc. System, method and apparatus for multipoint video transmission
US8947493B2 (en) 2011-11-16 2015-02-03 Cisco Technology, Inc. System and method for alerting a participant in a video conference
US8917309B1 (en) 2012-03-08 2014-12-23 Google, Inc. Key frame distribution in video conferencing
US9386273B1 (en) 2012-06-27 2016-07-05 Google Inc. Video multicast engine
US9681154B2 (en) 2012-12-06 2017-06-13 Patent Capital Group System and method for depth-guided filtering in a video conference environment
US9699410B1 (en) 2016-10-28 2017-07-04 Wipro Limited Method and system for dynamic layout generation in video conferencing system
US11438549B2 (en) 2018-11-22 2022-09-06 Poly, Inc. Joint use of face, motion, and upper-body detection in group framing
EP3758368A1 (en) 2019-06-28 2020-12-30 Pexip AS Intelligent adaptive and corrective layout composition
US10972702B2 (en) 2019-06-28 2021-04-06 Pexip AS Intelligent adaptive and corrective layout composition
WO2023064153A1 (en) * 2021-10-15 2023-04-20 Cisco Technology, Inc. Dynamic video layout design during online meetings
US12069396B2 (en) 2021-10-15 2024-08-20 Cisco Technology, Inc. Dynamic video layout design during online meetings

Also Published As

Publication number Publication date
EP2368364B1 (en) 2017-01-18
CN102282847B (zh) 2014-10-15
US8358328B2 (en) 2013-01-22
US20100123770A1 (en) 2010-05-20
EP2368364A1 (en) 2011-09-28
CN102282847A (zh) 2011-12-14

Similar Documents

Publication Publication Date Title
EP2368364B1 (en) Multiple video camera processing for teleconferencing
US10171771B2 (en) Camera system for video conference endpoints
US11695900B2 (en) System and method of dynamic, natural camera transitions in an electronic camera
US8773498B2 (en) Background compression and resolution enhancement technique for video telephony and video conferencing
JP5638997B2 (ja) Method and system for adapting CP layout according to interaction between conference attendees
US9426419B2 (en) Two-way video conferencing system
US8508576B2 (en) Remote presenting system, device, and method
US8860775B2 (en) Remote presenting system, device, and method
EP2352290B1 (en) Method and apparatus for matching audio and video signals during a videoconference
US11076127B1 (en) System and method for automatically framing conversations in a meeting or a video conference
US9143727B2 (en) Dual-axis image equalization in video conferencing
WO2010130084A1 (zh) Telepresence system, method, and video capture device
US20210271911A1 (en) Differentiating a rendered conference participant from a genuine conference participant
US11477393B2 (en) Detecting and tracking a subject of interest in a teleconference
US11496675B2 (en) Region of interest based adjustment of camera parameters in a teleconferencing environment
JP6004978B2 (ja) Subject image extraction apparatus and subject image extraction/composition apparatus
CN113632458A (zh) Systems, algorithms, and designs for wide-angle camera perspective experiences
WO2023150078A1 (en) Enhancing remote visual interaction

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200980155006.3

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09752672

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2009752672

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2009752672

Country of ref document: EP