US20150365628A1 - System and method for video conferencing - Google Patents
- Publication number
- US20150365628A1 (application US14/763,840)
- Authority
- US
- United States
- Prior art keywords
- data
- stream
- individual
- remote user
- orientation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N13/20—Stereoscopic video systems; Multi-view video systems; Image signal generators
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/013—Eye tracking input arrangements
- G06K9/60; H04N13/02; H04N13/04; H04N13/0477
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/376—Image reproducers using viewer tracking for tracking left-right translational head movements, i.e. lateral movements
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/142—Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
- H04N7/144—Camera and display on the same optical axis, e.g. optically multiplexing the camera and display for eye to eye contact
- H04N7/15—Conference systems
Definitions
- The present invention, in some embodiments thereof, relates to the processing and displaying of image data and, more particularly, but not exclusively, to a system and method for video conferencing.
- Videoconferencing has become a useful form of communication between parties for remotely conducting various forms of business, corporate meetings, training, and the like without traveling for face-to-face meetings.
- Videophone service, or videotelephony, enables individuals to communicate vocally and visually using special equipment.
- Inexpensive video cameras and microphones have been developed for interfacing to personal computers to enable vocal and visual communication over the Internet.
- Web cameras or webcams can be attached to liquid crystal display (LCD) monitors, directed at the user, and interfaced to the computer for acquiring live images of the user.
- a microphone mounted on a desk or the monitor is connected to a microphone input of the computer to receive the user's voice input.
- Many portable computer devices such as laptops, notebooks, netbooks, tablet computers, pad computers, and the like are provided with built-in video cameras and microphones.
- Most cellular or cell phones also have cameras capable of recording still or moving images. Such cameras and microphones allow computer users to engage in an informal kind of vocal and visual communication over the Internet, which is sometimes referred to as “video chatting”.
- a system for video conferencing comprising a data processor configured for: receiving from a remote location a stream of imagery data of a remote user; displaying an image of the remote user on a display device; receiving a stream of imagery data of an individual in a local scene in front of the display device; extracting a gaze direction and/or the head orientation of the individual; and varying a view of the image responsively to the gaze direction and/or the head orientation.
- the system comprises the display device, and an imaging system constituted to receive a view of the local scene and transmit a stream of imagery data of the local scene to the data processor.
- the imaging system is configured for capturing a video stream and range data.
- the imaging system is configured for capturing stereoscopic image data.
- the data processor is configured for calculating range data from the stereoscopic image data.
- the system comprises a light source configured for projecting a light beam onto the individual, wherein the data processor is configured for calculating range data by a time-of-flight technique.
- the system comprises a light source configured for projecting a light pattern onto the individual, wherein the data processor system is configured for calculating range data by a triangulation technique, such as, but not limited to, a structured light technique.
- the stream of imagery data of the remote user comprises a video stream and range data
- the data processor is configured for reconstructing a three-dimensional image from the stream of imagery data and displaying the three-dimensional image on the display device.
- the variation of the view comprises varying a displayed orientation of the remote user such that a rate of change of the orientation matches a rate of change of the gaze direction and/or head orientation.
- the variation of the view comprises varying a displayed orientation of the remote user oppositely to a change of the gaze direction and/or the head orientation.
- the variation of the view comprises maintaining a fixed orientation of the remote user relative to the gaze direction.
- the system comprises at least one infrared light source constituted for illuminating at least a portion of the local scene, wherein the stream of imagery data of the individual comprises image data received from the individual in the visible range and in the infrared range.
- a system for at least two-way video conferencing between at least a first party at a first location and a second party at a second location comprises a first system at the first location and a second system at the second location, each of the first and the second systems being the system as delineated above and optionally as further detailed hereinunder.
- a method of video conferencing comprises: receiving from a remote location a stream of imagery data of a remote user; displaying an image of the remote user on a display device; using an imaging system for capturing a stream of imagery data of an individual in a local scene in front of the display device; and using a data processor for extracting a gaze direction and/or the head orientation of the individual, and varying a view of the image responsively to the gaze direction and/or the head orientation.
- the capturing of the stream of imagery data comprises capturing a video stream and range data.
- the capturing of stream of imagery data comprises capturing stereoscopic image data.
- the method comprises calculating range data from the stereoscopic image data.
- the method comprises projecting a light beam onto the individual, and calculating the range data by a time-of-flight technique.
- the method comprises projecting a light pattern onto the individual, and calculating the range data by a triangulation technique, such as, but not limited to, a structured light technique.
- the stream of imagery data of the remote user comprises a video stream and range data
- the method comprises reconstructing a three-dimensional image from the stream of imagery data and displaying the three-dimensional image on the display device.
- the method comprises varying a displayed orientation of the remote user such that a rate of change of the orientation matches a rate of change of the gaze direction and/or the head orientation.
- the method comprises varying a displayed orientation of the remote user oppositely to a change of the gaze direction and/or the head orientation.
- the method comprises maintaining a fixed orientation of the remote user relative to the gaze direction.
- the method comprises illuminating at least a portion of the local scene by infrared light, wherein the capturing of the stream of imagery data of the individual comprises imaging the individual in the visible range and in the infrared range.
- the computer software product comprises a computer-readable medium in which program instructions are stored, which instructions, when read by a data processor, cause the data processor to receive a video stream of a remote user and a video stream of an individual in a local scene, and to execute the method as delineated above and optionally as further detailed hereinunder.
- Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software, by firmware, or by a combination thereof using an operating system.
- hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit.
- selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system.
- one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions.
- the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
- a network connection is provided as well.
- a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
- FIG. 1 is a schematic illustration of a system for video conferencing, according to some embodiments of the present invention.
- FIG. 2 is a flowchart diagram of a method suitable for video conferencing, according to some embodiments of the present invention.
- The present invention, in some embodiments thereof, relates to the processing and displaying of image data and, more particularly, but not exclusively, to a system and method for video conferencing.
- FIG. 1 illustrates a system 10 for video conferencing, according to some embodiments of the present invention.
- System 10 comprises a data processor 12 configured for receiving from a remote location (not shown) a stream of imagery data of a remote user in a remote scene.
- the stream (referred to below as the remote stream) can be broadcasted over a communication network 18 (e.g., a local area network, or a wide area network such as the Internet or a cellular network), and data processor 12 can be configured to communicate with network 18 , and receive the remote stream therefrom.
- data processor 12 also receives from the remote location an audio stream (referred to below as the remote audio stream) accompanying the imagery data stream.
- the term “imagery data” refers to a plurality of values that represent a two- or three-dimensional image, and that can therefore be used to reconstruct a two- or three-dimensional image.
- the imagery data comprise values (e.g., grey-levels, intensities, color intensities, etc.), each corresponding to a picture-element (e.g., a pixel, a sub-pixel or a group of pixels) in the image.
- the imagery data also comprise range data as further detailed hereinbelow.
- the terms “stream of imagery data” and “imagery data stream” refer to time-dependent imagery data, wherein the plurality of values varies with time.
- the stream of imagery data comprises a video stream which may include a plurality of time-dependent values (e.g., grey-levels, intensities, color intensities, etc.), wherein a particular value at a particular time-point corresponds to a picture-element (e.g., a pixel, a sub-pixel or a group of pixels) in a video frame.
- the stream of imagery data also comprises time-dependent range data.
- the remote imagery data stream comprises a video stream, referred to herein as the remote video stream.
- Data processor 12 is preferably configured for decoding the remote video stream such that it can be interpreted and properly displayed.
- Data processor 12 is preferably configured to support a number of known codecs, such as MPEG-2, MPEG-4, H.264, H.263+, or other codecs.
- Data processor 12 uses the remote video stream for displaying a video image 14 of the remote user on a display device 16 .
- system 10 also comprises display device 16 .
- Data processor 12 can decode the remote audio stream such that it can be interpreted and properly output. If the remote audio stream is compressed, data processor 12 preferably decompresses it. Data processor 12 is preferably configured to support a number of known audio codecs. Data processor 12 transmits the decoded remote audio stream to an audio output device 34 , such as, but not limited to, a speaker, a headset and the like. In various exemplary embodiments of the invention system 10 comprises audio output device 34 .
- Data processor 12 can be any man-made machine capable of executing an instruction set and/or performing calculations. Representative examples include, without limitation, a general purpose computer supplemented by dedicated software, a general purpose microprocessor supplemented by dedicated software, a general purpose microcontroller supplemented by dedicated software, a general purpose graphics processor supplemented by dedicated software, and/or a digital signal processor (DSP) supplemented by dedicated software. Data processor 12 can also comprise dedicated circuitry (e.g., a printed circuit board) and/or a programmable electronic chip into which dedicated software is burned.
- the stream of imagery data comprises range data corresponding to the video stream, wherein both the video stream and the range data are time dependent and synchronized, such that each video frame of the video stream has a corresponding set of range data.
- the range data describe topographical information of the remote user or remote scene and are optionally and preferably received in the form of a depth map.
- the term “depth map” refers to a representation of a scene (the remote scene, in the present example) as a two-dimensional matrix, in which each matrix element corresponds to a respective location in the scene and has a respective matrix element value indicative of the distance from a certain reference location to the respective scene location.
- the reference location is typically static and the same for all matrix elements.
- a depth map optionally and preferably has the form of an image in which the pixel values indicate depth information.
- an 8-bit grey-scale image can be used to represent depth information.
- the depth map can provide depth information on a per-pixel basis of the image data, but may also use a coarser granularity, such as a lower resolution depth map wherein each matrix element value provides depth information for a group of pixels of the image data.
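The depth-map representation just described can be sketched in a few lines. This is an illustrative sketch only; the function names, the 0.5–5 m clipping range, and the 4-pixel block size are assumptions, not taken from the patent:

```python
import numpy as np

def quantize_depth(depth_m, near=0.5, far=5.0):
    """Represent metric depth as an 8-bit grey-scale depth map.
    near/far are assumed clipping distances in meters."""
    d = np.clip(depth_m, near, far)
    return ((d - near) / (far - near) * 255.0).astype(np.uint8)

def coarsen(depth_map, block=4):
    """Coarser-granularity depth map: one (mean) value per block of pixels."""
    h, w = depth_map.shape
    trimmed = depth_map[:h - h % block, :w - w % block].astype(np.float64)
    return trimmed.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
```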
- the range data can also be provided in the form of a disparity map.
- a disparity map describes the apparent shift of objects, or parts of objects, in a scene (the remote scene, in the present example) when observed from two different viewpoints, such as the left-eye and the right-eye viewpoints.
- Disparity maps and depth maps are related and can be mapped onto one another provided the geometry of the respective viewpoints of the disparity map is known, as is commonly known to those skilled in the art.
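For a rectified stereo pair, the mapping between the two representations follows the standard pinhole relation Z = f·B/d. A minimal sketch, with illustrative parameter names (focal length in pixels, baseline in meters) that are assumptions rather than the patent's notation:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from disparity for a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Inverse mapping: d = f * B / Z."""
    return focal_px * baseline_m / depth_m
```

For example, with an assumed 1000-pixel focal length and a 10 cm baseline, a disparity of 100 pixels corresponds to a depth of 1 m.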
- data processor 12 reconstructs a three-dimensional image of the remote user from the imagery data and displays a view of the three-dimensional image 14 on display device 16 .
- the three-dimensional image comprises geometric properties of a non-planar surface which at least partially encloses a three-dimensional volume.
- the non-planar surface is a two-dimensional object embedded in a three-dimensional space.
- a non-planar surface is a metric space induced by a smooth connected and compact Riemannian 2-manifold.
- Ideally, the geometric properties of the non-planar surface would be provided explicitly, for example, the slope and curvature (or even other spatial derivatives or combinations thereof) for every point of the non-planar surface. Yet, such information is rarely attainable, and the spatial information of the three-dimensional image is provided for a sampled version of the non-planar surface, which is a set of points on the Riemannian 2-manifold and which is sufficient for describing the topology of the 2-manifold.
- the spatial information of the non-planar surface is a reduced version of a 3D spatial representation, which may be either a point-cloud or a 3D reconstruction (e.g., a polygonal mesh or a curvilinear mesh) based on the point cloud.
- the 3D image is expressed via a 3D coordinate system, such as, but not limited to, a Cartesian, spherical, ellipsoidal, parabolic, or paraboloidal coordinate system.
- a three-dimensional image of an object is typically a two-dimensional image which, in addition to indicating the lateral extent of object members, further indicates the relative or absolute distance of the object members, or portions thereof, from some reference point, such as the location of the imaging device.
- a three-dimensional image typically includes information residing on a non-planar surface of a three-dimensional body and not necessarily in the bulk. Yet, it is commonly acceptable to refer to such an image as “three-dimensional” because the non-planar surface is conveniently defined over a three-dimensional coordinate system.
- the term “three-dimensional image” thus primarily relates to surface entities.
- In order to improve the quality of the reconstructed three-dimensional image, additional occlusion information, known in the art as de-occlusion information, can be provided.
- (De-)occlusion information relates to image and/or depth information which can be used to represent views for additional viewpoints (e.g., other than those used for generating the disparity map).
- the occlusion information may also comprise information in the vicinity of occluded regions. The availability of occlusion information enables filling in of holes which occur when reconstructing the three-dimensional image from 2D+range data.
- Occlusion data can be generated by data processor 12 and/or the remote location.
- When the occlusion data are generated at the remote location, they are optionally and preferably also transmitted over communication network 18 and received by data processor 12 .
- When the occlusion data are generated by data processor 12 , they can be approximated using the range data, optionally and preferably using visible background for occluded areas. Receiving the occlusion data from the remote location is preferred from the standpoint of quality, and generating the occlusion data by data processor 12 is preferred from the standpoint of reducing the amount of transferred data.
- the reconstruction of the three-dimensional image using the image and range data, and optionally also the occlusion data, can be done using any procedure known in the art.
- suitable algorithms are found in Qingxiong Yang, “Spatial-Depth Super Resolution for Range Images,” IEEE Conference on Computer Vision and Pattern Recognition, 2007, pages 1-8; H. Hirschmuller, “Stereo Processing by Semiglobal Matching and Mutual Information,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(2):328-341; International Publication No. WO1999/006956; European Publication Nos. EP1612733 and EP2570079; U.S. Published Application Nos. 20120183238 and 20120306876; and U.S. Pat. Nos. 7,583,777 and 8,249,334, the contents of which are hereby incorporated by reference.
- data processor 12 also receives a stream of imagery data of an individual 20 in a local scene 22 in front of display device 16 .
- Local scene 22 may also include other objects (e.g., a table, a chair, walls, etc.) which are not shown for clarity of presentation.
- the imagery data stream of an individual 20 is referred to herein as the local imagery data stream.
- the imagery data stream is generated by an imaging system 24 constituted to capture a view of local scene 22 and transmit corresponding imagery data to data processor 12 .
- the local imagery data preferably comprises a video stream, referred to herein as the local video stream.
- Data processor 12 preferably encodes the video stream received from imaging system 24 in accordance with a known codec, such as MPEG-2, MPEG-4, H.264, H.263+, or other codecs suitable for video conferencing. In some embodiments of the present invention data processor 12 also compresses the encoded video for transmission. Data processor 12 preferably transmits the local video stream over communication network 18 to the remote location.
- system 10 comprises imaging system 24 .
- data processor 12 additionally receives an audio stream from individual 20 and/or scene 22 , which audio stream accompanies the video stream.
- the audio stream received from individual 20 and/or scene 22 is referred to herein as the local audio stream.
- the local audio stream can be captured by an audio pickup device 36 , such as, but not limited to, a microphone, a microphone array, a headset or the like, which transmits the audio stream to data processor 12 .
- data processor 12 preferably digitizes the audio stream.
- Data processor 12 is optionally and preferably configured for encoding and optionally compressing the audio data in accordance with a known codec and compression protocols suitable for video conferencing. Following audio processing, data processor 12 transmits the audio stream over communication network 18 to the remote location.
- system 10 comprises audio pickup device 36 .
- Imaging system 24 , display device 16 , audio output device 34 , audio pickup device 36 , and data processor 12 may be integrated into system 10 , or may be separate components that can be interchanged or replaced.
- imaging system 24 , display device 16 , audio output device 34 and audio pickup device 36 may be part of a laptop computer or dedicated video conferencing system.
- imaging system 24 can connect to data processor 12 via USB, FireWire, Bluetooth, Wi-Fi, or any other connection type.
- display device 16 may connect to data processor 12 using an appropriate connection mechanism, for example and without limitation, HDMI, DisplayPort, composite video, component video, S-Video, DVI, or VGA.
- Audio devices 34 and 36 may connect to data processor 12 via USB, an XLR connector, a ¼″ connector, a 3.5 mm connector, or the like.
- Imaging system 24 is optionally and preferably configured for capturing the video stream and range data.
- the range data can be of any type described above with respect to the range data received from the remote location.
- the range data also comprises occlusion data.
- Range data can be captured by imaging system 24 in more than one way.
- imaging system 24 is configured for capturing stereoscopic image data, for example, by capturing scene 22 from two or more different view points.
- imaging system 24 comprises a plurality of spaced apart imaging sensors. Shown in FIG. 1 are two imaging sensors 24 a and 24 b, but it is not intended to limit the scope of the present invention to a system with two imaging sensors.
- the present embodiments contemplate configurations with one imaging sensor or more than two (e.g., 3, 4, 5 or more) imaging sensors.
- data processor optionally and preferably calculates range data.
- the calculated range data can be in the form of a depth map and/or a disparity map, as further detailed hereinabove.
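One simple (and deliberately naive) way to obtain such a disparity map from two rectified views is sum-of-absolute-differences block matching along scanlines. The sketch below is not the patent's method, only an illustration of the principle; the window size and disparity search range are arbitrary assumptions:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Naive SAD block matching over rectified grey-scale images.
    For each left-image pixel, search leftward in the right image for
    the window with the lowest sum of absolute differences."""
    h, w = left.shape
    pad = win // 2
    L = np.pad(left.astype(np.float32), pad)
    R = np.pad(right.astype(np.float32), pad)
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cost = np.abs(patch - R[y:y + win, x - d:x - d + win]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Real systems use far more robust matchers (e.g., semiglobal matching, as in the Hirschmuller reference cited below), but the output has the same form: one disparity value per pixel.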
- data processor also calculates occlusion data.
- imaging system 24 captures scene 22 from three or more viewpoints so as to allow data processor 12 to calculate the occlusion data with higher precision.
- At least one of the imaging sensors of system 24 is configured to provide a field-of-view of scene 22 or a part thereof over a spectral range from infrared to visible light.
- at least one, and more preferably all, of the imaging sensors comprise a pixelated imager, such as, but not limited to, a CCD or CMOS matrix, which is devoid of an IR cut filter and which therefore generates a signal in response to light at any wavelength within the visible range and any wavelength within the IR range, more preferably the near IR range.
- a characteristic wavelength range detectable by the imaging sensors includes, without limitation, any wavelength from about 400 nm to about 1100 nm, or any wavelength from about 400 nm to about 1000 nm.
- the imaging devices also provide a signal in response to light in the ultraviolet (UV) range.
- the characteristic wavelength range detectable by the imaging devices can be from about 300 nm to about 1100 nm. Other characteristic wavelength ranges are not excluded from the scope of the present invention.
- the imaging sensors optionally and preferably provide partially overlapping fields-of-view of scene 22 .
- the overlap between the fields-of-view allows data processor 12 to combine the fields-of-view.
- the spacing between the imaging sensors is selected to allow data processor 12 to provide a three-dimensional reconstruction of individual 20 and optionally other objects in scene 22 .
- system 10 comprises one or more light sources 26 constituted for illuminating at least part of scene 22 .
- Shown in FIG. 1 is one light source, but it is not intended to limit the scope of the present invention to a system with one light source.
- the present embodiments contemplate configurations with two light sources or more than two (e.g., 3, 4, 5 or more) light sources.
- a reference to light sources in the plural form should be construed as a reference to one or more light sources.
- the light source can be of any type known in the art, including, without limitation, light emitting diode (LED) and a laser device.
- One or more of light sources 26 optionally and preferably emits infrared light.
- the infrared light sources generate infrared light at a wavelength detectable by the imaging sensors.
- While the infrared light sources are optionally and preferably positioned adjacent to the imaging sensors, this need not necessarily be the case, since, for some applications, it may not be necessary for the light sources to be adjacent to the imaging sensors.
- the light sources can provide infrared illumination in more than one way. In some embodiments of the present invention one or more of the light sources provide flood illumination, and in some embodiments of the present invention one or more of the light sources generates an infrared pattern.
- the term “flood illumination” refers to illumination which is spatially continuous over the area that it illuminates.
- Flood illumination is useful, for example, when scene 22 is at low ambient light conditions, and it is desired to increase the amount of light reflected back from the scene in the direction of the imaging device.
- the term “infrared pattern” refers to infrared illumination which is non-continuous and non-uniform in intensity over at least a central part of the area that it illuminates.
- An infrared pattern is useful, for example, when it is desired to add identifiable features to the scene.
- the pattern typically includes a plurality of distinct features at horizontal and vertical angular resolutions of from about 0.2° to about 2°, or from about 0.5° to about 1.5°, e.g., about 1°.
- horizontal and vertical angular views of 1° each typically correspond to about 20×20 pixels of the imager.
- the present embodiments also contemplate one or more light sources which illuminate the local scene with a pattern in the visible range.
- the pattern varies with time.
- a series of patterns can be projected, one pattern at a time, in a rapid and periodic manner.
- This can be done in any of a number of ways.
- a plate having a periodically varying transmission coefficient can be moved in front of an illuminating device.
- a disk having a circumferentially varying transmission coefficient can be rotated in front of the illuminating device.
- A strobing technique can be employed to rapidly project a series of stationary patterns, phase shifted with respect to each other.
- Also contemplated is the use of optical diffractive elements for forming the pattern.
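As an illustration of the phase-shifted-pattern variant, the classic three-step phase-shift decoding recovers the (wrapped) pattern phase at each pixel from three sinusoidal patterns shifted by 120° with respect to each other; the recovered phase encodes position within the pattern for later triangulation. This is a textbook technique offered as a sketch, not the patent's specific method:

```python
import numpy as np

def phase_from_three_shifts(i1, i2, i3):
    """Wrapped phase from three intensity images of sinusoidal patterns
    shifted by -120, 0, and +120 degrees:
        I_k = A + B * cos(phi + delta_k)
    The standard closed form is
        phi = atan2(sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```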
- one or more of the light sources are used to illuminate local scene 22 , particularly individual 20 , by visible, infrared or ultraviolet light so as to allow processor 12 or imaging system 24 to calculate the range data using a time-of-flight (TOF) technique.
- the respective light sources are configured for emitting light with intensity that varies with time.
- the TOF technique can be employed, for example, using a phase-shift technique or pulse technique.
- the amplitude of the emitted light is periodically modulated (e.g., by sinusoidal modulation) and the phase of the modulation at emission is compared to the phase of the modulation at reception.
- the modulation period is optionally and preferably on the order of twice the difference between the maximum measurement distance and the minimum measurement distance divided by the velocity of light, and the propagation time interval can be determined from the phase difference.
- With the pulse technique, light is emitted in discrete pulses without the requirement of periodicity. For each emitted pulse of light, the time elapsed for the reflection to return is measured, and the range is calculated as one-half the product of the round-trip time and the velocity of the signal.
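The two TOF variants described above can be illustrated by a minimal sketch. This is not the patented implementation; the function names and numerical inputs are assumptions, but the relations (range from modulation phase, range as half the round-trip distance, and the preferred modulation period) follow directly from the text.

```python
import math

# Illustrative sketch (not the patented implementation): range from
# time-of-flight using the phase-shift and pulse techniques above.
C = 299_792_458.0  # velocity of light, m/s

def range_from_phase(phase_shift_rad, modulation_freq_hz):
    """Phase-shift technique: the round-trip delay appears as a phase
    difference of the amplitude modulation between emission and reception."""
    round_trip = phase_shift_rad / (2.0 * math.pi * modulation_freq_hz)
    return 0.5 * C * round_trip

def range_from_pulse(round_trip_time_s):
    """Pulse technique: range is one-half the product of the round-trip
    time and the velocity of the signal."""
    return 0.5 * C * round_trip_time_s

def preferred_modulation_period(d_min_m, d_max_m):
    """Per the text above: period on the order of twice the measurement
    span divided by the velocity of light, keeping the phase unambiguous."""
    return 2.0 * (d_max_m - d_min_m) / C
```

For example, a pulse whose reflection returns after 20 ns corresponds to a range of about 3 m.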
- data processor 12 or imaging system 24 calculates the range data based on a triangulation technique, such as, but not limited to, a structured light technique.
- one or more of light sources 26 projects a light pattern in the visible, infrared or ultraviolet range onto scene 22 , particularly individual 20 .
- An image of the reflected light pattern is captured by an imaging sensor (e.g., imaging sensor 24 a ) and the range data are calculated by data processor or imaging system 24 based on the spacings and/or distortions associated with the reflected light pattern relative to the projected light pattern.
- the light pattern is invisible to the naked eye (e.g., in the infrared or ultraviolet range) so that when the remote user is presented with an image of individual 20 , the projected pattern does not obstruct or otherwise interfere with the presented image.
- patterns suitable for calculating range data using the structured light technique include, without limitation, a single dot, a single line, and a two-dimensional pattern (e.g., horizontal and vertical lines, checkerboard pattern, etc.).
- the light from light source 36 can scan individual 20 and the process is repeated for each of a plurality of projection directions.
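The triangulation step can be sketched as follows, assuming a pinhole camera and a projector offset by a known baseline. The geometry, names, and numbers are assumptions for illustration; the disclosure does not fix a specific formula.

```python
# Illustrative sketch (assumed pinhole geometry, not the patented method):
# depth by triangulation from the shift of a projected pattern feature
# between its reference and observed image positions.

def depth_from_pattern_shift(focal_px, baseline_m, shift_px):
    """A projected pattern feature observed `shift_px` pixels away from its
    reference (infinite-distance) image position lies at depth f*b/shift,
    the standard triangulation relation used in structured light."""
    if shift_px <= 0.0:
        raise ValueError("pattern feature shift must be positive")
    return focal_px * baseline_m / shift_px
```

With an assumed 600-pixel focal length and 7.5 cm baseline, a 30-pixel shift places the feature at 1.5 m.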
- data processor 12 extracts a gaze direction 28 of individual 20 , and varies the image 14 of the remote user on display device 16 responsively to the gaze direction.
- gaze direction refers to the direction of a predetermined identifiable point or a predetermined set of identifiable points on individual 20 relative to display device 16 or relative to imaging system 24 .
- gaze direction 28 is the direction of an eye or a nose of individual 20 relative to display device 16 .
- data processor 12 also extracts a head orientation of individual 20 , wherein data processor 12 varies the image 14 of the remote user on display device 16 responsively to the head orientation.
- image 14 is varied by rotating the view of image 14 such that the remote user is displayed as if viewed from a different perspective.
- the rotation can be about any axis of rotation, preferably, but not necessarily, an axis of rotation that is parallel to the plane of display device 16 . Rotation about an axis of rotation that is perpendicular to the plane of display device 16 is not excluded from the scope of the present invention.
- the orientation of image 14 is optionally and preferably varied by data processor 12 such that a rate of change of the orientation of image 14 matches a rate of change of gaze direction 28 and/or the head orientation.
- data processor 12 varies the displayed orientation of the remote user oppositely to the change of gaze direction 28 , and/or the head orientation. It was found by the present inventors that such synchronization between the change in the gaze direction 28 and/or the head orientation and the change in the displayed orientation of the remote user mimics a three-dimensional reality for individual 20 while looking at image 14 . Thus, while individual 20 moves his or her head relative to display device 16 , image 14 is rotated in the opposite direction, such that individual 20 is provided with a different view of the remote user, as if the remote user were physically present in scene 22 .
- motion path 32 a is horizontal from right to left
- motion path 32 b is vertical from a lower to a higher position
- rotation direction 30 a is anticlockwise with a rotation axis pointing downwards (negative yaw)
- rotation direction 30 b is clockwise with a rotation axis pointing rightwards (negative pitch).
- a motion of individual 20 along path 32 a is preferably accompanied by rotation of image 14 along rotation direction 30 a
- a motion of individual 20 along path 32 b is preferably accompanied by rotation of image 14 along rotation direction 30 b
- a motion of individual 20 along a path opposite to 32 a is preferably accompanied by rotation of image 14 along a direction opposite to rotation direction 30 a
- a motion of individual 20 along a path opposite to 32 b is preferably accompanied by rotation of image 14 along a direction opposite to rotation 30 b.
- data processor 12 varies the displayed orientation of the remote user so as to maintain a fixed orientation of the remote user relative to gaze direction 28 and/or the head orientation.
- the displayed orientation can be rotated such that the gaze of the remote user follows the location of individual 20 .
- a motion of individual 20 along path 32 a is preferably accompanied by rotation of image 14 along a rotation direction opposite to direction 30 a
- a motion of individual 20 along path 32 b is preferably accompanied by rotation of image 14 along a rotation direction opposite to direction 30 b
- a motion of individual 20 along a path opposite to 32 a is preferably accompanied by rotation of image 14 along rotation direction 30 a
- a motion of individual 20 along a path opposite to 32 b is preferably accompanied by rotation of image 14 along rotation direction 30 b.
- any motion path of individual 20 can be expressed as a linear combination of directions 32 a and 32 b.
- the ordinarily skilled person would know how to adjust the above description for the case of such a motion path.
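The two coupling modes described above can be sketched in a few lines. The names ("mimic", "follow") and the gain are assumptions; the signs follow the text: rotating the image oppositely to the gaze change mimics three-dimensional reality, while rotating it with the gaze change maintains a fixed orientation of the remote user relative to the gaze direction.

```python
# Illustrative sketch (names and gain are assumptions): coupling a change
# in gaze direction / head orientation to a rotation of the displayed image.
# "mimic":  opposite rotation, as if the remote user were physically present.
# "follow": same-sign rotation, keeping the remote user's orientation fixed
#           relative to the gaze direction.

def image_rotation_delta(d_yaw_deg, d_pitch_deg, mode="mimic", gain=1.0):
    sign = -1.0 if mode == "mimic" else 1.0
    # Rate matching: the image rotation rate tracks the gaze change rate.
    return (sign * gain * d_yaw_deg, sign * gain * d_pitch_deg)
```

A leftward head motion (positive yaw change) thus yields a negative image yaw in "mimic" mode, consistent with rotation directions 30a and 30b above.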
- the gaze direction and/or the head orientation can be extracted by processing the images in the local imagery data stream using any technique known in the art, either for a two-dimensional or for a three-dimensional image of individual 20.
- data processor 12 employs an eye-tracking procedure for tracking the eyes of the individual as known in the art, and then determines the gaze direction and/or the head orientation based on the position of the eyes on the image.
- In an eye-tracking procedure, the corners of one or more of the eyes can be detected, e.g., as described in Everingham et al. [In BMVC, 2006], the contents of which are hereby incorporated by reference. Following such detection, each eye can be defined as a region between two identified corners. Additional eye-tracking techniques are found, for example, in U.S. Pat. Nos. 6,526,159 and 8,342,687; European Publication Nos. EP1403680 and EP0596868; and International Publication No. WO1999/027412, the contents of which are hereby incorporated by reference.
- alternatively or additionally, identifiable facial features in the face of individual 20 can be used, including, without limitation, the tip of the nose, the nostrils, the mouth corners, and the ears.
- data processor 12 determines the gaze direction and/or the head orientation based on the position of the identified facial features on the image.
- Techniques for the identification of facial features in an image are known in the art and are found, for example, in U.S. Pat. No. 8,369,586, European Publication Nos. EP1296279 and EP1693782, and International Publication No. WO1999/053443, the contents of which are hereby incorporated by reference.
- the range data can be used to aid the extraction of gaze and/or the head orientation information.
- the nose of individual 20 can be identified based on the range data associated with the face of individual 20 . Specifically, the region in the range data of the face that has the shortest distance to imaging system 24 can be identified as the nose. Once the nose is identified, data processor 12 determines the gaze direction and/or the head orientation based on the position of the nose on the image.
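The nose-identification step above amounts to a minimum search over the range data restricted to the face region. A minimal sketch, with function and parameter names as assumptions:

```python
import math

# Illustrative sketch: identifying the nose as the region of the face with
# the shortest distance to the imaging system, then using its image position
# as a cue for gaze direction / head orientation. Names are assumptions.
def find_nose(depth_map, face_mask):
    """depth_map: rows of distances to the imaging system; face_mask:
    same-shape booleans selecting the face region. Returns the (row, col)
    of the minimum masked distance, or None if the mask is empty."""
    best, best_rc = math.inf, None
    for r, row in enumerate(depth_map):
        for c, dist in enumerate(row):
            if face_mask[r][c] and dist < best:
                best, best_rc = dist, (r, c)
    return best_rc
```

In practice the face region would first be located by a face detector; here the mask is simply given.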
- Two or more systems such as system 10 can be deployed at respective two or more locations and be configured to communicate with each other over communication network 18 .
- the present embodiments contemplate a system for two-way video conferencing between a first party at a first location and a second party at a second location, wherein each of the two locations is a remote location with respect to the other location.
- the two-way video conferencing system can include a first system at the first location and a second system at the second location, wherein each of the first and second systems comprises system 10 .
- the first party is the individual in the local scene whose gaze direction and/or head orientation is determined, and the second party (at the second location) is the remote user whose image is displayed and varied on the display device;
- the second party is the individual in the local scene whose gaze direction and/or head orientation is determined, and the first party (at the first location) is the remote user whose image is displayed and varied on the display device.
- N-way videoconferencing system for allowing videoconference among N parties at respective N locations, wherein each one of the N locations is a remote location with respect to another one of the N locations.
- a system such as system 10 is deployed at each of the N locations.
- One of ordinary skill in the art, provided with the details described herein, would know how to deploy N systems like system 10 to respectively operate at N different locations.
- a party at a particular location of the N locations can be presented with a view of two or more other parties (e.g., of all other N−1 parties) on the respective display device, in a so-called "virtual conference room."
- the virtual conference room can include, in addition to the images of the other parties, three-dimensional models of computer graphics representing, for example, a table and the inside of the conference room.
- the data processor of the system at the particular location varies (e.g., rotates) the view of the virtual conference room and/or the parties displayed on the device.
- when each party is arranged at its respective position in the virtual conference room, each party sees the virtual conference room from the location at which that party is arranged. Accordingly, the image of the virtual conference room viewed by each party is different among the different display devices.
- FIG. 2 is a flowchart diagram of a method suitable for video conferencing, according to some embodiments of the present invention. At least some operations of the method can be executed by a data processor, e.g., data processor 12, and at least some operations of the method can be executed by an imaging system, e.g., imaging system 24.
- the method can be embodied in many forms. For example, it can be embodied on a tangible medium such as a computer for performing the method operations. It can be embodied on a computer-readable medium, comprising computer-readable instructions for carrying out at least some of the method operations. It can also be embodied in an electronic device having digital computer capabilities arranged to run the computer program on the tangible medium or execute the instructions on a computer-readable medium.
- the method begins at 40 .
- the method receives from a remote location a stream of imagery data of a remote user, and at 42 the method displays an image of the remote user on a display device, as further detailed hereinabove.
- the method decodes and/or decompresses the video data before the image is displayed.
- the imagery data stream of the remote user comprises a video stream and range data, and the method reconstructs a three-dimensional image from the data and displays the three-dimensional image on the display device, as further detailed hereinabove.
- the method also receives from the remote location an audio stream accompanying the video stream, and transmits the audio stream to an audio output device, as further detailed hereinabove.
- the method decodes and/or decompresses the audio data before it is transmitted to the audio output device.
- the method optionally illuminates at least a portion of a local scene in front of the display device, for example, by visible and/or infrared light.
- an imaging system, such as imaging system 24, is used for capturing a stream of imagery data of an individual in the local scene.
- the method captures a video stream and range data, as further detailed hereinabove.
- the method optionally and preferably captures imaging data in the visible range and in the infrared range.
- the method receives an audio stream from the individual and/or local scene, and transmits the audio stream to the remote location.
- the method digitizes, encodes and/or compresses the audio stream prior to its transmission.
- the method extracts a gaze direction and/or the head orientation of the individual using a data processor, and at 46 the method varies a view of the image of the remote user responsively to the gaze direction and/or the head orientation, as further detailed hereinabove.
- the variation optionally and preferably comprises varying a displayed orientation of the remote user such that a rate of change of the orientation matches a rate of change of the gaze direction and/or the head orientation.
- the displayed orientation of the remote user is rotated oppositely to a change of the gaze direction and/or the head orientation, and in some embodiments of the present invention a fixed orientation of the remote user relative to the gaze direction is maintained.
- the method optionally and preferably transmits the imagery data of the individual to the remote location.
- the method preferably also transmits the range data, optionally and preferably in the form of a depth map, to the remote location.
- the method ends at 48 .
- compositions, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
- a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
- range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
- a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range.
- the phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
Description
- This application claims the benefit of priority of U.S. Provisional Patent Application No. 61/817,375 filed Apr. 30, 2013, the contents of which are incorporated herein by reference in their entirety.
- The present invention, in some embodiments thereof, relates to the processing and displaying of image data and, more particularly, but not exclusively, to a system and method for video conferencing.
- Videoconferencing has become a useful form of communication between parties for remotely conducting various forms of business, corporate meetings, training, and the like without traveling for face-to-face meetings. On a smaller or personal scale, videophone service or videotelephony enables individuals to communicate vocally and visually using special equipment.
- It is well known that non-verbal messages form an important aspect of personal communication. This is a reason why people have a natural need to see the persons they talk to. The advent of low-cost webcams and the growing bandwidth for IP traffic have started to enable personal visual communication for people at home or in the office. A way to add impressiveness to a visual communication system, particularly to give users the impression of really being with the person they talk to, is adding a third spatial dimension to the visualization.
- Inexpensive video cameras and microphones have been developed for interfacing to personal computers to enable vocal and visual communication over the Internet. Web cameras or webcams can be attached to liquid crystal display (LCD) monitors, directed at the user, and interfaced to the computer for acquiring live images of the user. A microphone mounted on a desk or the monitor is connected to a microphone input of the computer to receive the user's voice input. Many portable computer devices such as laptops, notebooks, netbooks, tablet computers, pad computers, and the like are provided with built-in video cameras and microphones. Most cellular or cell phones also have cameras capable of recording still or moving images. Such cameras and microphones allow computer users to engage in an informal kind of vocal and visual communication over the Internet, which is sometimes referred to as “video chatting”.
- According to an aspect of some embodiments of the present invention there is provided a system for video conferencing. The system comprises a data processor configured for receiving from a remote location a stream of imagery data of a remote user; displaying an image of the remote user on a display device; receiving a stream of imagery data of an individual in a local scene in front of the display device; extracting a gaze direction and/or the head orientation of the individual; and varying a view of the image responsively to the gaze direction and/or the head orientation.
- According to some embodiments of the invention the system comprises the display device, and an imaging system constituted to receive a view of the local scene and transmit a stream of imagery data of the local scene to the data processor.
- According to some embodiments of the invention the imaging system is configured for capturing a video stream and range data.
- According to some embodiments of the invention the imaging system is configured for capturing stereoscopic image data.
- According to some embodiments of the invention the data processor is configured for calculating range data from the stereoscopic image data.
- According to some embodiments of the invention the system comprises a light source configured for projecting a light beam onto the individual, wherein the data processor is configured for calculating range data by a time-of-flight technique.
- According to some embodiments of the invention the system comprises a light source configured for projecting a light pattern onto the individual, wherein the data processor is configured for calculating range data by a triangulation technique, such as, but not limited to, a structured light technique.
- According to some embodiments of the invention the stream of imagery data of the remote user comprises a video stream and range data, wherein the data processor is configured for reconstructing a three-dimensional image from the stream of imagery data and displaying the three-dimensional image on the display device.
- According to some embodiments of the invention the variation of the view comprises varying a displayed orientation of the remote user such that a rate of change of the orientation matches a rate of change of the gaze direction and/or head orientation.
- According to some embodiments of the invention the variation of the view comprises varying a displayed orientation of the remote user oppositely to a change of the gaze direction and/or the head orientation.
- According to some embodiments of the invention the variation of the view comprises maintaining a fixed orientation of the remote user relative to the gaze direction.
- According to some embodiments of the invention the system comprises at least one infrared light source constituted for illuminating at least a portion of the local scene, wherein the stream of imagery data of the individual comprises image data received from the individual in the visible range and in the infrared range.
- According to an aspect of some embodiments of the present invention there is provided a system for at least two-way video conferencing between at least a first party at a first location and a second party at a second location. The system comprises a first system at the first location and a second system at the second location, each of the first and the second systems being the system as delineated above and optionally as further detailed hereinunder.
- According to an aspect of some embodiments of the present invention there is provided a method of video conferencing. The method comprises: receiving from a remote location a stream of imagery data of a remote user; displaying an image of the remote user on a display device; using an imaging system for capturing a stream of imagery data of an individual in a local scene in front of the display device; and using a data processor for extracting a gaze direction and/or the head orientation of the individual, and varying a view of the image responsively to the gaze direction and/or the head orientation.
- According to some embodiments of the invention the capturing of the stream of imagery data comprises capturing a video stream and range data.
- According to some embodiments of the invention the capturing of stream of imagery data comprises capturing stereoscopic image data. According to some embodiments of the invention the method comprises calculating range data from the stereoscopic image data.
- According to some embodiments of the invention the method comprises projecting a light beam onto the individual, and calculating the range data by a time-of-flight technique.
- According to some embodiments of the invention the method comprises projecting a light pattern onto the individual, and calculating the range data by a triangulation technique, such as, but not limited to, a structured light technique.
- According to some embodiments of the invention the stream of imagery data of the remote user comprises a video stream and range data, and the method comprises reconstructing a three-dimensional image from the stream of imagery data and displaying the three-dimensional image on the display device.
- According to some embodiments of the invention the method comprises varying a displayed orientation of the remote user such that a rate of change of the orientation matches a rate of change of the gaze direction and/or the head orientation.
- According to some embodiments of the invention the method comprises varying a displayed orientation of the remote user oppositely to a change of the gaze direction and/or the head orientation.
- According to some embodiments of the invention the method comprises maintaining a fixed orientation of the remote user relative to the gaze direction.
- According to some embodiments of the invention the method comprises illuminating at least a portion of the local scene by infrared light, wherein the capturing of the stream of imagery data of the individual comprises imaging the individual in the visible range and in the infrared range.
- According to an aspect of some embodiments of the present invention there is provided a computer software product. The computer software product comprises a computer-readable medium in which program instructions are stored, which instructions, when read by a data processor, cause the data processor to receive a video stream of a remote user and a video stream of an individual in a local scene, and to execute the method as delineated above and optionally as further detailed hereinunder.
- Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
- Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
- For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions.
- Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
- Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
- In the drawings:
FIG. 1 is a schematic illustration of a system for video conferencing, according to some embodiments of the present invention; and
FIG. 2 is a flowchart diagram of a method suitable for video conferencing, according to some embodiments of the present invention.
- The present invention, in some embodiments thereof, relates to the processing and displaying of image data and, more particularly, but not exclusively, to a system and method for video conferencing.
- Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
- Referring now to the drawings,
FIG. 1 illustrates a system 10 for video conferencing, according to some embodiments of the present invention. -
System 10 comprises a data processor 12 configured for receiving from a remote location (not shown) a stream of imagery data of a remote user in a remote scene. For example, the stream (referred to below as the remote stream) can be broadcast over a communication network 18 (e.g., a local area network, or a wide area network such as the Internet or a cellular network), and data processor 12 can be configured to communicate with network 18 and receive the remote stream therefrom. In various exemplary embodiments of the invention, data processor 12 also receives from the remote location an audio stream (referred to below as the remote audio stream) accompanying the imagery data stream. - As used herein "imagery data" refers to a plurality of values that represent a two- or three-dimensional image, and that can therefore be used to reconstruct a two- or three-dimensional image.
- Typically, the imagery data comprise values (e.g., grey-levels, intensities, color intensities, etc.), each corresponding to a picture-element (e.g., a pixel, a sub-pixel or a group of pixels) in the image. In some embodiments of the present invention, the imagery data also comprise range data as further detailed hereinbelow.
- As used herein “stream of imagery data” or “imagery data stream” refers to time-dependent imagery data, wherein the plurality of values varies with time.
- Typically, the stream of imagery data comprises a video stream which may include a plurality of time-dependent values (e.g., grey-levels, intensities, color intensities, etc.), wherein a particular value at a particular time-point corresponds to a picture-element (e.g., a pixel, a sub-pixel or a group of pixels) in a video frame. In some embodiments of the present invention, the stream of imagery data also comprises time-dependent range data.
- In various exemplary embodiments of the invention the remote imagery data stream comprises a video stream, referred to herein as the remote video stream.
Data processor 12 is preferably configured for decoding the remote video stream such that it can be interpreted and properly displayed. Data processor 12 is preferably configured to support a number of known codecs, such as MPEG-2, MPEG-4, H.264, H.263+, or other codecs. Data processor 12 uses the remote video stream for displaying a video image 14 of the remote user on a display device 16. In various exemplary embodiments of the invention, system 10 also comprises display device 16. -
Data processor 12 can decode the remote audio stream such that it can be interpreted and properly output. If the remote audio stream is compressed, data processor 12 preferably decompresses it. Data processor 12 is preferably configured to support a number of known audio codecs. Data processor 12 transmits the decoded remote audio stream to an audio output device 34, such as, but not limited to, a speaker, a headset and the like. In various exemplary embodiments of the invention, system 10 comprises audio output device 36. -
Data processor 12 can be any man-made machine capable of executing an instruction set and/or performing calculations. Representative examples include, without limitation, a general purpose computer supplemented by dedicated software, a general purpose microprocessor supplemented by dedicated software, a general purpose microcontroller supplemented by dedicated software, a general purpose graphics processor supplemented by dedicated software, and/or a digital signal processor (DSP) supplemented by dedicated software. The data processor can also comprise dedicated circuitry (e.g., a printed circuit board) and/or a programmable electronic chip into which dedicated software is burned. - In various exemplary embodiments of the invention the stream of imagery data comprises range data corresponding to the video stream, wherein both the video stream and the range data are time dependent and synchronized, such that each video frame of the video stream has a corresponding set of range data.
- The range data describes topographical information of the remote user or remote scene and is optionally and preferably received in the form of a depth map.
- The term “depth map,” as used herein, refers to a representation of a scene (the remote scene, in the present example) as a two-dimensional matrix, in which each matrix element corresponds to a respective location in the scene and has a respective matrix element value indicative of the distance from a certain reference location to the respective scene location. The reference location is typically static and the same for all matrix elements. A depth map optionally and preferably has the form of an image in which the pixel values indicate depth information. By way of example, an 8-bit grey-scale image can be used to represent depth information. The depth map can provide depth information on a per-pixel basis of the image data, but may also use a coarser granularity, such as a lower resolution depth map wherein each matrix element value provides depth information for a group of pixels of the image data.
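The depth-map representation described above can be sketched as follows; the near and far limits, the 8-bit quantization range, and the block size used for the coarser granularity are illustrative assumptions and not part of the description:

```python
import numpy as np

def quantize_depth(depth_m, near=0.5, far=5.0):
    """Map metric depth (meters) to an 8-bit grey-scale depth image."""
    clipped = np.clip(depth_m, near, far)
    scaled = (clipped - near) / (far - near)        # normalize to 0..1
    return (scaled * 255).round().astype(np.uint8)  # pixel value = depth

def coarsen(depth, block=4):
    """Coarser granularity: one depth value per block of pixels."""
    h, w = depth.shape
    h2, w2 = h - h % block, w - w % block
    view = depth[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return view.mean(axis=(1, 3))

depth = np.full((8, 8), 2.0)   # a flat wall 2 m from the reference location
depth[2:6, 2:6] = 1.0          # a nearer object, e.g., the remote user
img8 = quantize_depth(depth)   # 8-bit image form of the depth map
coarse = coarsen(depth)        # lower-resolution depth map (2 x 2)
```

Here each matrix element of `coarse` provides a single depth value for a 4×4 group of pixels, corresponding to the lower-resolution variant mentioned above.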
- The range data can also be provided in the form of a disparity map. A disparity map refers to the apparent shift of objects or parts of objects in a scene (the remote scene, in the present example) when observed from two different viewpoints, such as from the left-eye and the right-eye viewpoint. Disparity map and depth map are related and can be mapped onto one another provided the geometry of the respective viewpoints of the disparity map is known, as is commonly appreciated by those skilled in the art.
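The relation between disparity and depth noted above can be illustrated with a minimal sketch, assuming two parallel viewpoints with a known baseline and focal length (both values below are illustrative):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Map a disparity map (pixels) to a depth map (meters) via
    Z = f * B / d for two parallel viewpoints."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        depth = focal_px * baseline_m / d
    depth[d <= 0] = np.inf   # zero disparity: point at infinity
    return depth

disparity = np.array([[16.0, 8.0], [4.0, 0.0]])   # disparities in pixels
depth = disparity_to_depth(disparity, focal_px=800.0, baseline_m=0.06)
```

The mapping is symmetric (d = f·B / Z), which is why the two representations are interchangeable once the viewpoint geometry is known.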
- When the remote stream of imagery data includes a video stream as well as range data,
data processor 12 reconstructs a three-dimensional image of the remote user from the imagery data and displays a view of the three-dimensional image 14 on display device 16. - The three-dimensional image comprises geometric properties of a non-planar surface which at least partially encloses a three-dimensional volume. Generally, the non-planar surface is a two-dimensional object embedded in a three-dimensional space.
- Formally, a non-planar surface is a metric space induced by a smooth connected and compact Riemannian 2-manifold. Ideally, the geometric properties of the non-planar surface would be provided explicitly, for example, the slope and curvature (or even other spatial derivatives or combinations thereof) for every point of the non-planar surface. Yet, such information is rarely attainable and the spatial information of the three-dimensional image is provided for a sampled version of the non-planar surface, which is a set of points on the Riemannian 2-manifold and which is sufficient for describing the topology of the 2-manifold. Typically, the spatial information of the non-planar surface is a reduced version of a 3D spatial representation, which may be either a point cloud or a 3D reconstruction (e.g., a polygonal mesh or a curvilinear mesh) based on the point cloud. The 3D image is expressed via a 3D coordinate system, such as, but not limited to, a Cartesian, spherical, ellipsoidal, 3D parabolic or paraboloidal coordinate system.
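One way to obtain the point-cloud representation mentioned above is to back-project a depth map through a pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) below are assumed for illustration:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into an (N, 3) Cartesian point cloud."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]              # pixel coordinates
    z = depth.astype(float)
    x = (u - cx) * z / fx                  # pinhole-model back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]        # keep only valid (nonzero) depth

depth = np.full((4, 4), 2.0)               # a small flat patch, 2 m away
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.5, cy=1.5)
```

A polygonal or curvilinear mesh, when needed, can then be reconstructed over such a point cloud.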
- It is appreciated that a three-dimensional image of an object is typically a two-dimensional image which, in addition to indicating the lateral extent of object members, further indicates the relative or absolute distance of the object members, or portions thereof, from some reference point, such as the location of the imaging device. Thus, a three-dimensional image typically includes information residing on a non-planar surface of a three-dimensional body and not necessarily in the bulk. Yet, it is commonly acceptable to refer to such an image as "three-dimensional" because the non-planar surface is conveniently defined over a three-dimensional system of coordinates. Thus, throughout this specification and in the claims section that follows, the term "three-dimensional image" primarily relates to surface entities.
- In order to improve the quality of the reconstructed three-dimensional image, additional occlusion information, known in the art as de-occlusion information, can be provided. (De-)occlusion information relates to image and/or depth information which can be used to represent views for additional viewpoints (e.g., other than those used for generating the disparity map). In addition to the information that was occluded by objects, the occlusion information may also comprise information in the vicinity of occluded regions. The availability of occlusion information enables filling in of holes which occur when reconstructing the three-dimensional image from 2D-plus-range data.
- Occlusion data can be generated by
data processor 12 and/or at the remote location. When the occlusion data are generated at the remote location, they are optionally and preferably also transmitted over the communication network and received by data processor 12. When the occlusion data are generated by data processor 12, they can be approximated using the range data, optionally and preferably from visible background for occlusion areas. Receiving the occlusion data from the remote location is preferred from the standpoint of quality, and generating the occlusion data by processor 12 is preferred from the standpoint of a reduced amount of transferred data. - The reconstruction of the three-dimensional image using the image and range data, and optionally also occlusion data, can be done using any procedure known in the art. Representative examples of suitable algorithms are found in Qingxiong Yang, "Spatial-Depth Super Resolution for Range Images," IEEE Conference on Computer Vision and Pattern Recognition, 2007, pages 1-8; H. Hirschmuller, "Stereo Processing by Semiglobal Matching and Mutual Information," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(2):328-341; International Publication No. WO1999/006956; European Publication Nos. EP1612733 and EP2570079; U.S. Published Application Nos. 20120183238 and 20120306876; and U.S. Pat. Nos. 7,583,777 and 8,249,334, the contents of which are hereby incorporated by reference.
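The approximation of occlusion data from visible background can be sketched, in a simplified row-wise form, as follows; filling each hole from the farther (background) neighbour is an illustrative heuristic, not the exact procedure of the present embodiments:

```python
import numpy as np

def fill_holes_from_background(image, depth, hole_mask):
    """Fill hole pixels row by row from the farther (background) side."""
    out = image.astype(float).copy()
    h, w = image.shape
    for r in range(h):
        for c in np.flatnonzero(hole_mask[r]):
            left, right = c - 1, c + 1
            while left >= 0 and hole_mask[r, left]:
                left -= 1                  # nearest known pixel to the left
            while right < w and hole_mask[r, right]:
                right += 1                 # nearest known pixel to the right
            candidates = [i for i in (left, right) if 0 <= i < w]
            if not candidates:
                continue                   # the whole row is a hole
            # the neighbour with the larger depth is taken as background
            src = max(candidates, key=lambda i: depth[r, i])
            out[r, c] = image[r, src]
    return out
```

The rationale is that holes exposed by a viewpoint change belong to the background, so background pixels adjacent to the occluded region are the most plausible fill values.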
- In various exemplary embodiments of the
invention data processor 12 also receives a stream of imagery data of an individual 20 in a local scene 22 in front of the display device. Local scene 22 may also include other objects (e.g., a table, a chair, walls, etc.) which are not shown for clarity of presentation. The imagery data stream of individual 20 is referred to herein as the local imagery data stream. The imagery data stream is generated by an imaging system 24 constituted to capture a view of local scene 22 and transmit corresponding imagery data to data processor 12. - The local imagery data preferably comprises a video stream, referred to herein as the local video stream.
Data processor 12 preferably encodes the video stream received from imaging system 24 in accordance with a known codec, such as MPEG-2, MPEG-4, H.264, H.263+, or other codecs suitable for video conferencing. In some embodiments of the present invention data processor 12 also compresses the encoded video for transmission. Data processor 12 preferably transmits the local video stream over communication network 18 to the remote location. In various exemplary embodiments of the invention system 10 comprises imaging system 24. - In various exemplary embodiments of the
invention data processor 12 additionally receives an audio stream from individual 20 and/or scene 22, which audio stream accompanies the video stream. The audio stream received from individual 20 and/or scene 22 is referred to herein as the local audio stream. The local audio stream can be captured by an audio pickup device 36, such as, but not limited to, a microphone, a microphone array, a headset or the like, which transmits the audio stream to data processor 12. If the audio stream is analogue, data processor 12 preferably digitizes the audio stream. Data processor 12 is optionally and preferably configured for encoding and optionally compressing the audio data in accordance with known codec and compression protocols suitable for video conferencing. Following audio processing, data processor 12 transmits the audio stream over communication network 18 to the remote location. In various exemplary embodiments of the invention system 10 comprises audio pickup device 36. -
Imaging system 24, display device 16, audio output device 34, audio pickup device 36, and data processor 12 may be integrated into system 10, or may be separate components that can be interchanged or replaced. For example, imaging system 24, display device 16, audio output device 34 and audio pickup device 36 may be part of a laptop computer or dedicated video conferencing system. Alternatively, imaging system 24 can connect to data processor 12 via USB, FireWire, Bluetooth, Wi-Fi, or any other connection type. Similarly, display device 16 may connect to data processor 12 using an appropriate connection mechanism, for example and without limitation, HDMI, DisplayPort, composite video, component video, S-Video, DVI, or VGA. Audio devices 34 and 36 can connect to data processor 12 via USB, an XLR connector, a ¼″ connector, a 3.5 mm connector or the like. -
Imaging system 24 is optionally and preferably configured for capturing the video stream and range data. The range data can be of any type described above with respect to the range data received from the remote location. Optionally, the range data also comprises occlusion data. - Range data can be captured by imaging
system 24 in more than one way. - In some embodiments of the present
invention imaging system 24 is configured for capturing stereoscopic image data, for example, by capturing scene 22 from two or more different viewpoints. Thus, in various exemplary embodiments of the invention imaging system 24 comprises a plurality of spaced apart imaging sensors. Shown in FIG. 1 are two imaging sensors 24 a and 24 b, but it is not intended to limit the scope of the present invention to a system with two imaging sensors. Thus, the present embodiments contemplate configurations with one imaging sensor or more than two (e.g., 3, 4, 5 or more) imaging sensors. - From the stereoscopic image data, the data processor optionally and preferably calculates range data. The calculated range data can be in the form of a depth map and/or a disparity map, as further detailed hereinabove. Optionally, the data processor also calculates occlusion data. In some embodiments,
imaging system 24 captures scene 22 from three or more viewpoints so as to allow the data processor to calculate the occlusion data with higher precision. - In various exemplary embodiments of the invention at least one of the imaging sensors of
system 24 is configured to provide a field-of-view of scene 22 or a part thereof over a spectral range from infrared to visible light. Preferably at least one, and more preferably all, of the imaging sensors comprise a pixelated imager, such as, but not limited to, a CCD or CMOS matrix, which is devoid of an IR cut filter and which therefore generates a signal in response to light at any wavelength within the visible range and any wavelength within the IR range, more preferably the near IR range. - Representative examples of a characteristic wavelength range detectable by the imaging sensors include, without limitation, any wavelength from about 400 nm to about 1100 nm, or any wavelength from about 400 nm to about 1000 nm. In some embodiments of the present invention the imaging devices also provide a signal responsively to light in the ultraviolet (UV) range. In these embodiments, the characteristic wavelength range detectable by the imaging devices can be from about 300 nm to about 1100 nm. Other characteristic wavelength ranges are not excluded from the scope of the present invention.
- As used herein the term “about” refers to ±10%.
- The imaging sensors optionally and preferably provide partially overlapping field-of-views of
scene 22. The overlap between the field-of-views allows data processor 12 to combine the field-of-views. In various exemplary embodiments of the invention the spacing between the imaging sensors is selected to allow data processor 12 to provide a three-dimensional reconstruction of individual 20 and optionally other objects in scene 22. - In some embodiments of the
present invention system 10 comprises one or more light sources 26 constituted for illuminating at least part of scene 22. Shown in FIG. 1 is one light source, but it is not intended to limit the scope of the present invention to a system with one light source. Thus, the present embodiments contemplate configurations with two light sources or more than two (e.g., 3, 4, 5 or more) light sources. Herein, unless explicitly stated, a reference to light sources in the plural form should be construed as a reference to one or more light sources. - The light source can be of any type known in the art, including, without limitation, a light emitting diode (LED) and a laser device.
- One or more of
light sources 26 optionally and preferably emit infrared light. In these embodiments, the infrared light sources generate infrared light at a wavelength detectable by the imaging sensors. The infrared light sources are optionally and preferably positioned adjacent to the imaging sensors, although this need not necessarily be the case, since, for some applications, it may not be necessary for the light sources to be adjacent to the imaging sensors. - The light sources can provide infrared illumination in more than one way. In some embodiments of the present invention one or more of the light sources provide flood illumination, and in some embodiments of the present invention one or more of the light sources generate an infrared pattern.
- As used herein, “flood illumination” refers to illumination which is spatially continuous over an area that is illuminated by the illumination.
- Flood illumination is useful, for example, when
scene 22 is under low ambient light conditions, and it is desired to increase the amount of light reflected back from the scene in the direction of the imaging device. - As used herein, "infrared pattern" refers to infrared illumination which is non-continuous and non-uniform in intensity over at least a central part of the area that is illuminated by the illumination.
- An infrared pattern is useful, for example, when it is desired to add identifiable features to the scene. The pattern typically includes a plurality of distinct features at horizontal and vertical angular resolutions of from about 0.2° to about 2°, or from about 0.5° to about 1.5°, e.g., about 1°. In computer vision applications employing a CMOS imager, horizontal and vertical angular views of 1° each typically correspond to about 20×20 pixels of the imager.
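The figure quoted above can be checked with a short calculation; the sensor resolution and lens field of view are assumed, typical values for a CMOS imager and are not specified by the text:

```python
# Assumed, typical CMOS-imager values; neither is given by the description.
horizontal_pixels = 1280       # sensor width in pixels
horizontal_fov_deg = 64.0      # lens horizontal field of view in degrees

pixels_per_degree = horizontal_pixels / horizontal_fov_deg
# 1280 / 64 = 20, so a 1-degree-by-1-degree feature covers ~20 x 20 pixels
```

With a narrower field of view or a higher-resolution imager, the same 1° feature would span proportionally more pixels.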
- The present embodiments also contemplate one or more light sources which illuminate the local scene with a pattern in the visible range.
- Optionally, the pattern varies with time. For example, a series of patterns can be projected, one pattern at a time, in a rapid and periodic manner. This can be done in any of a number of ways. For example, a plate having a periodically varying transmission coefficient can be moved in front of an illuminating device. Alternatively, a disk having a circumferentially varying transmission coefficient can be rotated in front of the illuminating device. Still alternatively, a strobing technique can be employed to rapidly project a series of stationary patterns, phase-shifted with respect to each other. Also contemplated is the use of optical diffractive elements for forming the pattern.
- In some embodiments of the present invention one or more of the light sources are used to illuminate
local scene 22, particularly individual 20, by visible, infrared or ultraviolet light so as to allow processor 12 or imaging system 24 to calculate the range data using a time-of-flight (TOF) technique. In these embodiments, the respective light sources are configured for emitting light with an intensity that varies with time. - The TOF technique can be employed, for example, using a phase-shift technique or a pulse technique.
- With the phase-shift technique, the amplitude of the emitted light is periodically modulated (e.g., by sinusoidal modulation) and the phase of the modulation at emission is compared to the phase of the modulation at reception. The modulation period is optionally and preferably on the order of twice the difference between the maximum measurement distance and the minimum measurement distance divided by the velocity of light, and the propagation time interval can be determined from the phase difference.
- With the pulse technique, light is emitted in discrete pulses without the requirement of periodicity. For each emitted pulse of light the time elapsed for the reflection to return is measured, and the range is calculated as one-half the product of round-trip time and the velocity of the signal.
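The two TOF variants described above can be sketched as follows; the speed of light is physical, while the modulation frequency and timing values used below are assumed examples:

```python
import math

C = 299_792_458.0   # velocity of light in vacuum, m/s

def range_from_phase(phase_rad, mod_freq_hz):
    """Phase-shift technique: the measured phase difference between the
    emitted and received modulation gives the round-trip time, and hence
    the distance (unambiguous up to half the modulation wavelength)."""
    round_trip_s = phase_rad / (2.0 * math.pi * mod_freq_hz)
    return C * round_trip_s / 2.0

def range_from_pulse(round_trip_s):
    """Pulse technique: one-half the product of round-trip time and the
    velocity of the signal."""
    return C * round_trip_s / 2.0
```

For example, a phase shift of π radians at a 10 MHz modulation corresponds to about 7.5 m, and a 20 ns pulse round trip to about 3 m.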
- In some embodiments,
data processor 12 or imaging system 24 calculates the range data based on a triangulation technique, such as, but not limited to, a structured light technique. In these embodiments, one or more of light sources 26 projects a light pattern in the visible, infrared or ultraviolet range onto scene 22, particularly individual 20. An image of the reflected light pattern is captured by an imaging sensor (e.g., imaging sensor 24 a) and the range data are calculated by the data processor or imaging system 24 based on the spacings and/or distortions associated with the reflected light pattern relative to the projected light pattern. Preferably, the light pattern is invisible to the naked eye (e.g., in the infrared or ultraviolet range) so that when the remote user is presented with an image of individual 20, the projected pattern does not obstruct or otherwise interfere with the presented image. Representative examples of patterns suitable for calculating range data using the structured light technique include, without limitation, a single dot, a single line, and a two-dimensional pattern (e.g., horizontal and vertical lines, checkerboard pattern, etc.). The light from light source 26 can scan individual 20 and the process is repeated for each of a plurality of projection directions. - Additional techniques for calculating range data suitable for the present embodiments are described in, e.g., S. Inokuchi, K. Sato, and F. Matsuda, "Range imaging system for 3D object recognition", in Proceedings of the International Conference on Pattern Recognition, pages 806-808, 1984; U.S. Pat. Nos. 4,488,172, 4,979,815, 5,110,203, 5,703,677, 5,838,428, 6,349,174, 6,421,132, 6,456,793, 6,507,706, 6,584,283, 6,823,076, 6,856,382, 6,925,195 and 7,194,112; and International Publication No. WO 2007/043036, the contents of which are hereby incorporated by reference.
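The triangulation underlying the structured light technique can be illustrated with a minimal sketch in which the projector plays the role of a second viewpoint; the focal length and projector-camera baseline below are assumed values:

```python
def depth_from_pattern_shift(shift_px, focal_px, baseline_m):
    """Triangulated depth from the lateral shift of a projected feature
    relative to its position on a reference plane; meaningful only for a
    positive shift."""
    if shift_px <= 0:
        raise ValueError("feature not displaced; depth cannot be resolved")
    return focal_px * baseline_m / shift_px
```

For instance, a dot displaced by 12 pixels with an assumed focal length of 600 pixels and a baseline of 0.08 m triangulates to about 4 m.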
- In some embodiments of the present
invention data processor 12 extracts a gaze direction 28 of individual 20, and varies the image 14 of the remote user on display device 16 responsively to the gaze direction. - As used herein, "gaze direction" refers to the direction of a predetermined identifiable point or a predetermined set of identifiable points on individual 20 relative to display
device 16 or relative to imaging system 24. Typically, but not necessarily, gaze direction 28 is the direction of an eye or a nose of individual 20 relative to display device 16. - In some embodiments of the present
invention data processor 12 also extracts a head orientation of individual 20, wherein data processor 12 varies the image 14 of the remote user on display device 16 responsively to the head orientation. - In various exemplary embodiments of the
invention image 14 is varied by rotating the view of image 14 such that the remote user is displayed as if viewed from a different perspective. The rotation can be about any axis of rotation, preferably, but not necessarily, an axis of rotation that is parallel to the plane of display device 16. Rotation about an axis of rotation that is perpendicular to the plane of display device 16 is not excluded from the scope of the present invention. - The orientation of
image 14 is optionally and preferably varied by data processor 12 such that a rate of change of the orientation of image 14 matches a rate of change of gaze direction 28 and/or the head orientation. - In some embodiments of the present invention,
data processor 12 varies the displayed orientation of the remote user oppositely to the change of gaze direction 28 and/or the head orientation. It was found by the present inventors that such synchronization between the change in the gaze direction 28 and/or the head orientation and the change in the displayed orientation of the remote user mimics a three-dimensional reality for individual 20 while looking at image 14. Thus, while individual 20 moves his or her head relative to display device 16, image 14 is rotated in the opposite direction, such that individual 20 is provided with a different view of the remote user, as if the remote user were physically present in scene 22. - In the representative illustration of
FIG. 1, which is not to be considered as limiting, two possible motion paths 32 a and 32 b of individual 20 and two possible rotation directions 30 a and 30 b of image 14 are shown. Motion path 32 a is horizontal from right to left, motion path 32 b is vertical from a lower to a higher position, rotation direction 30 a is anticlockwise with a rotation axis pointing downwards (negative yaw), and rotation direction 30 b is clockwise with a rotation axis pointing rightwards (negative pitch). When it is desired to mimic a three-dimensional reality, a motion of individual 20 along path 32 a is preferably accompanied by rotation of image 14 along rotation direction 30 a, a motion of individual 20 along path 32 b is preferably accompanied by rotation of image 14 along rotation direction 30 b, a motion of individual 20 along a path opposite to 32 a is preferably accompanied by rotation of image 14 along a direction opposite to rotation direction 30 a, and a motion of individual 20 along a path opposite to 32 b is preferably accompanied by rotation of image 14 along a direction opposite to rotation direction 30 b. - Also contemplated are embodiments in which
data processor 12 varies the displayed orientation of the remote user so as to maintain a fixed orientation of the remote user relative to gaze direction 28 and/or the head orientation. For example, the displayed orientation can be rotated such that the gaze of the remote user follows the location of individual 20. Thus, when a fixed relative orientation of the remote user is desired, a motion of individual 20 along path 32 a is preferably accompanied by rotation of image 14 along a rotation direction opposite to direction 30 a, a motion of individual 20 along path 32 b is preferably accompanied by rotation of image 14 along a rotation direction opposite to direction 30 b, a motion of individual 20 along a path opposite to 32 a is preferably accompanied by rotation of image 14 along rotation direction 30 a, and a motion of individual 20 along a path opposite to 32 b is preferably accompanied by rotation of image 14 along rotation direction 30 b. - It is appreciated that any motion path of individual 20 can be expressed as a linear combination of
directions 32 a and 32 b. - While the embodiments above are described with a particular emphasis on rotations that are opposite to the change in the gaze direction and on rotations that maintain a fixed relative orientation of
image 14, it is to be understood that more detailed reference to these rotations is not to be interpreted as limiting the scope of the invention in any way. Thus, some embodiments of the present invention contemplate any relation between the variation of the view of image 14 and the change in gaze direction 28 and/or the head orientation. Furthermore, while the embodiments above are described with a particular emphasis on variations realized by rotations about axes that are parallel to the plane of display device 16 (the yaw and/or pitch axes), it is to be understood that more detailed reference to these variations is not to be interpreted as limiting the scope of the invention in any way. Thus, some embodiments of the present invention contemplate any variation of the view of image 14, including rotations about an axis perpendicular to the plane of display device 16 (the roll axis), and translation of image 14 across display device 16. - The gaze direction and/or the head orientation can be extracted by processing the images in the local imagery data stream using any technique known in the art, either for a two-dimensional or for a three-dimensional image of
individual 20. - For example, in some embodiments of the present
invention data processor 12 employs an eye-tracking procedure for tracking the eyes of the individual as known in the art, and then determines the gaze direction and/or the head orientation based on the position of the eyes on the image. As a representative example of an eye-tracking procedure, the corners of one or more of the eyes can be detected, e.g., as described in Everingham et al. [In BMVC, 2006], the contents of which are hereby incorporated by reference. Following such detection, each eye can be defined as a region between two identified corners. Additional eye-tracking techniques are found, for example, in U.S. Pat. Nos. 6,526,159 and 8,342,687; European Publication Nos. EP1403680 and EP0596868; and International Publication No. WO1999/027412, the contents of which are hereby incorporated by reference. - Also contemplated are other face recognition techniques that use the geometrical characteristics of a face to identify facial features in the face of
individual 20, including, without limitation, the tip of the nose, the nostrils, the mouth corners, and the ears. Once these facial features are identified, data processor 12 determines the gaze direction and/or the head orientation based on the position of the identified facial features on the image. Techniques for the identification of facial features in an image are known in the art and are found, for example, in U.S. Pat. No. 8,369,586, European Publication Nos. EP1296279, EP1693782, and International Publication No. WO1999/053443, the contents of which are hereby incorporated by reference. - When
data processor 12 generates range data of individual 20, the range data can be used to aid the extraction of gaze and/or head orientation information. For example, the nose of individual 20 can be identified based on the range data associated with the face of individual 20. Specifically, the region in the range data of the face that has the shortest distance to imaging system 24 can be identified as the nose. Once the nose is identified, data processor 12 determines the gaze direction and/or the head orientation based on the position of the nose on the image. - Two or more systems such as
system 10 can be deployed at respective two or more locations and be configured to communicate with each other over communication network 18. - For example, the present embodiments contemplate a system for two-way video conferencing between a first party at a first location and a second party at a second location, wherein each of the two locations is a remote location with respect to the other location. The two-way video conferencing system can include a first system at the first location and a second system at the second location, wherein each of the first and second systems comprises
system 10. Thus, for the first system, the first party is the individual in the local scene whose gaze direction and/or head orientation is determined, and the second party (at the second location) is the remote user whose image is displayed and varied on the display device; for the second system, the second party is the individual in the local scene whose gaze direction and/or head orientation is determined, and the first party (at the first location) is the remote user whose image is displayed and varied on the display device. - It is to be understood that it is not intended to limit the scope of the present invention to a two-way videoconferencing system. The present embodiments contemplate an N-way videoconferencing system for allowing a videoconference among N parties at respective N locations, wherein each one of the N locations is a remote location with respect to another one of the N locations. In these embodiments, a system such as
system 10 is deployed at each of the N locations. One of ordinary skill in the art, provided with the details described herein, would know how to deploy N systems like system 10 to respectively operate at N different locations. - For example, a party at a particular location of the N locations can be presented with a view of two or more other parties (e.g., of all other N−1 parties) on the respective display device, in a so-called "virtual conference room." The virtual conference room can include, in addition to the images of the other parties, three-dimensional computer graphics models representing, for example, a table and the inside of the conference room. When the gaze and/or head orientation of the party at the particular location changes, the data processor of the system at the particular location varies the view (e.g., rotates it) of the virtual conference room and/or of the parties displayed on the device. Preferably, when each party is arranged at its respective position in the virtual conference room, each party sees the virtual conference room from the location at which that party is arranged. Accordingly, the image of the virtual conference room which is viewed by each party on its display is different among the different display devices.
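The sign convention described hereinabove for mimicking a three-dimensional reality (motion along path 32 a accompanied by rotation direction 30 a, motion along path 32 b by rotation direction 30 b, and reversed paths by reversed rotations) can be sketched as follows; the unit gain is an assumed tuning parameter, not taken from the description:

```python
def image_rotation_rates(gaze_rate_32a, gaze_rate_32b, gain=1.0):
    """Return (yaw_rate, pitch_rate) for image 14 so that its rate of
    change matches, and opposes, the rate of change of the gaze direction:
    motion along 32a maps to rotation 30a (negative yaw), motion along 32b
    to rotation 30b (negative pitch), and reversed motion reverses the
    rotation."""
    yaw_rate = -gain * gaze_rate_32a     # path 32a -> rotation 30a
    pitch_rate = -gain * gaze_rate_32b   # path 32b -> rotation 30b
    return yaw_rate, pitch_rate
```

Using a negative gain instead yields the other behaviour described hereinabove, in which the displayed orientation follows the gaze so that a fixed relative orientation of the remote user is maintained.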
- Reference is now made to
FIG. 2 which is a flowchart diagram of a method suitable for video conferencing, according to some embodiments of the present invention. At least some operations of the method can be executed by a data processor, e.g., data processor 12, and at least some operations of the method can be executed by an imaging system, e.g., imaging system 24. - It is to be understood that, unless otherwise defined, the operations described hereinbelow can be executed either contemporaneously or sequentially in many combinations or orders of execution. Specifically, the ordering of the flowchart diagrams is not to be considered as limiting. For example, two or more operations, appearing in the following description or in the flowchart diagrams in a particular order, can be executed in a different order (e.g., a reverse order) or substantially contemporaneously. Additionally, several operations described below are optional and may not be executed.
- The method can be embodied in many forms. For example, it can be embodied on a tangible medium such as a computer for performing the method operations. It can be embodied on a computer readable medium, comprising computer readable instructions for carrying out at least some of the method operations. It can also be embodied in an electronic device having digital computer capabilities arranged to run the computer program on the tangible medium or execute the instructions on the computer readable medium.
- The method begins at 40. At 41 the method receives from a remote location a stream of imagery data of a remote user, and at 42 the method displays an image of the remote user on a display device, as further detailed hereinabove. In various exemplary embodiments of the invention, the method decodes and/or decompresses the video data before the image is displayed. In some embodiments of the present invention the imagery data stream of the remote user comprises a video stream and range data, and the method reconstructs a three-dimensional image from the data and displays the three-dimensional image on the display device, as further detailed hereinabove. In some embodiments, the method also receives from the remote location an audio stream accompanying the video stream, and transmits the audio stream to an audio output device, as further detailed hereinabove. In various exemplary embodiments of the invention, the method decodes and/or decompresses the audio data before it is output.
- At 43 the method optionally illuminates at least a portion of a local scene in front of the display device, for example, by visible and/or infrared light. At 44, an imaging system, such as
imaging system 24, is used for capturing a stream of imagery data of an individual in the local scene. In some embodiments of the present invention the method captures a video stream and range data, as further detailed hereinabove. When the scene is illuminated by infrared light, the method optionally and preferably captures imaging data in the visible range and in the infrared range. - In various exemplary embodiments of the invention the method receives an audio stream from the individual and/or local scene, and transmits the audio stream to the remote location. Optionally and preferably the method digitizes, encodes and/or compresses the audio stream prior to its transmission.
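As a purely illustrative sketch of the digitize/encode/compress step for the audio stream (not the claimed method; zlib merely stands in for a real audio codec, and the names are hypothetical):

```python
# Hypothetical sketch: pack digitized 16-bit PCM samples and compress them
# prior to transmission; the remote location performs the inverse operation.
import struct
import zlib

def encode_audio(samples):
    """Pack signed 16-bit PCM samples and compress the resulting bytes."""
    raw = struct.pack("<%dh" % len(samples), *samples)
    return zlib.compress(raw)

def decode_audio(payload):
    """Decompress and unpack the payload back into sample values."""
    raw = zlib.decompress(payload)
    return list(struct.unpack("<%dh" % (len(raw) // 2), raw))

frame = [0, 1000, -1000, 32767, -32768]
assert decode_audio(encode_audio(frame)) == frame  # lossless round trip
```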
- At 45 the method extracts the gaze direction and/or the head orientation of the individual using a data processor, and at 46 the method varies a view of the image of the remote user responsively to the gaze direction and/or the head orientation, as further detailed hereinabove. The variation optionally and preferably comprises varying a displayed orientation of the remote user such that a rate of change of the orientation matches a rate of change of the gaze direction and/or the head orientation. In some embodiments of the present invention the displayed orientation of the remote user is rotated oppositely to a change of the gaze direction and/or the head orientation, and in some embodiments of the present invention a fixed orientation of the remote user relative to the gaze direction is maintained.
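The opposite-rotation variant described above can be sketched as follows (an illustrative, non-limiting example; the update function and its parameters are hypothetical):

```python
# Sketch of operation 46: the remote user's displayed orientation (yaw,
# in degrees) is rotated oppositely to the change in the local individual's
# gaze direction, at a matching rate. Names are illustrative only.

def update_displayed_yaw(displayed_yaw, prev_gaze_yaw, gaze_yaw, gain=1.0):
    """Rotate the displayed orientation opposite to the gaze change,
    with the rate of change matched via the gain factor."""
    delta = gaze_yaw - prev_gaze_yaw      # change in gaze direction
    return displayed_yaw - gain * delta   # opposite rotation, matched rate

yaw = 0.0
# gaze sweeps 10 degrees to the right; the display rotates 10 degrees left
yaw = update_displayed_yaw(yaw, prev_gaze_yaw=0.0, gaze_yaw=10.0)
# yaw is now -10.0
```

With gain fixed at 1.0 the rate of change of the displayed orientation exactly matches that of the gaze direction, as the embodiment above describes.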
- At 47 the method optionally and preferably transmits the imagery data of the individual to the remote location. When range data is captured, the method preferably transmits also the range data, optionally and preferably in the form of a depth map, to the remote location.
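A minimal sketch of serializing the captured range data as a depth map for transmission (illustrative only and not the claimed encoding; the 16-bit millimeter representation and all names are assumptions):

```python
# Hypothetical sketch of operation 47: flatten a rows x cols depth map of
# millimeter values, prefix it with its dimensions, and compress it for
# transmission to the remote location.
import struct
import zlib

def pack_depth_map(depth_mm):
    """Serialize a depth map (16-bit millimeter values) into a payload."""
    rows, cols = len(depth_mm), len(depth_mm[0])
    flat = [d for row in depth_mm for d in row]
    header = struct.pack("<HH", rows, cols)
    return header + zlib.compress(struct.pack("<%dH" % len(flat), *flat))

def unpack_depth_map(payload):
    """Inverse operation performed at the remote location."""
    rows, cols = struct.unpack("<HH", payload[:4])
    flat = struct.unpack("<%dH" % (rows * cols), zlib.decompress(payload[4:]))
    return [list(flat[r * cols:(r + 1) * cols]) for r in range(rows)]

dm = [[500, 1200], [0, 65535]]
assert unpack_depth_map(pack_depth_map(dm)) == dm  # lossless round trip
```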
- The method ends at 48.
- The word “exemplary” is used herein to mean “serving as an example, instance or illustration.” Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
- The word “optionally” is used herein to mean “is provided in some embodiments and not provided in other embodiments.” Any particular embodiment of the invention may include a plurality of “optional” features unless such features conflict.
- The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”.
- The term “consisting of” means “including and limited to”.
- The term “consisting essentially of” means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
- As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof.
- Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
- Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases “ranging/ranges between” a first indicated number and a second indicated number and “ranging/ranges from” a first indicated number “to” a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
- It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
- Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
- All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
Claims (39)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/763,840 US10341611B2 (en) | 2013-04-30 | 2014-04-29 | System and method for video conferencing |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361817375P | 2013-04-30 | 2013-04-30 | |
US14/763,840 US10341611B2 (en) | 2013-04-30 | 2014-04-29 | System and method for video conferencing |
PCT/IL2014/050384 WO2014178047A1 (en) | 2013-04-30 | 2014-04-29 | System and method for video conferencing |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150365628A1 true US20150365628A1 (en) | 2015-12-17 |
US10341611B2 US10341611B2 (en) | 2019-07-02 |
Family
ID=51843240
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/763,840 Active US10341611B2 (en) | 2013-04-30 | 2014-04-29 | System and method for video conferencing |
Country Status (2)
Country | Link |
---|---|
US (1) | US10341611B2 (en) |
WO (1) | WO2014178047A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10341611B2 (en) | 2013-04-30 | 2019-07-02 | Inuitive Ltd. | System and method for video conferencing |
JP2019128714A (en) * | 2018-01-23 | 2019-08-01 | シャープ株式会社 | Input display device, input display method and input display program |
US20200092245A1 (en) * | 2018-09-18 | 2020-03-19 | International Business Machines Corporation | Provoking and maintaining user attention for urgent messages by actively monitoring user activity and biometrics |
EP3926442A1 (en) * | 2020-06-19 | 2021-12-22 | Brainbox GmbH | Video conferencing method and video conferencing system |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020063957A1 (en) * | 2000-11-29 | 2002-05-30 | Akira Kakizawa | Viewing stereoscopic image pairs |
US20050248651A1 (en) * | 2004-05-10 | 2005-11-10 | Fuji Xerox Co., Ltd. | Conference recording device, conference recording method, and design method and storage media storing programs |
US20060210045A1 (en) * | 2002-12-30 | 2006-09-21 | Motorola, Inc. | A method system and apparatus for telepresence communications utilizing video avatars |
US7139767B1 (en) * | 1999-03-05 | 2006-11-21 | Canon Kabushiki Kaisha | Image processing apparatus and database |
US20080297589A1 (en) * | 2007-05-31 | 2008-12-04 | Kurtz Andrew F | Eye gazing imaging for video communications |
US20100315329A1 (en) * | 2009-06-12 | 2010-12-16 | Southwest Research Institute | Wearable workspace |
US20110085018A1 (en) * | 2009-10-09 | 2011-04-14 | Culbertson W Bruce | Multi-User Video Conference Using Head Position Information |
US20110273731A1 (en) * | 2010-05-10 | 2011-11-10 | Canon Kabushiki Kaisha | Printer with attention based image customization |
US20120139816A1 (en) * | 2010-12-05 | 2012-06-07 | Ford Global Technologies, Llc | In-vehicle display management system |
US20120147328A1 (en) * | 2010-12-13 | 2012-06-14 | Microsoft Corporation | 3d gaze tracker |
US20120224019A1 (en) * | 2011-03-01 | 2012-09-06 | Ramin Samadani | System and method for modifying images |
US20120274736A1 (en) * | 2011-04-29 | 2012-11-01 | Robinson Ian N | Methods and systems for communicating focus of attention in a video conference |
US8395656B1 (en) * | 2011-01-24 | 2013-03-12 | Hewlett-Packard Development Company, L.P. | Methods and apparatus to direct attention in a video content display |
US20140098179A1 (en) * | 2012-10-04 | 2014-04-10 | Mcci Corporation | Video conferencing enhanced with 3-d perspective control |
US9041915B2 (en) * | 2008-05-09 | 2015-05-26 | Ball Aerospace & Technologies Corp. | Systems and methods of scene and action capture using imaging system incorporating 3D LIDAR |
US20150341619A1 (en) * | 2013-01-01 | 2015-11-26 | Inuitive Ltd. | Method and system for light patterning and imaging |
US20160286209A1 (en) * | 2013-04-21 | 2016-09-29 | Zspace, Inc. | Non-linear Navigation of a Three Dimensional Stereoscopic Display |
Family Cites Families (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3971065A (en) | 1975-03-05 | 1976-07-20 | Eastman Kodak Company | Color imaging array |
US4488172A (en) | 1982-08-18 | 1984-12-11 | Novon, Inc. | Method and apparatus for range imaging |
US5255313A (en) | 1987-12-02 | 1993-10-19 | Universal Electronics Inc. | Universal remote control system |
US4959810A (en) | 1987-10-14 | 1990-09-25 | Universal Electronics, Inc. | Universal remote control device |
US4979815A (en) | 1989-02-17 | 1990-12-25 | Tsikos Constantine J | Laser range imaging system based on projective geometry |
DE4102895C1 (en) * | 1991-01-31 | 1992-01-30 | Siemens Ag, 8000 Muenchen, De | |
US5110203A (en) | 1991-08-28 | 1992-05-05 | The United States Of America As Represented By The Secretary Of The Navy | Three dimensional range imaging system |
US5500671A (en) * | 1994-10-25 | 1996-03-19 | At&T Corp. | Video conference system and method of providing parallax correction and a sense of presence |
US5703677A (en) | 1995-11-14 | 1997-12-30 | The Trustees Of The University Of Pennsylvania | Single lens range imaging method and apparatus |
US5838428A (en) | 1997-02-28 | 1998-11-17 | United States Of America As Represented By The Secretary Of The Navy | System and method for high resolution range imaging with split light source and pattern mask |
WO1999006956A1 (en) | 1997-07-29 | 1999-02-11 | Koninklijke Philips Electronics N.V. | Method of reconstruction of tridimensional scenes and corresponding reconstruction device and decoding system |
US6421132B1 (en) | 1999-10-15 | 2002-07-16 | Vladimir M. Brajovic | Method and apparatus for rapid range imaging |
US6806898B1 (en) * | 2000-03-20 | 2004-10-19 | Microsoft Corp. | System and method for automatically adjusting gaze and head orientation for video conferencing |
US6349174B1 (en) | 2000-05-17 | 2002-02-19 | Eastman Kodak Company | Method and apparatus for a color scannerless range imaging system |
US6456793B1 (en) | 2000-08-03 | 2002-09-24 | Eastman Kodak Company | Method and apparatus for a color scannerless range imaging system |
US6504479B1 (en) | 2000-09-07 | 2003-01-07 | Comtrak Technologies Llc | Integrated security system |
US6584283B2 (en) | 2001-02-02 | 2003-06-24 | Eastman Kodak Company | LED illumination device for a scannerless range imaging system |
US7194112B2 (en) | 2001-03-12 | 2007-03-20 | Eastman Kodak Company | Three dimensional spatial panorama formation with a range imaging system |
US6823076B2 (en) | 2001-07-20 | 2004-11-23 | Eastman Kodak Company | Method for embedding digital information in a three dimensional image from a scannerless range imaging system |
US6507706B1 (en) | 2001-07-27 | 2003-01-14 | Eastman Kodak Company | Color scannerless range imaging system using an electromechanical grating |
US6937742B2 (en) | 2001-09-28 | 2005-08-30 | Bellsouth Intellectual Property Corporation | Gesture activated home appliance |
US7515173B2 (en) * | 2002-05-23 | 2009-04-07 | Microsoft Corporation | Head pose tracking system |
US6925195B2 (en) | 2002-05-28 | 2005-08-02 | Eastman Kodak Company | Stabilization of three-dimensional images in a scannerless range imaging system |
US7464035B2 (en) | 2002-07-24 | 2008-12-09 | Robert Bosch Corporation | Voice control of home automation systems via telephone |
US6856382B2 (en) | 2003-02-06 | 2005-02-15 | Eastman Kodak Company | Formation of three-dimensional video sequences with a scannerless range imaging system |
JP4532856B2 (en) | 2003-07-08 | 2010-08-25 | キヤノン株式会社 | Position and orientation measurement method and apparatus |
US7532230B2 (en) * | 2004-01-29 | 2009-05-12 | Hewlett-Packard Development Company, L.P. | Method and system for communicating gaze in an immersive virtual environment |
US7324687B2 (en) | 2004-06-28 | 2008-01-29 | Microsoft Corporation | Color segmentation-based stereo 3D reconstruction system and process |
US7583777B2 (en) | 2004-07-21 | 2009-09-01 | General Electric Company | Method and apparatus for 3D reconstruction of images |
US7438414B2 (en) | 2005-07-28 | 2008-10-21 | Outland Research, Llc | Gaze discriminating electronic control apparatus, system, method and computer program product |
JP5001286B2 (en) | 2005-10-11 | 2012-08-15 | プライム センス リミティド | Object reconstruction method and system |
US9075441B2 (en) | 2006-02-08 | 2015-07-07 | Oblong Industries, Inc. | Gesture based control using three-dimensional information extracted over an extended depth of field |
CN101427155B (en) | 2006-04-21 | 2011-09-28 | 法罗技术股份有限公司 | Camera based six degree-of-freedom target measuring and target tracking device with rotatable mirror |
US8249334B2 (en) | 2006-05-11 | 2012-08-21 | Primesense Ltd. | Modeling of humanoid forms from depth maps |
US8077914B1 (en) | 2006-08-07 | 2011-12-13 | Arkady Kaplan | Optical tracking apparatus using six degrees of freedom |
NL1033310C2 (en) | 2007-01-31 | 2008-08-01 | Nuon Retail B V | Method and device for energy saving. |
WO2008141460A1 (en) | 2007-05-23 | 2008-11-27 | The University Of British Columbia | Methods and apparatus for estimating point-of-gaze in three dimensions |
US7676145B2 (en) | 2007-05-30 | 2010-03-09 | Eastman Kodak Company | Camera configurable for autonomous self-learning operation |
WO2010102037A2 (en) | 2009-03-03 | 2010-09-10 | The Ohio State University | Gaze tracking measurement and training system and method |
US20110103643A1 (en) | 2009-11-02 | 2011-05-05 | Kenneth Edward Salsman | Imaging system with integrated image preprocessing capabilities |
JP5613025B2 (en) | 2009-11-18 | 2014-10-22 | パナソニック株式会社 | Gaze detection apparatus, gaze detection method, electrooculogram measurement apparatus, wearable camera, head mounted display, electronic glasses, and ophthalmologic diagnosis apparatus |
WO2011148366A1 (en) * | 2010-05-26 | 2011-12-01 | Ramot At Tel-Aviv University Ltd. | Method and system for correcting gaze offset |
US8542320B2 (en) | 2010-06-17 | 2013-09-24 | Sony Corporation | Method and system to control a non-gesture controlled device using gesture interactions with a gesture controlled device |
US8861800B2 (en) | 2010-07-19 | 2014-10-14 | Carnegie Mellon University | Rapid 3D face reconstruction from a 2D image and methods using such rapid 3D face reconstruction |
US8823769B2 (en) * | 2011-01-05 | 2014-09-02 | Ricoh Company, Ltd. | Three-dimensional video conferencing system with eye contact |
US8510166B2 (en) | 2011-05-11 | 2013-08-13 | Google Inc. | Gaze tracking system |
US8885877B2 (en) | 2011-05-20 | 2014-11-11 | Eyefluence, Inc. | Systems and methods for identifying gaze tracking scene reference locations |
CN102255689B (en) | 2011-07-08 | 2018-05-04 | 中兴通讯股份有限公司 | A kind of processing method of channel condition information, apparatus and system |
EP2570079B1 (en) | 2011-09-13 | 2017-06-14 | Pie Medical Imaging BV | Method and apparatus for determining optimal 3D reconstruction of an object |
US9609217B2 (en) | 2011-11-02 | 2017-03-28 | Mediatek Inc. | Image-based motion sensor and related multi-purpose camera system |
US9503713B2 (en) | 2011-11-02 | 2016-11-22 | Intuitive Surgical Operations, Inc. | Method and system for stereo gaze tracking |
US8976224B2 (en) * | 2012-10-10 | 2015-03-10 | Microsoft Technology Licensing, Llc | Controlled three-dimensional communication endpoint |
US20150317516A1 (en) | 2012-12-05 | 2015-11-05 | Inuitive Ltd. | Method and system for remote controlling |
WO2014132259A1 (en) | 2013-02-27 | 2014-09-04 | Inuitive Ltd. | Method and system for correlating gaze information |
US10341611B2 (en) | 2013-04-30 | 2019-07-02 | Inuitive Ltd. | System and method for video conferencing |
US10782657B2 (en) | 2014-05-27 | 2020-09-22 | Ultrahaptics IP Two Limited | Systems and methods of gestural interaction in a pervasive computing environment |
Cited By (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11796799B2 (en) | 2014-01-21 | 2023-10-24 | Mentor Acquisition One, Llc | See-through computer display systems |
US10001644B2 (en) | 2014-01-21 | 2018-06-19 | Osterhout Group, Inc. | See-through computer display systems |
US9971156B2 (en) | 2014-01-21 | 2018-05-15 | Osterhout Group, Inc. | See-through computer display systems |
US11099380B2 (en) | 2014-01-21 | 2021-08-24 | Mentor Acquisition One, Llc | Eye imaging in head worn computing |
US10007118B2 (en) | 2014-01-21 | 2018-06-26 | Osterhout Group, Inc. | Compact optical system with improved illumination |
US11002961B2 (en) | 2014-01-21 | 2021-05-11 | Mentor Acquisition One, Llc | See-through computer display systems |
US10012838B2 (en) | 2014-01-21 | 2018-07-03 | Osterhout Group, Inc. | Compact optical system with improved contrast uniformity |
US10890760B2 (en) | 2014-01-21 | 2021-01-12 | Mentor Acquisition One, Llc | See-through computer display systems |
US10191284B2 (en) | 2014-01-21 | 2019-01-29 | Osterhout Group, Inc. | See-through computer display systems |
US10222618B2 (en) | 2014-01-21 | 2019-03-05 | Osterhout Group, Inc. | Compact optics with reduced chromatic aberrations |
US11650416B2 (en) | 2014-01-21 | 2023-05-16 | Mentor Acquisition One, Llc | See-through computer display systems |
US10012840B2 (en) | 2014-01-21 | 2018-07-03 | Osterhout Group, Inc. | See-through computer display systems |
US20150309314A1 (en) * | 2014-01-21 | 2015-10-29 | Osterhout Group, Inc. | See-through computer display systems |
US10481393B2 (en) | 2014-01-21 | 2019-11-19 | Mentor Acquisition One, Llc | See-through computer display systems |
US10877270B2 (en) | 2014-06-05 | 2020-12-29 | Mentor Acquisition One, Llc | Optical configurations for head-worn see-through displays |
US11960089B2 (en) | 2014-06-05 | 2024-04-16 | Mentor Acquisition One, Llc | Optical configurations for head-worn see-through displays |
US11402639B2 (en) | 2014-06-05 | 2022-08-02 | Mentor Acquisition One, Llc | Optical configurations for head-worn see-through displays |
US10564426B2 (en) | 2014-07-08 | 2020-02-18 | Mentor Acquisition One, Llc | Optical configurations for head-worn see-through displays |
US11940629B2 (en) | 2014-07-08 | 2024-03-26 | Mentor Acquisition One, Llc | Optical configurations for head-worn see-through displays |
US10775630B2 (en) | 2014-07-08 | 2020-09-15 | Mentor Acquisition One, Llc | Optical configurations for head-worn see-through displays |
US11409110B2 (en) | 2014-07-08 | 2022-08-09 | Mentor Acquisition One, Llc | Optical configurations for head-worn see-through displays |
US10908422B2 (en) | 2014-08-12 | 2021-02-02 | Mentor Acquisition One, Llc | Measuring content brightness in head worn computing |
US11630315B2 (en) | 2014-08-12 | 2023-04-18 | Mentor Acquisition One, Llc | Measuring content brightness in head worn computing |
US11360314B2 (en) | 2014-08-12 | 2022-06-14 | Mentor Acquisition One, Llc | Measuring content brightness in head worn computing |
US10078224B2 (en) | 2014-09-26 | 2018-09-18 | Osterhout Group, Inc. | See-through computer display systems |
US9743040B1 (en) * | 2015-12-03 | 2017-08-22 | Symantec Corporation | Systems and methods for facilitating eye contact during video conferences |
US11226691B2 (en) | 2016-05-09 | 2022-01-18 | Mentor Acquisition One, Llc | User interface systems for head-worn computers |
US11320656B2 (en) | 2016-05-09 | 2022-05-03 | Mentor Acquisition One, Llc | User interface systems for head-worn computers |
US10684478B2 (en) | 2016-05-09 | 2020-06-16 | Mentor Acquisition One, Llc | User interface systems for head-worn computers |
US11500212B2 (en) | 2016-05-09 | 2022-11-15 | Mentor Acquisition One, Llc | User interface systems for head-worn computers |
US11977238B2 (en) | 2016-06-01 | 2024-05-07 | Mentor Acquisition One, Llc | Modular systems for head-worn computers |
US11460708B2 (en) | 2016-06-01 | 2022-10-04 | Mentor Acquisition One, Llc | Modular systems for head-worn computers |
US11754845B2 (en) | 2016-06-01 | 2023-09-12 | Mentor Acquisition One, Llc | Modular systems for head-worn computers |
US11022808B2 (en) | 2016-06-01 | 2021-06-01 | Mentor Acquisition One, Llc | Modular systems for head-worn computers |
US11586048B2 (en) | 2016-06-01 | 2023-02-21 | Mentor Acquisition One, Llc | Modular systems for head-worn computers |
US10534180B2 (en) | 2016-09-08 | 2020-01-14 | Mentor Acquisition One, Llc | Optical systems for head-worn computers |
US11604358B2 (en) | 2016-09-08 | 2023-03-14 | Mentor Acquisition One, Llc | Optical systems for head-worn computers |
US11366320B2 (en) | 2016-09-08 | 2022-06-21 | Mentor Acquisition One, Llc | Optical systems for head-worn computers |
US11971554B2 (en) | 2017-07-24 | 2024-04-30 | Mentor Acquisition One, Llc | See-through computer display systems with stray light management |
US11668939B2 (en) | 2017-07-24 | 2023-06-06 | Mentor Acquisition One, Llc | See-through computer display systems with stray light management |
US11789269B2 (en) | 2017-07-24 | 2023-10-17 | Mentor Acquisition One, Llc | See-through computer display systems |
US11042035B2 (en) | 2017-07-24 | 2021-06-22 | Mentor Acquisition One, Llc | See-through computer display systems with adjustable zoom cameras |
US11409105B2 (en) | 2017-07-24 | 2022-08-09 | Mentor Acquisition One, Llc | See-through computer display systems |
US11550157B2 (en) | 2017-07-24 | 2023-01-10 | Mentor Acquisition One, Llc | See-through computer display systems |
US11567328B2 (en) | 2017-07-24 | 2023-01-31 | Mentor Acquisition One, Llc | See-through computer display systems with adjustable zoom cameras |
US10578869B2 (en) | 2017-07-24 | 2020-03-03 | Mentor Acquisition One, Llc | See-through computer display systems with adjustable zoom cameras |
US11960095B2 (en) | 2017-07-24 | 2024-04-16 | Mentor Acquisition One, Llc | See-through computer display systems |
US11226489B2 (en) | 2017-07-24 | 2022-01-18 | Mentor Acquisition One, Llc | See-through computer display systems with stray light management |
US10422995B2 (en) | 2017-07-24 | 2019-09-24 | Mentor Acquisition One, Llc | See-through computer display systems with stray light management |
US11500207B2 (en) | 2017-08-04 | 2022-11-15 | Mentor Acquisition One, Llc | Image expansion optic for head-worn computer |
US11947120B2 (en) | 2017-08-04 | 2024-04-02 | Mentor Acquisition One, Llc | Image expansion optic for head-worn computer |
US10969584B2 (en) | 2017-08-04 | 2021-04-06 | Mentor Acquisition One, Llc | Image expansion optic for head-worn computer |
US10785468B2 (en) * | 2018-05-06 | 2020-09-22 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Communication methods and systems, electronic devices, servers, and readable storage media |
US20190342541A1 (en) * | 2018-05-06 | 2019-11-07 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Communication methods and systems, electronic devices, servers, and readable storage media |
US10728526B2 (en) * | 2018-05-06 | 2020-07-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Communication methods and systems, electronic devices, and readable storage media |
US11595635B2 (en) | 2018-05-06 | 2023-02-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Communication methods and systems, electronic devices, servers, and readable storage media |
US20190342542A1 (en) * | 2018-05-06 | 2019-11-07 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Communication methods and systems, electronic devices, and readable storage media |
US10619787B1 (en) * | 2018-08-29 | 2020-04-14 | Facebook, Inc. | Mounting systems for a video-conferencing device, video conferencing systems, and related methods |
US11823698B2 (en) * | 2020-01-17 | 2023-11-21 | Audiotelligence Limited | Audio cropping |
US20210358514A1 (en) * | 2020-01-17 | 2021-11-18 | Audiotelligence Limited | Audio cropping |
US11257511B1 (en) * | 2021-01-05 | 2022-02-22 | Dell Products L.P. | Voice equalization based on face position and system therefor |
Also Published As
Publication number | Publication date |
---|---|
WO2014178047A1 (en) | 2014-11-06 |
US10341611B2 (en) | 2019-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10341611B2 (en) | | System and method for video conferencing |
US11580697B2 (en) | | Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas |
US11354840B2 (en) | | Three dimensional acquisition and rendering |
US10880582B2 (en) | | Three-dimensional telepresence system |
US10559126B2 (en) | | 6DoF media consumption architecture using 2D video decoder |
KR102417177B1 (en) | | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking |
US11025882B2 (en) | | Live action volumetric video compression/decompression and playback |
JP5654138B2 (en) | | Hybrid reality for 3D human machine interface |
US8928659B2 (en) | | Telepresence systems with viewer perspective adjustment |
US8134555B2 (en) | | Acquisition of surface normal maps from spherical gradient illumination |
Gotsch et al. | | TeleHuman2: A Cylindrical Light Field Teleconferencing System for Life-size 3D Human Telepresence |
US9380263B2 (en) | | Systems and methods for real-time view-synthesis in a multi-camera setup |
US11710273B2 (en) | | Image processing |
Vasudevan et al. | | A methodology for remote virtual interaction in teleimmersive environments |
Lien et al. | | Skeleton-based data compression for multi-camera tele-immersion system |
JP2014086774A (en) | | Video communication system and video communication method |
US20230288622A1 (en) | | Imaging processing system and 3D model generation method |
WO2014132259A1 (en) | | Method and system for correlating gaze information |
Kjeldskov et al. | | Eye contact over video |
Caviedes et al. | | Combining computer vision and video processing to achieve immersive mobile videoconferencing |
Ekstrand et al. | | High-resolution |
Nashel | | Rendering and display for multi-viewer tele-immersion |
FR3043295A1 (en) | | Spatial augmented reality device for an office environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: INUITIVE LTD., ISRAEL. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BEN-BASSAT, DAVID;REEL/FRAME:036548/0521. Effective date: 20140115 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 4 |