US20100079576A1 - Display system and method - Google Patents
- Publication number
- US20100079576A1 (application Ser. No. 12/567,436)
- Authority
- US
- United States
- Prior art keywords
- viewer
- vdu
- image
- reflector
- receptionist
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/142—Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
- H04N7/144—Constructional details of the terminal equipment, e.g. arrangements of the camera and the display camera and display on the same optical axis, e.g. optically multiplexing the camera and display for eye to eye contact
- This invention relates to a display system and method, and in particular, such a system and method allowing persons at remote locations to see each other in a virtual environment.
- Face-to-face communication is the most direct and common way of communication among people. Facial expressions like eye contact, smiling, anger, and emotional gestures are all conveyed effectively through this process. However, such a process requires that the persons engaged in such communications be physically located at the same place.
- VDU visual display unit
- a party to the communication sees his/her partner through the screen of the VDU, which is an impersonal, face-to-machine arrangement.
- a display system including at least first and second visual display means (VDM's), each being adapted to display at least one visual image for viewing; at least a first image capturing device adapted to capture at least one image of a first individual viewing said first VDM; at least second and third image capturing devices each adapted to capture, each from a different angle, at least one image of a second individual viewing said second VDM; wherein said first image capturing device is connectable with said second VDM for transmitting said captured image to said second VDM for display; wherein said first VDM is connectable with either of said second and third image capturing devices for display of said image captured by either of said second and third image capturing devices; means for identifying the position of a reference point of the captured image of said first individual against a pre-determined reference background; and means for selectively connecting said first VDM with said second image capturing device or said third image capturing device in accordance with the position of said reference point of said first individual as identified by said identifying means.
- VDM's visual display means
- a display method including the steps of (a) capturing at least one image of a first individual; (b) displaying the captured image of said first individual to a second individual; (c) capturing images of said second individual from at least a first angle and a second angle which are different from each other; (d) identifying the position of a reference point of the captured image of said first individual against a pre-determined reference background; and (e) selectively displaying the image captured from said first angle or from said second angle, in accordance with the identified position of said reference point of said first individual.
- a visual display apparatus including a visual display unit engaged with a support, said support including a closable opening; a reflector movable relative to said support between a first position in which said reflector substantially closes said opening and a second position in which said opening is open and images displayed by said visual display unit are reflectable by said reflector for viewing; wherein an end of said reflector is slidably and swivellably movable relative to said support for movement between said first and second positions.
- FIG. 1A shows a video camera, being part of a display system according to the present invention, monitoring a user of the system;
- FIG. 1B is an image of the user captured by the video camera in FIG. 1A, as displayed on a visual display unit (VDU);
- FIG. 2 shows the different views of a receptionist as perceived by a viewer at different positions and angles
- FIG. 3A is a schematic diagram of a basic arrangement in the display system according to the present invention.
- FIG. 3B is a top view of the image displayed in the VDU in FIG. 3A ;
- FIG. 4 is a schematic diagram of the display system according to the present invention.
- FIG. 5 shows a VDU array in front of the receptionist in a control room, in a discrete mode of operation
- FIG. 6 shows the VDU array in FIG. 5 in an integrated mode of operation
- FIG. 7 shows an array of VDU's, each displaying the image of a different viewer site
- FIG. 8 shows the connection of video cameras at different viewer sites with the display system according to the present invention
- FIG. 9 shows the relationship between the positioning of the viewer in a viewer site and the positioning of video cameras in a control room
- FIG. 10 shows a top view of a receptionist in front of the array of VDU's shown in FIG. 6 ;
- FIG. 11 shows connection of the cameras in the control room with the viewer sites
- FIG. 12 is a schematic diagram of an “e-Conferencing” system according to the present invention.
- FIG. 13 shows part of an “e-Conferencing” system for four participants
- FIG. 14 is a more detailed schematic diagram of the arrangement of a four-party e-Conference using a display system according to the present invention.
- FIG. 15A shows the sitting plan of an exemplary conference
- FIG. 15B shows the arrangement of the virtual participants in one room in an e-Conference simulating the sitting plan of FIG. 15A ;
- FIG. 15C shows the arrangement of the virtual participants in another room in an e-Conference simulating the sitting plan of FIG. 15A ;
- FIG. 16 is a schematic diagram of the application of a display system according to the present invention as an “e-Theatre”;
- FIG. 17 is a schematic diagram of the application of a display system according to the present invention as a “Stereo Television”;
- FIG. 18 is a side view of a display unit in an in-use configuration
- FIG. 19 is a side view of the display unit shown in FIG. 18 in a closed configuration
- FIG. 20 is a schematic diagram of the basic hardware design of a viewer site of a display system according to the present invention.
- FIG. 21 is a schematic diagram of the connection of viewer sites and the control room installed in an “e-Receptionist” application of a display system according to the present invention.
- FIG. 22A is a partial side view of a display unit according to the present invention in an in-use configuration
- FIG. 22B is a sectional view of the display unit taken along the line B-B in FIG. 22A ;
- FIG. 22C is a sectional view of the display unit taken along the line C-C in FIG. 22B , with the display unit in a not-in-use configuration;
- FIG. 22D is a top view of the display unit shown in FIG. 22C ;
- FIG. 22E is a sectional view taken along the line E-E in FIG. 22D ;
- FIG. 22F is a sectional view taken along the line F-F in FIG. 22E ;
- FIG. 23 is an enlarged view of the movable revolutionary joint shown in FIG. 22C , with the plate in a closed position;
- FIG. 24 is a view corresponding to FIG. 23, with the plate in an open position;
- FIG. 25 shows movement of the plate from a closed position to a fully open position
- FIG. 26 shows the construction of the semi-transparent plate of a display unit according to the present invention
- FIG. 27A is a top view of the mounting chassis parts of a further alternative display unit of a display system according to the present invention.
- FIG. 27B is a sectional view taken along line G-G of FIG. 27A ;
- FIG. 28 is a top perspective view showing use of the display system shown in FIGS. 22A to 22F ;
- FIG. 29 is a side view of the display system shown in FIG. 28 ;
- FIG. 30 shows how the angle of inclination of the plate and that of the VDU are calculated
- FIG. 31 shows the positioning of a viewer in front of a digital video camera forming part of the display system according to the present invention
- FIG. 32 shows various images of a viewer in the capture window by the digital video camera in FIG. 31 ;
- FIG. 33 shows how the head position of the viewer is determined
- FIG. 34 is a top view of a control room with an array of VDU's
- FIG. 35 is a side view of the control room shown in FIG. 34 ;
- FIG. 36 shows schematically the connection topology of the digital cameras
- FIG. 37 shows the viewer site and the control room
- FIG. 38 shows the same viewer site and control room as in FIG. 37 , but for illustrating the determination of viewing angles
- FIG. 39 is a top view of an e-Receptionist application of a display unit according to the present invention.
- FIG. 40 shows various views of the viewer when captured in the capture window of the digital video camera in FIG. 39 ;
- FIG. 41 is a top view of an e-Receptionist application of a display unit according to the present invention.
- FIG. 42 is a simplified side view of FIG. 41 ;
- FIG. 43 shows a simplified architecture of a 3G mobile phone
- FIG. 44 shows a video wall module of a display system according to the present invention, adopting a modified 3G mobile phone architecture
- FIG. 45 shows the use of a 3G mobile phone as part of display unit of a viewer site
- FIG. 46 shows various remote site modules distributed at geographically remote locations.
- FIG. 47 shows an array of video wall modules (VWM's) forming an array of VDU's and digital video cameras.
- VWM's video wall modules
- at a long viewing distance, the viewing distance dominates the object's depth, so that the object seems flat.
- at a short viewing distance, the object's depth becomes significant, so one can perceive the object strongly through the parallax effect.
- a video camera 10 being part of a display system according to the present invention, captures the head position of a person 12 using such a system.
- the captured image is displayed on a visual display unit 14 , e.g. a screen with a 4:3 aspect ratio.
- As shown in FIG. 2, when a viewer facing a female receptionist 18 moves his/her own head among the three positions 16 a , 16 b , 16 c , he/she should see different views of the face of the receptionist 18 .
- When at the position 16 a , the viewer should see more of the right side of the face of the receptionist 18 , as shown in 18 a ; when at the position 16 b , the viewer should see the front view of the receptionist, as in 18 b ; and when at the position 16 c , the viewer should see more of the left side of the face of the receptionist 18 , as shown in 18 c.
- FIG. 3A is a schematic diagram of a basic arrangement in a display system according to the present invention.
- a VDU 24 e.g. a monitor or a television set
- the image 20 displayed by the VDU 24 is reflected by a semi-transparent plate or mirror 26 positioned before a male viewer 28 , and inclined at an angle, e.g. 45°, to the horizontal.
- the viewer 28 will perceive an image 30 of the receptionist (called the “virtual receptionist”), as reflected by the plate 26 , floating in the air, aligned with his own line of sight, and merged with the environment.
- the VDU 24 is positioned within a dark enclosure 32 , hidden from the line of sight of the viewer 28 , and with an opening 34 allowing light from the VDU 24 to pass through.
- a video camera 22 is positioned above the viewer 28 for determining the head position of the viewer 28 , to be discussed below.
- Another method is to use a video camera 22 to capture the image of the viewer 28 .
- a static background scene was recorded and stored in the reference memory of the camera 22 of the system.
- Objects moving in front of the camera 22 will be detected and the resultant moving images will be compared with the static background image and by simple calculations, the position of the moving object, e.g. a head of the viewer, can be determined.
- a head search algorithm is used for finding objects moving in front of the cameras;
- a face masking algorithm is used for sending object images to reduce the bandwidth requirement; and
- contrast normalization is used for balancing object images with the background screen.
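The background-comparison step described above can be sketched in Python. This is a minimal illustration only; the difference threshold and the centroid heuristic for locating the head are assumptions, not taken from the patent text:

```python
def locate_head(frame, background, threshold=30):
    """Estimate the position of a moving object (e.g. the viewer's head)
    by comparing the current frame against a stored static background,
    pixel by pixel.

    frame, background: 2-D lists of gray values (0-255) of equal shape.
    Returns the centroid (row, col) of the changed pixels, or None if
    nothing moved.
    """
    rows, cols, count = 0, 0, 0
    for r, (frow, brow) in enumerate(zip(frame, background)):
        for c, (f, b) in enumerate(zip(frow, brow)):
            if abs(f - b) > threshold:  # this pixel differs: a "moving" pixel
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    # The centroid of the moving pixels approximates the head position.
    return rows // count, cols // count
```

A simple centroid is enough here because, as the text notes, high accuracy is not needed for locating the head position.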
- the image data captured by the digital cameras are in the form of a pixel matrix, and each pixel combines three basic colour values of red, green and blue.
- Normally, only one station is selected by the master receptionist at the central control room as the active full-colour screen, which consumes almost half of the total 6 Mbps bandwidth, and the remaining stations will be in the mode of passive black-and-white images and share the remaining bandwidth.
- the ratio of bandwidth sharing determines the quality of display required. If a higher colour resolution is required, the black-and-white quality will be reduced.
- the basic formula to convert coloured red, green and blue values to a gray value is:
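The formula itself is not reproduced in this extract. A common grayscale conversion uses the ITU-R BT.601 luma weights; treating these as the intended coefficients is an assumption:

```python
def rgb_to_gray(r, g, b):
    """Convert an RGB pixel to a single gray value.

    The weights are the standard ITU-R BT.601 luma coefficients
    (an assumption here, since the patent's exact formula is not
    reproduced in this extract).
    """
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```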
- the static pixel buffer is set to remember the highest probability of gray value occurrence within a certain period of time (e.g. x seconds) and stores that value in the static buffer memory.
- the dynamic pixel buffer stores each pixel in memory where the probability of gray value occurrence is random within x seconds.
- the time duration x seconds determines the refresh rate of the system, which also depends on the available bandwidth for data transmission. The higher the bandwidth, the higher the refresh rate can be.
- Once a static background image is established, the subsequently captured images will be compared with it pixel by pixel. If one pixel is found to be the same as that in the corresponding static buffer, it increases the probability of occurrence at that gray value, and the probability value is constantly updated until a constant static background image is found. The longer a particular pixel keeps the same gray value, the higher its stability. Any intermittent changes in that pixel value are regarded only as noise or as a dynamic pixel. The whole picture image will then be sent to the master control room at a very low refresh rate to minimize the bandwidth requirement.
- If one pixel is found to be different from the corresponding static buffer, its value will be stored in the dynamic buffer. With the information on area (obtained by counting the number of dynamic pixels) and shape (by pattern matching), one could predict whether the object is a human face or not. Although the prediction may not be very accurate, it is sufficient for the purpose of locating the head position. As the size of these dynamic pixel images is much smaller than the whole screen picture, they need less bandwidth for transmission and the refresh rate can be higher.
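The static/dynamic pixel-buffer scheme of the two preceding passages can be sketched as follows. The class name, the gray-value tolerance, and the rule for adopting a background value are illustrative assumptions:

```python
class PixelBuffer:
    """Minimal sketch of the static and dynamic pixel buffers.

    Each pixel tracks a static (background) gray value plus a count of
    how often that value recurs ("stability").  A matching pixel raises
    the stability; a mismatching pixel is routed to the dynamic buffer
    as a possible moving object or noise.
    """

    def __init__(self, width, height):
        self.static = [[0] * width for _ in range(height)]     # background grays
        self.stability = [[0] * width for _ in range(height)]  # occurrence counts

    def update(self, frame, tolerance=5):
        """Compare a frame against the static buffer; return the dynamic
        pixels as (row, col, gray) tuples."""
        dynamic = []
        for r, row in enumerate(frame):
            for c, gray in enumerate(row):
                if abs(gray - self.static[r][c]) <= tolerance:
                    # Same as background: raise the probability of occurrence.
                    self.stability[r][c] += 1
                elif self.stability[r][c] == 0:
                    # No established background yet: adopt this value.
                    self.static[r][c] = gray
                else:
                    # Intermittent change: noise or a moving object.
                    dynamic.append((r, c, gray))
        return dynamic
```

Counting the dynamic pixels gives the "area" information mentioned above; their layout could feed the shape-matching step.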
- With the moving object's size and shape matched against the predefined threshold values, it is assumed to be a human face looking at the camera.
- the static background image with a very low refresh rate and the dynamic moving-object image with a higher refresh rate are sent separately to the central control room and combined on the big screen matrix. If the available bandwidth is not enough for this purpose, the visual quality (resolution) of the moving-object image may be reduced. This will not have a significant effect so long as the image can still be recognized by the master receptionist at the central control room as a moving human face.
- Another way to further reduce the bandwidth requirement is to convert the moving object image into a very simple object shape, by eliminating all image details and keeping only the contour information. This is acceptable because this image is treated as a preview image for the purpose of selection only. An image of a higher resolution may be displayed once this particular station is chosen/designated as the active station.
- the camera 22 also serves as a viewing device for the viewer 28 , connecting his/her image through a data link to a control room of the system, and presenting the image of the viewer 28 on the video wall in front of the master receptionist (to be discussed below) for further manipulation.
- the size, location and face details of the virtual receptionist are preferably correlated in the same way as presented in a real situation, although this is not strictly necessary.
- the size of an adult human face is roughly the same for everybody. It is thus possible to adjust the size of the display monitor screen so that the size of the reflected image in the air is close to the size of a human face, with the reflected image details, colour and contrast closely resembling those of a real receptionist.
- the location of the reflected image can be adjusted by carefully positioning the level of the display monitor screen relative to the reflective plate. For example, if the distance between the topmost part of the reflected image and the top of the display monitor is equal or close to the horizontal distance between the top of the image and the reflective plate, a realistic image close to a real-life situation will be provided.
- the receptionist 18 is situate at a control room R, which is remote from a site S where the viewer 28 is located.
- In front of the receptionist 18 in the control room R are a number of video cameras 38 a , 38 b , 38 c , each capturing a different face image of the receptionist 18 .
- the video cameras 38 a , 38 b , 38 c are connected with the screen 24 via a video multiplexer 40 .
- the camera 22 is situate at the site S for determining the position of the viewer 28 .
- the switch SW of the video multiplexer 40 will connect the VDU 24 with the video camera 38 a , thus allowing the image 30 of the receptionist 18 as captured by the video camera 38 a to be displayed by the VDU 24 .
- signals detected by the video camera 22 will cause the switch SW of the video multiplexer 40 to connect the VDU 24 with the video camera 38 c , whereupon the image 30 of the receptionist as displayed by the VDU 24 , and thus as perceived by the viewer 28 , will show more of the left side of the face of the receptionist 18 .
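The multiplexer switching described above amounts to mapping the viewer's head position within the capture window to one of the cameras 38 a-c. A sketch, in which the equal-thirds split of the window and the position-to-camera pairing are illustrative assumptions:

```python
def select_camera(head_x, frame_width, cameras=("38a", "38b", "38c")):
    """Emulate the switch SW of the video multiplexer 40: map the
    viewer's horizontal head position in the capture window to one of
    the cameras facing the receptionist.  The equal-thirds split is an
    illustrative assumption, not taken from the patent.
    """
    third = frame_width / 3
    if head_x < third:
        return cameras[0]      # viewer toward one side of the window
    if head_x < 2 * third:
        return cameras[1]      # viewer centred: front view of receptionist
    return cameras[2]          # viewer toward the other side
```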
- VDU's 42 e.g. monitors or television sets
- VDU's 42 are arranged in a wall-like manner in front of the receptionist 18 in the control room R.
- the VDU's 42 may be arranged as an 8×8 array, with video cameras 38 placed at each intersection point. The resolution of the three-dimensional effect will be determined by the size of the array.
- the image of the viewer 28 as captured by the video camera 22 will be displayed on this VDU array in either a discrete manner or in an integrated manner.
- each of the VDU's 42 may show the image of a different viewer site S, as in FIG. 5 .
- a maximum of sixty-four viewers can be displayed on the array at one time. If there are more than sixty-four viewer sites S, the receptionist 18 may switch a page at a time by operating on her own control panel. In the integrated mode, all the VDU's 42 forming the array may collectively show the image of one viewer site S only, as in FIG. 6 .
- each page of the viewer sites S represents the images captured by a total of sixty-four video cameras 22 , each located at a different viewer site S. These sixty-four viewer sites S also form an 8×8 array of cameras 22 , as shown in FIG. 7 .
- the 8×8 cameras 22 thus also form an array, as shown in FIG. 8 .
- Video data from the cameras 22 pass through a video cross bar switch 48 to the control room R for further manipulation.
- In the discrete mode of operation, to display all the images of the viewers 28 , each camera 22 has to be assigned a unique identifying address, e.g. A1 to H8. It is clear that a high-speed scanning mechanism is necessary in order to have a reasonable refresh rate for each VDU 42 in real-time operation.
- a screen update rate down to 10 Hz is acceptable, which means 100 ms per frame for each individual station.
- it requires roughly a minimum bandwidth of 200 Mbps for a 640×480×256-colour display resolution and a 10:1 data compression. This calculated value can be met with present network bandwidths of 100 Mbps to 1 Gbps.
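One reading of how this figure arises, treating "256 colour" as 8 bits per pixel and summing over the sixty-four stations of the 8×8 array; this decomposition is our interpretation, not stated explicitly in the text:

```python
# Rough reconstruction of the bandwidth estimate.  Per-station
# parameters come from the text; the 8 bits/pixel reading and the
# multiplication by 64 stations are interpretive assumptions.
pixels_per_frame = 640 * 480
bits_per_pixel = 8            # "256 colour" read as 8 bits per pixel
frames_per_second = 10        # 10 Hz, i.e. 100 ms per frame
compression = 10              # 10:1 data compression

per_station_bps = pixels_per_frame * bits_per_pixel * frames_per_second / compression
total_bps = per_station_bps * 64   # 64 viewer sites

print(round(per_station_bps / 1e6, 2), "Mbps per station")
print(round(total_bps / 1e6, 1), "Mbps for 64 stations")
```

This lands in the region of 160 Mbps, of the same order as the "roughly 200 Mbps" quoted above.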
- There are several ways to reduce the bandwidth requirement. Firstly, if one is willing to sacrifice the display quality in the “discrete” real-time mode of operation, a “priority scanning” technique can be used, for which the bandwidth may be as low as 6 Mbps. This requirement can easily be met in the existing Internet broadband environment.
- Priority scanning refers to the selection of only one active station for high-quality display, while the rest are handled by a reduced-quality algorithm to lower the bandwidth requirement. This is an acceptable arrangement because normally there is only one receptionist 18 to handle call service or inquiries. In case viewers 28 at all sixty-four stations (viewer sites S) request service at the same time, some of the viewers 28 have to wait until service is available. By means of eye contact between the active viewer 28 (i.e. the viewer 28 who is receiving service from the receptionist 18 ) and the receptionist 18 , a waiting viewer 28 will note that he does not have eye contact with the virtual receptionist. He will then realize that the receptionist 18 is serving another viewer 28 , and that it is reasonable for him/her to stay calm for a while until service becomes available.
- the active station usually has the highest-quality full-colour display, and it also serves as a cursor screen allowing the receptionist 18 to easily select from among the other sixty-three low-quality black-and-white screen displays showing the non-active stations.
- the maximum bandwidth requirement for one station alone is about 3 Mbps, and the remaining 3 Mbps can be shared among the remaining sixty-three stations, i.e. about 47.6 kbps per station, which is quite enough for handling a black-and-white image with pure outline contours of a human face.
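The bandwidth split under priority scanning is simple arithmetic and can be checked directly:

```python
# Bandwidth split under "priority scanning": the one active full-colour
# station takes about half of the 6 Mbps total, and the 63 passive
# stations share the remainder.
total_bps = 6_000_000
active_bps = total_bps / 2                 # ~3 Mbps for the active station
per_passive_bps = (total_bps - active_bps) / 63

print(round(per_passive_bps / 1e3, 1), "kbps per passive station")
```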
- delta object separation Another way to reduce bandwidth requirement is to use “delta object separation” technique.
- a static background was recorded and stored in the reference memory.
- this one-time job requires maximum bandwidth and takes the longest time to refresh the original background view for each station.
- a special algorithm activates the calculation of the bitmap changes against the background scene. The result serves to locate the head of the viewer against the capture window and to send differential data stream embedded with contents of delta object information to the control room R.
- the receptionist 18 in the control room R may select which one to be the active viewer.
- the receptionist 18 may first grasp an overview of all the stations (viewer sites S) by using the discrete mode of operation, identify if someone is approaching any station, select that particular station (viewer site S) as the active station by, e.g. rotating a control dial and pushing a button to confirm the selection.
- the receptionist 18 may then switch the array of VDU's 42 to the integrated mode of operation. In this mode of operation, the array of VDU's 42 will combine to act as a single big display screen with each VDU 42 displaying only a portion of the active viewer's image. In this mode of operation, eye-contact can be established between the active viewer and the receptionist 18 , in a manner to be discussed herebelow.
- In the arrangement of FIGS. 5 and 6 there will be a 7×7 array of video cameras 38 . These cameras 38 point at the receptionist 18 and are placed in line with the images captured. The positioning of the array of VDU's 42 and the location of the receptionist 18 should be well defined to obtain a more realistic visual effect. As shown in FIG. 9 , if the viewer 28 is at the position P 4 in the viewer site S, in which he looks at the virtual receptionist 30 offset sideways from the centre at an angle α, he should see exactly the same image of the receptionist 18 in the control room R as captured by the camera 38 d , which is also offset sideways from the centre of the receptionist 18 at an angle β.
- the receptionist 18 sits between a blue curtain 50 , acting as a backdrop, and the array of VDU's 42 and the array of video cameras 38 . If the receptionist 18 sits directly facing the middle column of video cameras 38 , the angles α and β in FIG. 10 will be the same.
- the distance D between the receptionist 18 and the array of VDU's 42 is given by the following formula:
- W is the width of a VDU 42 .
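The angle-matching rule of FIG. 9 amounts to choosing the camera whose angular offset from the receptionist's centre line best matches the viewer's viewing angle. A sketch, in which the concrete camera angles are illustrative assumptions (the patent's distance formula is not reproduced in this extract):

```python
def nearest_camera(viewer_angle, camera_angles):
    """Pick the camera in the control room whose angular offset best
    matches the viewer's viewing angle (the matching rule illustrated
    by FIG. 9).  Angles are in degrees from the centre line.
    """
    return min(camera_angles, key=lambda a: abs(a - viewer_angle))

# Seven camera columns spaced 15 degrees apart, centred on 0
# (an illustrative assumption about the geometry).
columns = [-45, -30, -15, 0, 15, 30, 45]
```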
- the “eye-contact point” is defined as the centre point between the two eyes of the viewer 28 .
- the image of the viewer 28 captured within the capture window will be the same as displayed on the VDU array, but with the size proportionally enlarged. It is approximately a one-to-one mapping of the captured viewer's image as displayed on the VDU array. If, during the capture of the image by the camera 22 at the viewer site S, the viewer 28 moves his head, such will be correspondingly displayed on the VDU array.
- the receptionist 18 sits still in front of her control panel and stays in the control room R without moving. If the viewer 28 moves his head around with his “eye-contact point” moving within the capture window, the camera 38 in the control room R which is closest to the VDU 42 displaying the viewer's eye-contact point will be connected to the active viewer's station, while its opposite “no-eye-contact” image pair will be displayed at all other non-active station(s). If the receptionist 18 remains seated with her head facing directly forward, the viewer 28 will see a sideways left or right, or up or down, view of the receptionist 18 .
- If the line of sight and head of the receptionist 18 move to follow the “eye-contact point” of the active viewer 28 as displayed on the VDU array, she will be performing eye contact with the active viewer 28 : no matter where the viewer 28 moves, he can see the front view of the receptionist 18 , because the receptionist 18 will then be facing and looking at the camera 38 in the VDU array which is closest to the VDU 42 displaying the “eye-contact point” of the viewer 28 , and it is the image captured by this particular camera 38 which is transmitted to the VDU 24 at the active viewer site S and perceived by the viewer 28 . If the line of sight of the receptionist 18 does not follow the eye-contact point of the viewer 28 , the viewer 28 will see only a side face of the receptionist 18 , depending on the direction in which the receptionist 18 moves her head.
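Selecting the camera 38 closest to the VDU 42 displaying the eye-contact point can be sketched as a coordinate mapping from the 8×8 VDU wall to the 7×7 camera grid at the VDU intersections. The wall coordinate convention and the clamping at the edges are assumptions:

```python
def camera_for_eye_contact(x, y, vdu_w, vdu_h):
    """Given the eye-contact point (x, y) in wall coordinates on the
    8x8 VDU array, return the (row, col) of the nearest camera in the
    7x7 array of cameras placed at the VDU intersection points.

    Intersections lie at multiples of one VDU width/height:
    (col + 1) * vdu_w horizontally for col = 0..6, and likewise rows.
    Coordinate conventions are illustrative assumptions.
    """
    col = min(6, max(0, round(x / vdu_w) - 1))
    row = min(6, max(0, round(y / vdu_h) - 1))
    return row, col
```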
- FIG. 11 shows the connection of the cameras 38 in the control room R with the viewer sites S.
- the control signal addresses any one of the video cameras in the 7×7 video camera array, depending on the result calculated by the head location detection algorithm described above.
- a blue curtain 50 helps remove the background scene to be displayed on the VDU 24 at the viewer site S. This is important for rebuilding the virtual object image in the air through the semi-transparent plate 26 .
- the background scene stored in the reference memory can be separated from the object view and then only the object details are transmitted for further processing. This technique helps in reducing the bandwidth requirement, because only the needed video data are transmitted.
- the bandwidth required for sending the video data of the receptionist 18 from the control room R to the viewer site S is the same as that for sending the video data of the viewer 28 to the array of VDU's 42 in the control room R.
- the present display system can be used for “e-Conferencing”.
- the working principle of “e-Conferencing” is very similar to that of “e-Receptionist”, except that the data communication channel is a bi-directional link instead of having separate channels for data transfer.
- the same “eye-contact” principle is applied here.
- two participants 100 , 102 of an e-Conference each located at a geographically remote area 104 , 106 respectively, sit before a respective camera 108 , 110 .
- each participant has a respective display unit 112 , 114 , similar to that shown in FIG. 3A and discussed above.
- the cameras 108 , 110 and display units 112 , 114 are connected with one another via a data communication channel 116 .
- a virtual image 102 a of the participant 102 will be displayed by the display unit 112 for perception by the participant 100 .
- a virtual image 100 a of the participant 100 will be displayed by the display unit 114 for perception by the participant 102 .
- a participant D situate at a location which is geographically remote from the other three participants A, B and C, has three display units 120 a , 120 b , 120 c installed in a table 122 .
- Each of the display units 120 a , 120 b , 120 c is associated with a video camera 124 a , 124 b , 124 c directed towards D.
- the display unit 120 a and the associated video camera 124 a are connected via a data communication channel with a corresponding set of display unit and video camera before A, and similarly for participants B and C.
- A′ an image of A, designated as A′, will be displayed by the display unit 120 a , and perceived by D; and similarly for the image B′ of B and the image C′ for C.
- D is facing the video camera 124 a which is connected with the display unit before the participant A.
- A will see the front view of D and can thus establish eye contact with D.
- As B and C can only see the right side of the face of D, as captured by the video cameras 124 b and 124 c respectively, they cannot establish eye contact with D. They will thus realize that D is not addressing either of them.
- FIG. 14 shows a more detailed schematic diagram of the arrangement of a four-party e-conference using a display system according to the present invention, in which parties A, B, C and D are each located at a respective location L A , L B , L C , L D , which are geographically remote from one another. As each party sits in his/her own respective location, and views the images of his/her counterparts, it is necessary to carefully organize and position the virtual parties in order to create an effective virtual environment.
- FIG. 14 A possible arrangement is shown in FIG. 14 .
- When D makes eye contact with the image A′ of A as displayed by the display unit 120 DA , the front face of D will be captured by the associated video camera 124 DA , transmitted via the data communication channel 128 , displayed by the display unit 120 AD , and perceived by A as image D′.
- the cameras 124 DB and 124 DC in L D will capture his right face.
- At L B , B, through a display unit 120 BD , will see an image D′ of the right face of D.
- For e-Conferences involving more members, e.g. eight members (M 1 , M 2 , . . . M 8 ), if it is intended to simulate the sitting plan shown in FIG. 15A , the rule of thumb is that in each location all the members are arranged in the same sequence around the table. Take room R 3 , in which M 3 is physically located: virtual M 4 is to M 3 's left, followed by virtual M 5 , and so on, until back to virtual M 1 and subsequently virtual M 2 , as shown in FIG. 15B .
- Virtual M 5 here is a display unit in R 3 which is connected with the video camera in room R 5 which is associated with a display unit in R 5 for display of the image of M 3 .
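The rule of thumb above, that every room keeps the members in the same sequence around the table, can be sketched as a simple rotation of the member list. A sketch only; the patent describes the rule informally:

```python
def virtual_seating(members, local):
    """Arrange the virtual participants in the room where `local` is
    physically present, keeping every member in the same sequence
    around the table.  Returns the members in order going around from
    `local`'s left.
    """
    i = members.index(local)
    # Start from the member to local's left and wrap around the table.
    return members[i + 1:] + members[:i]
```

For room R 3 this yields M 4, M 5, ..., M 8, M 1, M 2, matching the arrangement of FIG. 15B.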
- “Delta object separation” technique may also be employed in e-Conference to remove background scene of each individual meeting member, and thus to transmit data of the object image of the member only through the data communication channel.
- The basic data communication technique used for e-Conferences may be the same as that used in 3G mobile phone technology. Instead of the small screen of a mobile phone, a bigger, modified display system may be used to create the virtual scene and achieve the special visual effect.
- A further application of a display system and method according to the present invention is the e-Theatre, which is shown schematically in FIG. 16 .
- An artist 150 performs inside a control room 152 , in front of a VDU array 153 (as discussed above), with a blue curtain 154 behind him/her.
- VDU's 156 in the VDU array 153 monitor the progress of various scenes on the individual stages, and video cameras 158 positioned among the VDU's 156 capture the image of the artist 150 , each at a different angle.
- Technicians may be employed to operate various panels and buttons to transmit the image of the artist 150 to various concerts at different geographically remote locations.
- VDU's 156 in the VDU array 153 and video cameras 158 in the control room 152 are connected with display units 160 a , 160 b , 160 c and 160 d at the respective locations T A , T B , T C and T D via a data communication channel 162 .
- The image of the artist 150 as displayed by the respective display unit 160 a , 160 b , 160 c and 160 d is reflected by a respective inclined semi-transparent plate 164 a , 164 b , 164 c and 164 d to form a virtual image 166 a , 166 b , 166 c and 166 d as perceived by the respective audience 168 a , 168 b , 168 c and 168 d .
- An artist 170 physically present at location T A may even co-perform with the virtual image 166 a of the artist 150 for the audience 168 a at location T A .
- A further possible application of a display system according to the present invention is called “Stereo Television”, as shown in FIG. 17 .
- An image 172 of an artist geographically remote from an area 174 is displayed by a display unit 176 and reflected by an inclined semi-transparent plate 178 with a television set 180 as background.
- The image (virtual artist) 182 is perceived by the audience 184 to be closer to them than the television set 180 .
- The semi-transparent plate 26 is connected with an upper surface of the dark enclosure box 32 via a hinge 33 , and is thus movable to selectively open or close the box 32 .
- The video camera 22 is attached to a free end of the plate 26 , and is directed downwardly towards a viewer.
- The angle at which the plate 26 is inclined relative to the upper surface depends on how the VDU 24 is placed beneath the opening 34 in the box 32 . The angle should be such that the image projected by the VDU 24 in space forms a reasonable figure of the target image in the viewer's line of sight.
- The semi-transparent plate 26 , which acts like a display window, is attached to a desktop 35 by a slide-in roller hinge system.
- The plate 26 is slid into a slot 37 near the hinge 33 to close the opening 34 , whereby a flat desktop surface is formed for other use.
- The positioning of the VDU 24 depends on the particular application of the display system.
- For the e-Receptionist application discussed above, a normal receptionist desk may be modified by providing a recess with a dark enclosure box within which a VDU is placed. When not in use, the opening of the dark enclosure box may be closed, and a real receptionist may sit across the receptionist desk to serve customers.
- For e-Conferences, the construction is similar, except that the number of recesses (and thus the number of VDU's) in the conference table will depend on the number of parties intended to be served by the system.
- The VDU should be designed to be movable up and down to adjust the viewing depth, and horizontally to adjust the location of the projected image, i.e. the virtual artist.
- Each viewer site 200 , e.g. in an “e-Receptionist” application of a display system according to the present invention, includes a personal computer (PC) 202 with a display screen 204 , to which a digital video camera 206 is attached.
- The PC 202 is connected with a data communication network 208 (e.g. the Internet or an intranet) via a low-speed Control+Voice trunk 210 , a high-speed video-out trunk 212 and a high-speed video-in trunk 214 in a Local Area Network (LAN) environment.
- The video-in and video-out of the array of viewer sites 218 are connected to a video network management unit 220 via a cross bar switch 222 .
- The video-in and video-out of the array of VDU's and cameras 224 in the control room are also connected to the video network management unit 220 via a cross bar switch 226 .
- The cross bar switches 222 , 226 are connected with a system control unit 228 , which is connected with a control panel interface 230 and is provided with a specially designed control protocol for overall inter-operating system control.
- FIGS. 22A to 22F show various views of an alternative display unit 300 according to the present invention.
- A monitor 302 is supported at an incline in a recess of a table 304 .
- The image displayed on the monitor 302 is projected onto a semi-transparent plate 306 , to form a virtual image 308 to be perceived by an onlooker/viewer.
- The plate 306 is movable between an in-use position, in which it is pivoted upwardly to an inclined position relative to the surface 310 of the table 304 , and a not-in-use position, in which it lies flush with the surface 310 of the table 304 to form a generally continuous and flush table top surface.
- At one longitudinal end of the plate 306 is mounted a digital video camera 312 for capturing images of the viewer, for transmission to another VDU forming part of the display system.
- At a second longitudinal end of the plate 306 is mounted a hemispherical support 314 , which is slidably and swivellably movable relative to a row of parallel roller bars 316 .
- As shown more clearly in FIGS. 23 and 24 , the hemispherical support 314 , acting as a movable revolutionary joint, is engaged with the plate 306 via a mounting frame 318 .
- To open the plate 306 , the support 314 is first lifted above the three topmost roller bars 316 ( 2 ), and is then allowed to move down the row of roller bars 316 ( 3 , 4 , 5 ), thus causing the plate 306 to pivot upwardly, until it reaches the lowest point of its path of movement ( 6 ). Conversely, by moving the support 314 up the row until it rests on the three topmost roller bars 316 , the plate 306 is returned to lie generally flush with the surface 310 of the table 304 .
- The plate 306 is made up of a frame 320 , which is moulded from a clear plastic material to maximize optical transparency, or at least to minimize visual obstruction.
- A recess 322 is provided in the frame 320 for receiving a semi-transparent film or plate 324 .
- Data from the digital video camera 312 are transmitted via a clear plastic flat cable 326 , which runs along a side of the frame 320 to the bottom part.
- FIGS. 27A and 27B are, respectively, a top view and a sectional view of the mounting chassis parts of a further alternative display unit of a display system according to the present invention.
- A metal chassis 350 is mounted beneath a table 352 , and is configured to hold a VDU 354 with its screen 356 inclined at around 20° to the horizontal.
- The part of the chassis 350 facing an opening 358 is dark in colour to reduce light leakage.
- The chassis 350 is provided with a rectangular hole 360 for accommodating a digital video camera.
- FIG. 28 is a top perspective view showing use of the display system 300 shown in FIGS. 22A to 22F , and FIG. 29 is a side view thereof.
- The display unit 300 is installed in the table 304 .
- The image displayed by the VDU 302 is reflected by the slanted semi-transparent plate 306 and perceived by a viewer 362 as a virtual image 364 .
- FIG. 30 shows how the angle of inclination of the plate 306 and that of the VDU 302 are calculated.
- H is the height of the eye level of the viewer above the surface of the table 304 ;
- P is the horizontal distance between the eye of the viewer and the top edge of the VDU 302 ;
- θ is the angle of inclination of the screen of the VDU 302 with respect to the surface of the table 304 ;
- α is the angle of inclination of the plate 306 with respect to the surface of the table 304 .
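FIG. 30 itself is not reproduced here, but writing θ for the inclination of the VDU screen and α for that of the plate, the relation between the two angles can be sketched under one plausible assumption: the screen's reflection in the plate should appear vertical in the viewer's line of sight. A plane inclined at θ reflected in a mirror inclined at α appears inclined at 2α − θ, so an upright virtual image needs 2α − θ = 90°. The function below is an illustrative model, not a formula stated in the description.

```python
# Sketch of the angle relation, assuming the virtual image of the screen
# should appear vertical: 2*alpha - theta = 90 degrees.

def plate_angle(theta_deg):
    """Plate inclination alpha (deg) for a screen inclined at theta (deg),
    under the upright-virtual-image assumption above."""
    return (90.0 + theta_deg) / 2.0

print(plate_angle(0))    # a horizontal screen pairs with a 45 degree plate
print(plate_angle(20))   # a screen at ~20 degrees (cf. FIG. 27B) pairs with 55
```

Note the two sample points are consistent with the arrangements described earlier: a 45° plate over a flat-lying VDU, and a screen held at around 20° by the chassis of FIG. 27B.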
- FIG. 31 shows the viewer 362 whose image is captured by a digital video camera 370 .
- The image of the viewer 362 as captured in the capture window of the video camera 370 is as shown in frame P 1 in FIG. 32 ; when the viewer 362 moves to her left side, her image in the capture window of the video camera 370 is as shown in frame P 2 in FIG. 32 ; and when the viewer 362 moves to her right side, her image in the capture window of the video camera 370 is as shown in frame P 3 in FIG. 32 .
- FIG. 33 shows how the head position of the viewer is determined.
- A simplified face recognition algorithm is used for determining the viewer's head position, and the centre point between the viewer's two eyes is calculated with respect to the grid position within the capture window.
- The capture window of the digital video camera 370 thus acts as a reference background against which the head position, or a reference point (e.g. the centre point between the eyes), of the viewer 362 is to be determined and identified.
- As the viewer moves, a set of data will be recognized as running from G 6 , F 6 , E 6 , D 6 , C 6 and B 6 to A 6 .
- The same principle applies to vertical movement of the viewer's head.
- Such information will be transmitted to the control room as real-time data to control the choice of camera to be connected with the display screen at the viewer's site.
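The grid recognition described above can be sketched as quantizing the eye-midpoint pixel coordinate into the labelled columns and rows of the capture window; the grid size and frame resolution below are illustrative assumptions.

```python
# Sketch of mapping the centre point between the viewer's eyes onto the
# grid of the capture window (columns A..G, rows 1..6 here; the actual
# grid dimensions and camera resolution are illustrative assumptions).

COLS = "ABCDEFG"
ROWS = 6
FRAME_W, FRAME_H = 640, 480

def grid_cell(x, y):
    """Quantize a pixel coordinate (x, y) into a cell label like 'D6'."""
    col = COLS[min(int(x * len(COLS) / FRAME_W), len(COLS) - 1)]
    row = min(int(y * ROWS / FRAME_H), ROWS - 1) + 1
    return f"{col}{row}"

# A viewer centred horizontally in the frame falls in column D:
print(grid_cell(320, 460))
```

As the head sweeps across the frame, successive calls yield the run of labels from G6 down to A6 described above.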
- FIGS. 34 and 35 show a receptionist 372 sitting in a control room in front of an array of VDU's.
- FIG. 36 shows schematically the connection topology of the digital cameras, in which C 1 +C 2 is the capture window field of view of the video camera in the viewer site.
- When the viewer is recognized as being at position D 6 , the digital camera at the corresponding position (D, 6) in the array of VDU's in front of the receptionist 372 will be connected with the VDU at the viewer site;
- when the viewer is recognized as being at position F 6 , the digital camera at the corresponding position (F, 6) in the array of VDU's in front of the receptionist 372 will be connected with the VDU at the viewer site; and
- when the viewer is recognized as being at position B 6 , the digital camera at the corresponding position (B, 6) in the array of VDU's in front of the receptionist 372 will be connected with the VDU at the viewer site.
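The selection rule above reduces to a lookup from the recognized grid label to the camera at the matching position in the receptionist's VDU array; a minimal sketch, with the array and the crossbar switching abstracted as a dictionary and the camera names illustrative:

```python
# Sketch of the camera-selection rule: the grid label recognized at the
# viewer site (e.g. 'D6') picks the camera at the matching (column, row)
# position in the VDU array in front of the receptionist.

CAMERAS = {(col, row): f"camera_{col}{row}"
           for col in "ABCDEFG" for row in range(1, 9)}

def select_camera(cell):
    """Map a grid label like 'D6' to the camera at that array position."""
    return CAMERAS[(cell[0], int(cell[1:]))]

# As the viewer's head moves, the connected camera follows:
for cell in ("D6", "F6", "B6"):
    print(cell, "->", select_camera(cell))
```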
- FIG. 37 shows both the viewer site V and the control room C.
- The viewer 362 sits in front of a table 304 installed with a display unit according to the present invention, with a VDU 302 displaying the image of the receptionist 372 physically located in the control room C, which is geographically remote from the viewer site V.
- The image of the receptionist 372 is captured by a number of digital video cameras distributed among an array of VDU's 380 in front of the receptionist 372 , and the image captured by one of these digital video cameras will be transmitted via a data communication channel for display by the VDU 302 at the viewer site V.
- The image of the receptionist 372 as displayed by the VDU 302 is reflected by the semi-transparent plate 306 and perceived by the viewer 362 as a virtual image, i.e. a virtual receptionist 364 .
- The image of the viewer 362 is captured by the digital video camera 370 installed at an upper end of the plate 306 .
- The image of the viewer 362 as captured by the digital video camera 370 is displayed on the array of VDU's 380 in front of the receptionist 372 .
- When the viewer 362 is at the position P 1 in the viewer site V, he/she will be recognized as being at the D 6 position, and his/her image will be displayed at position P 1 ′ in the array of VDU's 380 in the control room C.
- Data representing “D 6 ” will be transmitted via the data communication channel to the control system in the control room, thus activating the video camera at position (D, 6) in the array of VDU's 380 .
- This particular video camera will then be connected with the VDU 302 at the viewer site V, and it is the image of the receptionist 372 as captured by this video camera which will be transmitted via the data communication channel for display by the VDU 302 at the viewer site V, as mentioned above.
- FIG. 38 shows the same viewer site V and control room C as in FIG. 37 , but for the purpose of illustrating the determination of viewing angles, where W 1 is the horizontal distance from the digital video camera in the VDU array 380 to the last digital video camera at one end, W 2 is the horizontal distance from the digital video camera in the VDU array 380 to the last digital video camera at the other end, and D is the average distance from the array of video cameras to the receptionist 372 .
- As shown in FIG. 39 , by way of the aforesaid arrangement, when the viewer 362 moves his/her head to the left to position P 2 by an angle of β with respect to the virtual image 364 of the receptionist 372 , his/her image will be as shown at P 2 ′ in FIG. 40 ; and when the viewer 362 moves his/her head to the right to position P 3 by an angle of β with respect to the virtual image 364 of the receptionist 372 , his/her image will be as shown at P 3 ′ in FIG. 40 .
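Writing β for the angle through which the viewer's head moves relative to the virtual image, one plausible model of the camera choice is to offset the selected camera from the centre of the array by about D·tan β, clamped to the span of the array given by W 1 and W 2 ; this is an assumed reading of the geometry of FIG. 38, not a formula stated in the description.

```python
import math

# Illustrative model: to reproduce a head turn of beta degrees at the
# viewer site, select the camera offset sideways from the centre of the
# array by about D * tan(beta), clamped to the array span [-W2, +W1].

def camera_offset(beta_deg, D, W1, W2):
    off = D * math.tan(math.radians(beta_deg))
    return max(-W2, min(W1, off))

# Receptionist ~1.0 m from the array; a 15 degree head turn maps to a
# camera roughly 0.27 m to one side:
print(round(camera_offset(15, 1.0, 0.5, 0.5), 2))
```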
- FIGS. 41 and 42 show a further top view of the e-Receptionist application and a simplified side view thereof.
- A simplified architecture of a 3G mobile phone is shown in FIG. 43 , comprising a digital camera 402 connected with a video buffer (camera) 404 , a display screen 406 connected with a video buffer (screen) 408 , and an antenna 410 for receiving and transmitting signals for communication with other mobile phones via the communication network of a service provider.
- Signals captured by the digital camera 402 are stored in the video buffer (camera) 404 , which corresponds to a segment of memory mapped onto the system data memory.
- Data stored in the screen video buffer 408 are mapped onto the screen 406 for display of the content.
- FIG. 44 shows a video wall module (VWM) forming part of a VDU array in a control room setting.
- A digital video camera 412 is connected with a video buffer (camera) dual port access memory 414 , which is in turn connected with an internal central processing unit (CPU) 416 .
- A display screen 418 is connected with a video buffer (screen) dual port access memory 420 , which is also connected with the CPU 416 .
- A video memory management unit (VMMU) 421 is connected with the CPU 416 via a video memory control bus 422 , with the video buffer (camera) dual port access memory 414 via a camera data bus 424 , and with the video buffer (screen) dual port access memory 420 via a screen data bus 426 .
- FIG. 45 shows the use of a 3G mobile phone 428 as part of a remote site module (RSM) of a display system according to the present invention.
- An on-site video camera 430 is connected with a video buffer (camera) 432 .
- A VDU 434 housed in a display unit as previously discussed is connected with a video buffer (screen) 436 of the mobile phone 428 .
- A face recognition module 440 is also provided in the operating system.
- The VMMU 421 controls the video data memory flow from the RSM's (A, 1; . . . H, 8) to each corresponding VWM (A, 1; . . . H, 8) in the control room.
- The VMMU 421 sends signals to each RSM to request a page of the video image captured by its respective on-site video camera 430 .
- Each RSM then determines whether there is any viewer in front of its camera 430 . If so, the RSM will reply to the VMMU 421 by sending the viewer head position data and the captured image of the viewer; if not, only signals representing the background image will be transmitted to the VMMU 421 as a reply to the request.
- The VMMU 421 will then direct each such page from the RSM's to the respective corresponding VWM in the array of VDU's in the control room.
- The video camera associated with the respective VWM will also be activated to capture the face of the receptionist in the control room for transmission back to the RSM display screen.
- The master receptionist in the control room looks at the array of VDU's and starts searching each VWM display with its corresponding contents from the respective on-site video camera. By manipulating a cursor pad 442 on a control panel 444 , the master receptionist can select any one of the VWM's in the VDU array to be the active VWM.
- The active RSM will continuously send a stream of viewer head position data to the VMMU 421 for determining which video camera among the array of VDU's is to be activated for sending images of the receptionist captured by it to the display of the active RSM.
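The request/reply cycle between the VMMU and the RSM's can be sketched as follows; the message format and the helper names are illustrative assumptions, not part of the description.

```python
# Sketch of the VMMU polling cycle: the VMMU requests a page from each
# remote site module (RSM); an RSM with a viewer replies with head
# position data plus the captured image, otherwise with background only.

def rsm_reply(frame, viewer_cell):
    """viewer_cell is a grid label like 'D6' when a viewer is detected
    in front of the on-site camera, else None (background only)."""
    return {"head": viewer_cell, "image": frame}

def vmmu_poll(rsms):
    """Collect one page from every RSM, keyed by its array position,
    for forwarding to the corresponding VWM in the control room."""
    return {pos: rsm_reply(frame, cell) for pos, (frame, cell) in rsms.items()}

rsms = {("A", 1): ("frame_a1", "D6"),      # a viewer stands at D6
        ("H", 8): ("frame_h8", None)}      # no viewer at this site
pages = vmmu_poll(rsms)
print(pages[("A", 1)]["head"])             # 'D6' -> activate camera (D, 6)
```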
Abstract
A display system is disclosed as including first and second visual display units (VDU's), each for displaying visual images for viewing; a first digital video camera for capturing images of a first individual viewing images displayed by the first VDU; at least second and third digital video cameras for capturing, each from a different angle, images of a second individual viewing the second VDU; in which the first digital video camera is connectable with the second VDU for transmitting the captured images to the second VDU for display; and the first VDU is connectable with either of the second and third digital video cameras for display of images captured by either of the second and third digital video cameras; means for identifying the position of the centre point between the eyes of the captured images of the first individual against a capture window of the first digital video camera; and means for selectively connecting the first VDU with the second digital video camera or the third digital video camera in accordance with the identified position of the centre point between the eyes of the first individual. A visual display apparatus is also disclosed as including a visual display unit supported by a table, the table including a closable opening; a reflector movable relative to the table between a first position in which the reflector closes the opening and a second position in which the opening is open and images displayed by the visual display unit are reflectable by the reflector for viewing; and an end of the reflector is slidably and swivellably movable relative to the table for movement between the first and second positions.
Description
- This invention relates to a display system and method, and in particular, such a system and method allowing persons at remote locations to see each other in a virtual environment.
- Face-to-face communication is the most direct and common way of communication among people. Facial expressions like eye contact, smiling, anger, and emotional gestures are all conveyed effectively through this process. However, such a process requires that the persons engaged in such communications be physically located at the same place.
- With today's computer and telecommunication technologies (e.g. “NetMeeting” and “Videophone”), virtual face-to-face communication can be performed through a telecommunication channel between persons at two geographically remote locations. In such conventional systems, a video camera and a visual display unit (VDU) (e.g. a computer monitor or a television set) are placed before each of the persons engaged in the virtual communication. The video image captured by a video camera of a first party to the communication is transmitted through the telecommunication channel, for display on the VDU before a second party to the communication, and vice versa. However, in such conventional systems, a party to the communication sees his/her partner through the screen of the VDU, making the exchange an impersonal, face-to-machine interaction.
- It is thus an object of the present invention to provide a display system in which the aforesaid shortcomings are mitigated, or at least to provide a useful alternative to the public.
- According to a first aspect of the present invention, there is provided a display system including at least first and second visual display means (VDM's), each being adapted to display at least one visual image for viewing; at least a first image capturing device adapted to capture at least one image of a first individual viewing said first VDM; at least second and third image capturing devices each adapted to capture, each from a different angle, at least one image of a second individual viewing said second VDM; wherein said first image capturing device is connectable with said second VDM for transmitting said captured image to said second VDM for display; wherein said first VDM is connectable with either of said second and third image capturing devices for display of said image captured by either of said second and third image capturing devices; means for identifying the position of a reference point of the captured image of said first individual against a pre-determined reference background; and means for selectively connecting said first VDM with said second image capturing device or said third image capturing device in accordance with the position of said reference point of said first individual as identified by said identifying means.
- According to a second aspect of the present invention, there is provided a display method, including the steps of (a) capturing at least one image of a first individual; (b) displaying the captured image of said first individual to a second individual; (c) capturing images of said second individual from at least a first angle and a second angle which are different from each other; (d) identifying the position of a reference point of the captured image of said first individual against a pre-determined reference background; and (e) selectively displaying the image captured from said first angle or from said second angle, in accordance with the identified position of said reference point of said first individual.
- According to a third aspect of the present invention, there is provided a visual display apparatus including a visual display unit engaged with a support, said support including a closable opening; a reflector movable relative to said support between a first position in which said reflector substantially closes said opening and a second position in which said opening is open and images displayed by said visual display unit are reflectable by said reflector for viewing; wherein an end of said reflector is slidably and swivellably movable relative to said support for movement between said first and second positions.
- Preferred embodiments of the present invention will now be described, by way of examples only, with reference to the accompanying drawings, in which:
- FIG. 1A shows a video camera, being part of a display system according to the present invention, monitoring a user of the system;
- FIG. 1B is an image of the user captured by the video camera in FIG. 1A , as displayed on a visual display unit (VDU);
- FIG. 2 shows the different views of a receptionist as perceived by a viewer at different positions and angles;
- FIG. 3A is a schematic diagram of a basic arrangement in the display system according to the present invention;
- FIG. 3B is a top view of the image displayed in the VDU in FIG. 3A ;
- FIG. 4 is a schematic diagram of the display system according to the present invention;
- FIG. 5 shows a VDU array in front of the receptionist in a control room, in a discrete mode of operation;
- FIG. 6 shows the VDU array in FIG. 5 in an integrated mode of operation;
- FIG. 7 shows an array of VDU's, each displaying the image of a different viewer site;
- FIG. 8 shows the connection of video cameras at different viewer sites with the display system according to the present invention;
- FIG. 9 shows the relationship between the positioning of the viewer in a viewer site and the positioning of video cameras in a control room;
- FIG. 10 shows a top view of a receptionist in front of the array of VDU's shown in FIG. 6 ;
- FIG. 11 shows connection of the cameras in the control room with the viewer sites;
- FIG. 12 is a schematic diagram of an “e-Conferencing” system according to the present invention;
- FIG. 13 shows part of an “e-Conferencing” system for four participants;
- FIG. 14 is a more detailed schematic diagram of the arrangement of a four-party e-Conference using a display system according to the present invention;
- FIG. 15A shows the sitting plan of an exemplary conference;
- FIG. 15B shows the arrangement of the virtual participants in one room in an e-Conference simulating the sitting plan of FIG. 15A ;
- FIG. 15C shows the arrangement of the virtual participants in another room in an e-Conference simulating the sitting plan of FIG. 15A ;
- FIG. 16 is a schematic diagram of the application of a display system according to the present invention as an “e-Theatre”;
- FIG. 17 is a schematic diagram of the application of a display system according to the present invention as a “Stereo Television”;
- FIG. 18 is a side view of a display unit in an in-use configuration;
- FIG. 19 is a side view of the display unit shown in FIG. 18 in a closed configuration;
- FIG. 20 is a schematic diagram of the basic hardware design of a viewer site of a display system according to the present invention;
- FIG. 21 is a schematic diagram of the connection of viewer sites and the control room installed in an “e-Receptionist” application of a display system according to the present invention;
- FIG. 22A is a partial side view of a display unit according to the present invention in an in-use configuration;
- FIG. 22B is a sectional view of the display unit taken along the line B-B in FIG. 22A ;
- FIG. 22C is a sectional view of the display unit taken along the line C-C in FIG. 22B , with the display unit in a not-in-use configuration;
- FIG. 22D is a top view of the display unit shown in FIG. 22C ;
- FIG. 22E is a sectional view taken along the line E-E in FIG. 22D ;
- FIG. 22F is a sectional view taken along the line F-F in FIG. 22E ;
- FIG. 23 is an enlarged view of the movable revolutionary joint shown in FIG. 22C , with the plate in a closed position;
- FIG. 24 is a view corresponding to FIG. 23 , with the plate in an open position;
- FIG. 25 shows movement of the plate from a closed position to a fully open position;
- FIG. 26 shows the construction of the semi-transparent plate of a display unit according to the present invention;
- FIG. 27A is a top view of the mounting chassis parts of a further alternative display unit of a display system according to the present invention;
- FIG. 27B is a sectional view taken along line G-G of FIG. 27A ;
- FIG. 28 is a top perspective view showing use of the display system shown in FIGS. 22A to 22F ;
- FIG. 29 is a side view of the display system shown in FIG. 28 ;
- FIG. 30 shows how the angle of inclination of the plate and that of the VDU are calculated;
- FIG. 31 shows the positioning of a viewer in front of a digital video camera forming part of the display system according to the present invention;
- FIG. 32 shows various images of a viewer in the capture window of the digital video camera in FIG. 31 ;
- FIG. 33 shows how the head position of the viewer is determined;
- FIG. 34 is a top view of a control room with an array of VDU's;
- FIG. 35 is a side view of the control room shown in FIG. 34 ;
- FIG. 36 shows schematically the connection topology of the digital cameras;
- FIG. 37 shows the viewer site and the control room;
- FIG. 38 shows the same viewer site and control room as in FIG. 37 , but for illustrating the determination of viewing angles;
- FIG. 39 is a top view of an e-Receptionist application of a display unit according to the present invention;
- FIG. 40 shows various views of the viewer when captured in the capture window of the digital video camera in FIG. 39 ;
- FIG. 41 is a top view of an e-Receptionist application of a display unit according to the present invention;
- FIG. 42 is a simplified side view of FIG. 41 ;
- FIG. 43 shows a simplified architecture of a 3G mobile phone;
- FIG. 44 shows a video wall module of a display system according to the present invention, adopting a modified 3G mobile phone architecture;
- FIG. 45 shows the use of a 3G mobile phone as part of a display unit of a viewer site;
- FIG. 46 shows various remote site modules distributed at geographically remote locations; and
- FIG. 47 shows an array of video wall modules (VWM's) forming an array of VDU's and digital video cameras.
- In a long-distance object-viewing situation, the viewing distance usually dominates the object's depth, so that the object appears flat. However, in a short-distance object-viewing situation, the object's depth dominates, so one can perceive the object's solidity significantly through the parallax effect.
- For example, as shown in FIGS. 1A and 1B , a video camera 10 , being part of a display system according to the present invention, captures the head position of a person 12 using such a system. The captured image is displayed on a visual display unit 14 , e.g. a screen of a 4:3 screen aspect ratio.
- As shown in FIG. 2 , when a viewer facing a female receptionist 18 moves his/her own head among the three positions 16 a , 16 b , 16 c , he/she should see different views of the face of the receptionist 18 . In particular, when at the position 16 a , the viewer should see more of the right side face of the receptionist 18 , as shown in 18 a ; when at the position 16 b , the front view of the receptionist, as in 18 b ; and when at the position 16 c , more of the left face of the receptionist 18 , as shown in 18 c .
- FIG. 3A is a schematic diagram of a basic arrangement in a display system according to the present invention. Take an “e-Receptionist” situation, in which the image 20 of a female receptionist is transmitted for display to a viewer: the image 20 of the receptionist is displayed by a VDU 24 , e.g. a monitor or a television set, forming part of a display unit 25 . The image 20 displayed by the VDU 24 is reflected by a semi-transparent plate or mirror 26 positioned before a male viewer 28 , and inclined at an angle, e.g. 45°, to the horizontal. By way of such an arrangement, the viewer 28 will perceive an image 30 of the receptionist (called the “virtual receptionist”), as reflected by the plate 26 , floating in the air, aligned with his own line of sight, and merged with the environment.
- The VDU 24 is positioned within a dark enclosure 32 , hidden from the line of sight of the viewer 28 , and with an opening 34 allowing light from the VDU 24 to pass through. A video camera 22 is positioned above the viewer 28 for determining the head position of the viewer 28 , to be discussed below.
- There are several ways to determine the head position of the viewer 28 . One is to use a hardware ultrasonic distance-measurement technique. However, problems will arise if more than one viewer is located within the capture window at the same time.
- Another method is to use a video camera 22 to capture the image of the viewer 28 . Before the viewer 28 approaches the system, a static background scene is recorded and stored in the reference memory of the camera 22 of the system. Objects moving in front of the camera 22 will be detected, and the resultant moving images will be compared with the static background image; by simple calculations, the position of the moving object, e.g. the head of the viewer, can then be determined.
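The "simple calculations" mentioned above can be sketched as a per-pixel difference against the stored static background, taking the centroid of the changed pixels as the moving object's position; the frame sizes and the threshold below are illustrative assumptions.

```python
# Sketch of background-difference detection: subtract the stored static
# background from the current frame and take the centroid of the pixels
# that changed as the position of the moving object (e.g. the head).
# Frames are small grayscale grids here; the threshold is illustrative.

def moving_object_position(frame, background, threshold=30):
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if abs(value - background[y][x]) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None                     # nothing moved
    return (sum(xs) / len(xs), sum(ys) / len(ys))

background = [[10] * 4 for _ in range(4)]
frame = [row[:] for row in background]
frame[1][2] = 200                       # a bright moving object appears
print(moving_object_position(frame, background))   # (2.0, 1.0)
```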
- The image data captured by the digital cameras are in the form of a pixel matrix, in which each pixel combines three basic colour values of red, green and blue. There are two types of images to be sent from each station to the central control room, namely active full colour and passive black-and-white. Normally, only one station is selected by the master receptionist at the central control room as the active full colour screen, which consumes almost half of the total 6 Mbps bandwidth, while the remaining stations will be in the mode of passive black-and-white images and share the remaining bandwidth. The ratio of bandwidth sharing determines the quality of display required. If a higher colour resolution is required, the black-and-white quality will be reduced. The basic formula to convert red, green and blue colour values to a gray value is:
-
Gray value = 0.3 red + 0.59 green + 0.11 blue - There are two pixel buffers to store the status of gray screen images. The static pixel buffer is set to remember the gray value with the highest probability of occurrence within a certain period of time (e.g. x seconds) and stores that value in the static buffer memory. The dynamic pixel buffer stores in memory each pixel for which the occurrence of gray values is random within x seconds. The time duration x determines the refresh rate of the system, which also depends on the available bandwidth for data transmission. The higher the bandwidth, the higher the refresh rate can be.
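The conversion formula above translates directly into code; a small sketch (the function name is illustrative only):

```python
def to_gray(red, green, blue):
    """Convert 8-bit RGB components to a gray value using the weights
    given above: 0.3 R + 0.59 G + 0.11 B."""
    return 0.3 * red + 0.59 * green + 0.11 * blue

print(round(to_gray(255, 255, 255), 2))  # 255.0 (white stays white)
print(round(to_gray(0, 255, 0), 2))      # 150.45 (green dominates brightness)
```

The weights sum to 1.0, so the gray value stays within the 0 to 255 range of the inputs.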
- Once the preliminary static pixel buffer is established, subsequently captured images will be compared with it pixel by pixel. If a pixel is found to be the same as that in the corresponding static buffer, the probability of occurrence of that gray value is increased, and the probability value is constantly updated until a constant static background image is found. The longer a particular pixel keeps the same gray value, the higher its stability. Any intermittent changes in that pixel value are regarded only as noise or as a dynamic pixel. The whole picture image will then be sent to the master control room at a very low refresh rate to minimize the bandwidth requirement.
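The stability bookkeeping for a static-buffer cell might look like the following sketch. The data structure and the exact update rule are illustrative assumptions, not taken from the specification:

```python
# Each static-buffer cell remembers a candidate gray value and a count of
# how long that value has persisted; a long persistence means a stable
# background pixel, a short one means noise or a dynamic pixel.

def update_static_cell(cell, observed, tolerance=5):
    """cell is a dict {'gray': value, 'count': n}.  A matching observation
    raises the count (higher stability); a mismatch is treated as noise or
    a dynamic pixel and does not overwrite the background immediately."""
    if abs(observed - cell['gray']) <= tolerance:
        cell['count'] += 1                          # same gray value again
    else:
        cell['count'] = max(cell['count'] - 1, 0)   # intermittent change
    return cell

cell = {'gray': 120, 'count': 0}
for g in (121, 119, 120, 200, 120):   # one noisy reading of 200
    update_static_cell(cell, g)
print(cell)  # {'gray': 120, 'count': 3}
```

The single outlier reading (200) lowers the count but does not displace the stored background value, matching the noise treatment described above.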
- If a pixel is found to be different from the corresponding static buffer, its value will be stored in the dynamic buffer. With the information on area (obtained by counting the number of dynamic pixels) and shape (by pattern matching), one can predict whether the object is a human face or not. Although the prediction may not be very accurate, it is sufficient for the purpose of locating the head position. As the size of these dynamic pixel images is much smaller than the whole screen picture, less bandwidth is needed for transmission and the refresh rate can be higher.
- Once the size and shape of the moving object match the predefined threshold values, it is assumed to be a human face looking at the camera. The static background image, with a very low refresh rate, and the dynamic moving object image, with a higher refresh rate, are sent separately to the central control room and combined in one of the big screen matrices. If the available bandwidth is not enough for this purpose, the visual quality (resolution) of the moving object image may be reduced. This will have no significant effect so long as the image can still be recognized by the master receptionist at the central control room as a moving human face.
- Another way to further reduce the bandwidth requirement is to convert the moving object image into a very simple object shape, by eliminating all image details and keeping only the contour information. This is acceptable because this image is treated as a preview image for the purpose of selection only. An image of a higher resolution may be displayed once this particular station is chosen/designated as the active station.
- With the contour of a human face, one can predict the head position with respect to the background scene, and such information will be sent to the central control room for the purpose of selecting the camera for the active station. Although such a prediction depends only on the shape and contour of the human face, it still provides sufficient information for locating the position of the moving object without using complicated face recognition algorithms.
- The
camera 22 also serves as a viewing device for the viewer 28, connecting his/her image through a data link to a control room of the system, and presenting the image of the viewer 28 to the front video wall of a master receptionist (to be discussed below) for further manipulation. - In order to allow the
viewer 28 to perceive a more realistic image of the receptionist, the size, location and face details of the virtual receptionist are preferably correlated in the same way as presented in a real situation, although this is not strictly necessary. - The size of an adult human face is roughly the same for everybody. It is thus possible to adjust the size of the display monitor screen so that the size of the reflected image in the air is close to the size of a human face, with the reflected image details, colour and contrast closely resembling those of a real receptionist. The location of the reflected image can be adjusted by carefully positioning the level of the display monitor screen relative to the reflective plate. For example, if the distance between the topmost part of the reflected image and the top of the display monitor is equal to or close to the horizontal distance between the top of the image and the reflective plate, a realistic image close to a real life situation will be provided.
- As shown in
FIG. 4 , the receptionist 18 is situated at a control room R, which is remote from a site S where the viewer 28 is located. In front of the receptionist 18 are a number of video cameras, each directed towards the receptionist 18 from a different angle. These video cameras are connectable with the screen 24 via a video multiplexer 40. The camera 22 is situated at the site S for determining the position of the viewer 28. - In the situation as shown in
FIG. 4 , if the viewer 28 is recognized by the camera 22 to be at position P1, the switch SW of the video multiplexer 40 will connect the VDU 24 with the video camera 38 a, thus allowing the image 30 of the receptionist 18 as captured by the video camera 38 a to be displayed by the VDU 24. Similarly, if the viewer 28 has moved to position P3, signals detected by the video camera 22 will cause the switch SW of the video multiplexer 40 to connect the VDU 24 with the video camera 38 c, whereupon the image 30 of the receptionist as displayed by the VDU 24, and thus as perceived by the viewer 28, will show more of the left face of the receptionist 18. - An array of VDU's 42 (e.g. monitors or television sets) are arranged in a wall-like manner in front of the
receptionist 18 in the control room R. For example, and as shown in FIGS. 5 and 6 , the VDU's 42 may be arranged as an 8×8 array, with video cameras 38 placed at each intersection point. The resolution of the three-dimensional effect will be determined by the size of the array. - The image of the
viewer 28 as captured by the video camera 22 will be displayed on this VDU array in either a discrete manner or in an integrated manner. As the “virtual receptionist” may be distributed for display at a number of different viewer sites S, each of the VDU's 42 may show the image of a different viewer site S, as in FIG. 5 . For an 8×8 array of VDU's 42, a maximum of sixty-four viewers can be displayed on the array at one time. If there are more than sixty-four viewer sites S, the receptionist 18 may switch a page at a time by operating her own control panel. In the integrated mode, all the VDU's 42 forming the array may collectively show the image of one viewer site S only, as in FIG. 6 . - For the discrete mode of operation, each page of the viewer sites S represents the images captured by a total of sixty-four
video cameras 22, each located at a different viewer site S. These sixty-four viewer sites S also form an 8×8 array of cameras 22, as shown in FIG. 7 . - The 8×8
cameras 22 thus also form an array, as shown in FIG. 8 . Video data from the cameras 22 pass through a video cross bar switch 48 to the control room R for further manipulation. - In the discrete mode of operation, to display all the images of the
viewers 28, each camera 22 has to be assigned a unique identifying address, e.g. A1 to H8. It is clear that a high speed scanning mechanism is necessary in order to have a reasonable refresh rate for each VDU 42 in real time operation. - In practice, a screen update rate down to 10 Hz is acceptable, which means 100 ms per frame for each individual station. However, having sixty-four stations share one data link channel requires roughly a minimum bandwidth of 200 Mbps for a 640×480×256 colour quality of display resolution and a 10:1 data compression. This calculated value can be met with present network bandwidths of from 100 Mbps to 1 Gbps.
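One plausible reading of the arithmetic behind the figure above, assuming 8 bits per pixel for 256 colours (a sketch, not a calculation from the specification):

```python
# Rough bandwidth estimate for 64 stations sharing one data link.
width, height, bits_per_pixel = 640, 480, 8   # 256 colours -> 8 bits/pixel
frames_per_second = 10                        # 10 Hz, i.e. 100 ms per frame
compression = 10                              # 10:1 data compression
stations = 64

bits_per_frame = width * height * bits_per_pixel
per_station_bps = bits_per_frame * frames_per_second / compression
total_mbps = per_station_bps * stations / 1e6
print(round(total_mbps, 1))  # 157.3, i.e. of the order of the 200 Mbps stated
```

Under these assumptions the requirement comes to roughly 157 Mbps, consistent with the "roughly 200 Mbps" order of magnitude quoted in the text.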
- There are several ways to reduce the bandwidth requirement. Firstly, if one is willing to sacrifice the display quality in the “discrete” real time mode of operation, a “priority scanning” technique can be used, for which the bandwidth may be as low as 6 Mbps. This requirement can easily be achieved in an existing Internet broadband environment.
- “Priority scanning” refers to the selection of only one active station for high quality display, while the rest are handled by a reduced quality algorithm to lower the bandwidth requirement. This is an acceptable arrangement because normally there is only one
receptionist 18 to handle call service or inquiries. In case viewers 28 at all sixty-four stations (viewer sites S) request service at the same time, some of the viewers 28 have to wait until service is available. By means of eye-contact between the active viewer 28 (i.e. the viewer 28 who is receiving service from the receptionist 18) and the receptionist 18, a waiting viewer 28 will note that he does not have eye-contact with the virtual receptionist. He will then realize that the receptionist 18 is serving another viewer 28, and that it is reasonable for him/her to stay calm for a while until service is available to him/her. - The active station usually has the highest quality full colour display, and it also serves as a cursor screen for the
receptionist 18 to easily select from among the other sixty-three sets of low quality black-and-white screen displays showing the non-active stations. The maximum bandwidth requirement for the active station alone is about 3 Mbps, and the remaining 3 Mbps can be shared among the remaining sixty-three stations, i.e. about 47.6 kbps per station, which is quite enough for handling a black-and-white image with pure outline contours of a human face. - Another way to reduce the bandwidth requirement is to use the “delta object separation” technique. As mentioned above, for each station (viewer site S), a static background is recorded and stored in the reference memory. This one-time job requires maximum bandwidth and takes the longest time, in order to refresh the original background view for each station. After setting up the background view, if someone approaches any station, a special algorithm activates the calculation of the bitmap changes against the background scene. The result serves to locate the head of the viewer against the capture window and to send a differential data stream, embedded with delta object information, to the control room R.
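The bandwidth split under “priority scanning” described above can be checked numerically; a sketch assuming the stated 6 Mbps total and 3 Mbps active-station allocation:

```python
# Priority scanning: one active full-colour station, the rest share the
# remainder of the channel as low-quality black-and-white previews.
total_bps = 6_000_000       # total channel bandwidth (6 Mbps)
active_bps = 3_000_000      # allocation for the single active station
passive_stations = 63       # remaining stations

per_passive_kbps = (total_bps - active_bps) / passive_stations / 1000
print(round(per_passive_kbps, 1))  # 47.6 kbps per passive station
```

This reproduces the 47.6 kbps per station figure given in the text.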
- The
receptionist 18 in the control room R may select which viewer is to be the active viewer. The receptionist 18 may first grasp an overview of all the stations (viewer sites S) by using the discrete mode of operation, identify whether someone is approaching any station, and select that particular station (viewer site S) as the active station by, e.g., rotating a control dial and pushing a button to confirm the selection. Once the active viewer is selected, the receptionist 18 may then switch the array of VDU's 42 to the integrated mode of operation. In this mode of operation, the array of VDU's 42 will combine to act as a single big display screen, with each VDU 42 displaying only a portion of the active viewer's image. In this mode of operation, eye-contact can be established between the active viewer and the receptionist 18, in a manner to be discussed below. - As shown in
FIGS. 5 and 6 , for an 8×8 VDU array, there will be a 7×7 array of video cameras 38. These cameras 38 point at the receptionist 18 and are placed in line with the image captured. The positioning of the array of VDU's 42 and the location of the receptionist 18 should be well defined to obtain a more realistic visual effect. As shown in FIG. 9 , if the viewer 28 is at the position P4 in the viewer site S, in which he looks at the virtual receptionist 30 offset sideways from the centre at an angle α, he should see exactly the same image of the receptionist 18 in the control room R as captured by the camera 38 d, which is also offset sideways from the centre of the receptionist 18 at an angle α. Similarly, if the viewer is at the position P5 in the viewer site S, in which he looks at the virtual receptionist 30 offset sideways from the centre at an angle β, he should see exactly the same image of the receptionist 18 in the control room R as captured by the camera 38 e, which is also offset sideways from the centre of the receptionist 18 at an angle β. - As shown in
FIG. 10 , the receptionist 18 sits between a blue curtain 50, acting as a backdrop, and the array of VDU's 42 and the array of video cameras 38. If the receptionist 18 sits directly facing the middle column of video cameras 38, the angles α and β in FIG. 10 will be the same. The distance D between the receptionist 18 and the array of VDU's 42 is given by the following formula: -
- where W is the width of a
VDU 42. - The “eye-contact point” is defined as the centre point between the two eyes of the
viewer 28. In the integrated mode of operation, the image of the viewer 28 captured within the capture window will be the same as displayed on the VDU array, but with the size proportionally enlarged. It is approximately a one-to-one mapping of the captured viewer's image as displayed on the VDU array. If, during the capture of the image by the camera 22 at the viewer site S, the viewer 28 moves his head, this will be correspondingly displayed on the VDU array. - Let's assume that the
receptionist 18 sits still in front of her control panel, and stays in the control room R without moving. If the viewer 28 moves his head around, with his “eye-contact point” moving within the capture window, the camera 38 in the control room R which is closest to the VDU 42 displaying the viewer's eye-contact point will be connected to the active viewer's station, while its opposite image pair of “no-eye-contact point” will be displayed in all other non-active station(s). If the receptionist 18 remains seated with her head facing directly forward, the viewer 28 will see a sideways left or right, up or down view of the receptionist 18. - If, on the other hand, the line of sight and head of the
receptionist 18 move to follow the “eye-contact point” of the active viewer 28 as displayed on the VDU array, she will then be making eye-contact with the active viewer 28: no matter where the viewer 28 moves, he can see the front view of the receptionist 18, because the receptionist 18 will then be facing and looking at the camera 38 among the VDU array which is closest to the VDU 42 displaying the “eye-contact point” of the viewer 28, and it is the image captured by this particular camera 38 which is transmitted to the VDU 24 at the active viewer site S and perceived by the viewer 28. If the line of sight of the receptionist 18 does not follow the eye-contact point of the viewer 28, the viewer 28 will only see a side face of the receptionist 18, depending on the direction in which the receptionist 18 moves her head. -
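The camera-selection rule described above can be sketched as follows. The coordinate convention and the snapping of the eye-contact point to the nearest camera on the 7×7 intersection grid are illustrative assumptions consistent with the geometry described:

```python
def select_camera(eye_x, eye_y):
    """Map an eye-contact point, given in VDU-array coordinates where
    both axes run from 0.0 to 8.0 across the 8x8 VDU wall, to the
    nearest camera.  Cameras sit on the 7x7 interior intersections,
    i.e. at integer coordinates 1..7 on each axis."""
    def nearest(p):
        # nearest interior intersection, clamped to the camera grid
        return min(7, max(1, round(p)))
    return (nearest(eye_x), nearest(eye_y))

print(select_camera(3.4, 5.8))  # (3, 6)
print(select_camera(0.2, 7.9))  # (1, 7), clamped to the edge of the grid
```

It is the image from the camera returned here that would be routed to the VDU 24 at the active viewer site.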
FIG. 11 shows the connection of the cameras 38 in the control room R with the viewer sites S. The control signal addresses any one of the video cameras in the 7×7 video camera array, depending on the result calculated by the head location detection algorithm described above. - As mentioned above, behind the
receptionist 18 in the control room R is a blue curtain 50, which helps remove the background scene from the image to be displayed on the VDU 24 at the viewer site S. This is important for rebuilding the virtual object image in the air through the semi-transparent plate 26. By using the “delta object separation” technique discussed above, the background scene stored in the reference memory can be separated from the object view, and then only the object details are transmitted for further processing. This technique helps in reducing the bandwidth requirement, because only the needed video data are transmitted. The bandwidth required for sending the video data of the receptionist 18 from the control room R to the viewer site S is the same as that for sending the video data of the viewer 28 to the array of VDU's 42 in the control room R. - In addition to “e-Receptionist”, the present display system can be used for “e-Conferencing”. The working principle of “e-Conferencing” is very similar to that of “e-Receptionist”, except that the data communication channel is a bi-directional link instead of having separate channels for data transfer. The same “eye-contact” principle is applied here. As shown in
FIG. 12 , two participants 100, 102, each at a respective remote area, are each provided with a respective camera directed towards the participant and a respective display unit 112, 114 of the construction shown in FIG. 3A and discussed above. The cameras and display units are connected via a data communication channel 116. With such an arrangement, a virtual image 102 a of the participant 102 will be displayed by the display unit 112 for perception by the participant 100. Similarly, a virtual image 100 a of the participant 100 will be displayed by the display unit 114 for perception by the participant 102. - As shown in
FIG. 13 , in an alternative e-conference situation involving four participants, a participant D, situated at a location which is geographically remote from the other three participants A, B and C, has three display units before him, each with an associated video camera 124 a, 124 b, 124 c directed towards D. The display unit 120 a and the associated video camera 124 a are connected via a data communication channel with a corresponding set of display unit and video camera before A, and similarly for participants B and C. Thus, an image of A, designated as A′, will be displayed by the display unit 120 a, and perceived by D; and similarly for the image B′ of B and the image C′ of C. - It can be seen in
FIG. 13 that D is facing the video camera 124 a which is connected with the display unit before the participant A. Thus, A will see the front view of D and can establish eye contact with D. As to B and C, since they can only see the right side face of D, as captured by the video cameras 124 b and 124 c respectively, they cannot establish eye contact with D. They will thus realize that D is not addressing either of them. -
FIG. 14 shows a more detailed schematic diagram of the arrangement of a four-party e-conference using a display system according to the present invention, in which parties A, B, C and D are each located at a respective location LA, LB, LC, LD, which are geographically remote from one another. As each party sits in his/her own respective location and views the images of his/her counterparts, it is necessary to carefully organize and position the virtual parties in order to create an effective virtual environment. - As shown in
FIG. 14 , the various display units and video cameras are connected with one another via a data communication channel 128. A possible arrangement is shown in FIG. 14 . Let's take location LD as an example. When D makes eye contact with image A′ of A as displayed by the display unit 120 DA, the front face of D will be captured by the associated video camera 124 DA, transmitted via the data communication channel 128, displayed by the display unit 120 AD, and perceived by A as image D′. In this scenario, as D is facing the video camera 124 DA, the cameras associated with the display units showing B′ and C′ can capture only side views of D. - For e-Conferences involving more members, e.g. eight members (M1, M2, . . . M8), if it is intended to simulate the sitting plan as shown in
FIG. 15A , the rule of thumb is that in each location, all the members are arranged in the same sequence around the table. Take room R3, in which M3 is physically located: virtual M4 is to M3's left, followed by virtual M5, and so on, until back to virtual M1 and subsequently virtual M2, as shown in FIG. 15B . In this connection, virtual M5 is a display unit in R3 which is connected with the video camera in room R5 which is associated with a display unit in R5 for display of the image of M3. Similarly, in room R5, in which M5 is physically located, virtual M6 is to M5's left, followed by virtual M7, and so on, back to virtual M1 and finally virtual M4 to M5's right, as shown in FIG. 15C . - The “delta object separation” technique may also be employed in e-Conference to remove the background scene of each individual meeting member, and thus to transmit only data of the object image of the member through the data communication channel. In addition, the basic data communication technique used for e-Conference may be the same as that used in 3G mobile phone technology. Instead of the small screen of a mobile phone, a bigger and modified display system may be used to create the virtual scene to achieve the special visual effect.
- A further application of a display system and method according to the present invention is the e-Theatre, which is shown schematically in
FIG. 16 . In this application of the invention, an artist 150 performs inside a control room 152, in front of a VDU array 153 (as discussed above) with a blue curtain 154 behind him/her. VDU's 156 in the VDU array 153 monitor the progress of the various scenes on the individual stages, and video cameras 158 positioned among the VDU's 156 capture the image of the artist 150, each at a different angle. Technicians may be employed to operate various panels and buttons to transmit the image of the artist 150 to various concerts at different geographically remote locations. - Let's assume that e-Theatres are held at locations TA, TB, TC and TD. The VDU's 156 in the
VDU array 153 and video cameras 158 in the control room 152 are connected with display units 160 a, 160 b, 160 c and 160 d at the respective locations TA, TB, TC and TD via a data communication channel 162. The image of the artist 150 as displayed by the respective display unit 160 a, 160 b, 160 c and 160 d is reflected by a respective inclined semi-transparent plate 164 a, 164 b, 164 c and 164 d to form a virtual image 166 a, 166 b, 166 c and 166 d as perceived by the respective audience. FIG. 16 shows the virtual image of the artist 150 for the audience 168 a at location TA. - A further possibility of the application of a display system according to the present invention is called “Stereo Television”, as shown in
FIG. 17 . An image 172 of an artist geographically remote from an area 174 is displayed by a display unit 176 and reflected by an inclined semi-transparent plate 178 with a television set 180 as background. The image (virtual artist) 182 is perceived by the audience 184 to be closer to the audience than the television set 180. - Turning now back to the
display unit 25, first discussed in relation to FIG. 3A , it is shown in FIGS. 18 and 19 that the semi-transparent plate 26 is connected with an upper surface of the dark enclosure box 32 via a hinge 33, and is thus movable to selectively open or close the box 32. The video camera 22 is attached to a free end of the plate 26, and is directed downwardly towards a viewer. The angle at which the plate 26 is inclined relative to the upper surface depends on how the VDU 24 is placed beneath the opening 34 in the box 32. The angle should be such that the image projected by the VDU 24 in space forms a reasonable figure of the target image in the viewer's line of sight. - As shown more clearly in
FIG. 19 , the semi-transparent plate 26, which acts like a display window, is attached to a desktop 35 by a slide-in roller hinge system. When the plate 26 is not in use, it is slid into a slot 37 near the hinge 33 to close the opening 34, whereby a flat desktop surface is formed for other use. - The positioning of the
VDU 24 depends on the particular application of the display system. In the case of “e-Receptionist”, as discussed above, a normal receptionist desk may be modified by providing a recess with a dark enclosure box within which is placed a VDU. When not in use, the opening of the dark enclosure box may be closed, and a real receptionist may sit across the receptionist desk for serving customers. As to “e-Conference”, the construction is similar, except that the number of recesses (and thus the number of VDU's) in the conference table will depend on the number of parties intended to be served by the system. - Turning to “e-Theatre” and “Stereo Television”, as the virtual artist may be positioned anywhere around the stage, or in front of the TV screen, the VDU should be designed to be movable up and down to adjust the viewing depth, and horizontally to adjust the location of the projected image, i.e. the virtual artist.
- As shown in
FIG. 20 , each viewer site 200, e.g. in an “e-Receptionist” application of a display system according to the present invention, includes a personal computer (PC) 202 with a display screen 204, attached with a digital video camera 206. The PC 202 is connected with a data communication network 208 (e.g. the Internet or an intranet) via a low-speed Control+Voice trunk 210, a high-speed video-out trunk 212 and a high-speed video-in trunk 214 in a Local Area Network (LAN) environment. As further shown in FIG. 21 , the video-in and video-out of the array of viewer's sites 218 are connected to a video network management unit 220 via a cross bar switch 222. Similarly, the video-in and video-out of the array of VDU's and cameras 224 in the control room are also connected to the video network management unit 220 via a cross bar switch 226. The cross bar switches 222, 226 are connected with a system control unit 228, which is connected with a control panel interface 230 and is provided with a specially designed control protocol for overall inter-operating system control. -
FIGS. 22A to 22F show various views of an alternative display unit 300 according to the present invention. A monitor 302 is inclinedly supported in a recess of a table 304. The image displayed on the monitor 302 is projected onto a semi-transparent plate 306, to form a virtual image 308 to be perceived by an onlooker/viewer. The plate 306 is movable between an in-use position, in which it is pivoted upwardly to an inclined position relative to the surface 310 of the table 304, and a not-in-use position, in which it lies flush with the surface 310 of the table 304 to form a generally continuous and flush table top surface. At one longitudinal end of the plate 306 is mounted a digital video camera 312 for capturing images of the viewer, for transmission to another VDU forming part of the display system. - As shown in
FIGS. 22A , 22C, 22E and 22F, at a second longitudinal end of the plate 306 is mounted a hemispherical support 314 which is slidably and swivellably movable relative to a row of parallel roller bars 316. There is thus formed a movable revolute joint. As shown more clearly in FIGS. 23 and 24 , the hemispherical support 314 is engaged with the plate 306 via a mounting frame 318. - As shown in
FIG. 25 , from a window-closed configuration (1), the support 314 is first lifted up above the three topmost roller bars 316 (2); the support 314 is then allowed to move down the row of roller bars 316 (3, 4, 5), thus causing the plate 306 to pivot upwardly, until it reaches the lowest point of the path of movement (6). Conversely, by moving the support 314 up the row until it rests on the three topmost roller bars 316, the plate 306 will lie generally flush with the surface 310 of the table 304. - As shown in
FIG. 26 , the plate 306 is made up of a frame 320, which is moulded from a clear plastic material to maximize optical transparency, or at least to minimize visual obstruction. A recess 322 is provided for receiving a semi-transparent film or plate 324. Data from the digital video camera 312 are transmitted by a clear plastic flat cable 326 which runs along a side of the frame 320 to the bottom part. -
FIGS. 27A and 27B are, respectively, a top view and a sectional view of the mounting chassis parts of a further alternative display unit of a display system according to the present invention. A metal chassis 350 is mounted beneath a table 352, and is configured to hold a VDU 354 with its screen 356 inclined at around 20° to the horizontal. The part of the chassis 350 facing an opening 358 is dark in colour to reduce light leakage. The chassis 350 is provided with a rectangular hole 360 for accommodating a digital video camera. -
FIG. 28 is a top perspective view showing use of the display system 300 shown in FIGS. 22A to 22F , and FIG. 29 is a side view thereof. As shown in FIG. 28 , the display unit 300 is installed in the table 304. The image displayed by the VDU 302 is reflected by the slanted semi-transparent plate 306 and perceived by a viewer 362 as a virtual image 364. -
FIG. 30 shows how the angle of inclination of the plate 306 and that of the VDU 302 are calculated. H is the height of the eye level of the viewer above the surface of the table 304; P is the horizontal distance between the eye of the viewer and the top edge of the VDU 302; φ is the angle of inclination of the screen of the VDU 302 with respect to the surface of the table 304; and θ is the angle of inclination of the plate 306 with respect to the surface of the table 304. Let's assume here that H is 0.364 m and P is 1 m. -
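The two relations implied by the stated numbers appear to be tan φ = H/P, which gives φ ≈ 20° for H = 0.364 m and P = 1 m, and θmin = 45° + φ/2, which reproduces the 55° stated below (the reflected image plane is vertical when 2θ equals 90° + φ). Both relations are reconstructions from the given values, not taken verbatim from the specification; a numerical check under these assumptions:

```python
import math

H = 0.364   # eye height above the table surface (m)
P = 1.0     # horizontal distance from eye to top edge of the VDU (m)

phi = math.degrees(math.atan(H / P))   # screen inclination angle
theta_min = 45 + phi / 2               # assumed mirror relation: the
                                       # reflected image plane stands
                                       # vertical when 2*theta - phi = 90

print(round(phi))        # 20
print(round(theta_min))  # 55
```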
- As to θ, it is found that its minimum value θmin should be:
-
- Given that φ is found to be 20°, it follows that θmin is 55°.
-
FIG. 31 shows the viewer 362 whose image is captured by a digital video camera 370. When the viewer 362 is in the centre position, the image of the viewer 362 as captured in the capture window of the video camera 370 is as shown in frame P1 in FIG. 32 ; when the viewer 362 moves to her left side, her image in the capture window of the video camera 370 is as shown in frame P2 in FIG. 32 ; and when the viewer 362 moves to her right side, her image in the capture window of the video camera 370 is as shown in frame P3 in FIG. 32 . -
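The grid-based head location used for these capture positions (cells such as D6, F6 and B6 in the following paragraphs) might be implemented as in this sketch; the capture-window normalization and the A to H column lettering are illustrative assumptions based on the examples given:

```python
def grid_cell(x, y, width, height):
    """Map a reference point (x, y) within a width x height capture
    window to a cell label on an 8x8 grid, with columns A-H running
    left to right and rows numbered 1-8, e.g. 'D6'."""
    col = min(7, int(8 * x / width))     # column index 0..7
    row = min(7, int(8 * y / height))    # row index 0..7
    return "ABCDEFGH"[col] + str(row + 1)

# A 640x480 capture window: an eye-contact point slightly left of centre
# and about two thirds of the way down maps to cell D6.
print(grid_cell(300, 330, 640, 480))  # D6
```

The resulting label is what would be transmitted to the control room as the real-time head-position datum.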
FIG. 33 shows how the head position of the viewer is determined. In particular, a simplified face recognition algorithm is used for determining the viewer's head position, and the centre point between the viewer's two eyes is calculated with respect to the grid position within the capture window. The capture window of the digital video camera 370 thus acts as a reference background against which the head position, or a reference point (e.g. the centre point between the eyes), of the viewer 362 is to be determined and identified. For example, in position P1 in FIG. 32 discussed above, the viewer is recognized as located at D6; in P2, the viewer is recognized as located at F6; and in P3, the viewer is recognized as located at B6. As the viewer moves her head around, e.g. from left to right in real time, a set of data will be recognized, running from G6, F6, E6, D6, C6, B6 to A6 eventually. The same principle applies to vertical movement of the viewer's head. Such information will be transmitted to the control room as real-time data to control the choice of camera to be connected to the display screen at the viewer's site. - For further illustration,
FIGS. 34 and 35 show a receptionist 372 sitting in a control room in front of an array of VDU's, and FIG. 36 shows schematically the connection topology of the digital cameras, in which C1+C2 is the capture window field of view of the video camera in the viewer site. In this example, when the image of the viewer is at position P1, the viewer is recognized as at D6, and the digital camera at the corresponding position (D, 6) in the array of VDU's in front of the receptionist 372 will be connected with the VDU at the viewer site; when the image of the viewer is at position P2, the viewer is recognized as at F6, and the digital camera at the corresponding position (F, 6) in the array of VDU's in front of the receptionist 372 will be connected with the VDU at the viewer site; and when the image of the viewer is at position P3, the viewer is recognized as at B6, and the digital camera at the corresponding position (B, 6) in the array of VDU's in front of the receptionist 372 will be connected with the VDU at the viewer site. - The situation may be further illustrated by
FIG. 37, which shows both the viewer site V and the control room C. In the viewer site V, the viewer 362 sits at a table 304 installed with a display unit according to the present invention, with a VDU 302 displaying the image of the receptionist 372 physically located in the control room C, which is geographically remote from the viewer site V. The image of the receptionist 372 is captured by a number of digital video cameras distributed among an array of VDU's 380 in front of the receptionist 372, and the image captured by one of these digital video cameras is transmitted via a data communication channel for display by the VDU 302 at the viewer site V. The image of the receptionist 372 as displayed by the VDU 302 is reflected by the semi-transparent plate 306 and perceived by the viewer 362 as a virtual image, i.e. a virtual receptionist 364. Similarly, the image of the
viewer 362 is captured by the digital video camera 370 installed at an upper end of the plate 306. The image of the viewer 362 as captured by the digital video camera 370 is displayed on the array of VDU's 380 in front of the receptionist 372. When the viewer 362 is at position P1 in the viewer site V, he/she is recognized as being at the D6 position, and his/her image is displayed at position P1′ in the array of VDU's 380 in the control room C. Data representing “D6” are transmitted via the data communication channel to the control system in the control room, thus activating the video camera at position (D, 6) in the array of VDU's 380. This particular video camera is then connected with the VDU 302 at the viewer site V, and it is the image of the receptionist 372 as captured by this video camera that is transmitted via the data communication channel for display by the VDU 302 at the viewer site V, as mentioned above. When the
viewer 362 moves to position P2 in the viewer site V, his/her image as captured within the capture window of the digital video camera 370 is as shown by the dotted line P2 in FIG. 36, in which the centre point between the viewer's two eyes is recognized as being at F6. The viewer's image is displayed at position P2′ in the VDU array 380. Data representing “F6” are transmitted via the data communication channel to the control system in the control room, thus activating the video camera at position (F, 6) in the array of VDU's 380. This video camera is then connected with the VDU 302 at the viewer site V, and the image of the receptionist 372 as captured by it is transmitted via the data communication channel for display by the VDU 302 at the viewer site V, as mentioned above. Similarly, when the
viewer 362 moves to position P3 in the viewer site V, his/her image as captured within the capture window of the digital video camera 370 is as shown by the dotted line P3 in FIG. 36, in which the centre point between the viewer's two eyes is recognized as being at B6. The viewer's image is displayed at position P3′ in the VDU array 380. Data representing “B6” are transmitted via the data communication channel to the control system in the control room, thus activating the video camera at position (B, 6) in the array of VDU's 380. This video camera is then connected with the VDU 302 at the viewer site V, and the image of the receptionist 372 as captured by it is transmitted via the data communication channel for display by the VDU 302 at the viewer site V, as mentioned above.
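The selection rule repeated above for P1, P2 and P3 reduces to: the transmitted cell label picks the camera at the same (column, row) position in the VDU array. A minimal sketch follows; the dictionary-based registry and the `cam_` naming are hypothetical stand-ins for the actual switching hardware.

```python
# Select the video camera in the VDU array corresponding to a head-position
# cell label received from the viewer site, e.g. "D6" -> camera at (D, 6).

def parse_cell(label):
    """Split a cell label like 'D6' into ('D', 6)."""
    return label[0], int(label[1:])

def select_camera(label, camera_registry):
    """camera_registry maps (column_letter, row_number) to a camera handle."""
    return camera_registry[parse_cell(label)]

# Illustrative registry: one camera per position in the 8x8 VDU array.
cameras = {(c, r): f"cam_{c}{r}" for c in "ABCDEFGH" for r in range(1, 9)}
```

Receiving “F6” would then select `cam_F6`, the camera to be connected with the VDU 302 at the viewer site.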
FIG. 38 shows the same viewer site V and control room C as in FIG. 37, but for the purpose of illustrating the determination of viewing angles, where W1 is the horizontal distance from the digital video camera in the VDU array 380 to the last digital video camera at one end, W2 is the horizontal distance from the digital video camera in the VDU array 380 to the last digital video camera at the other end, and D is the average distance between the array of video cameras and the receptionist 372.
- From the geometry of FIG. 38, tan α = W1/D and tan β = W2/D.
- In the case where there is an odd number of video cameras in each row and they are evenly spaced out, W1 = W2, and thus α = β. Turning now to
FIG. 39: by way of the aforesaid arrangement, when the viewer 362 moves his/her head to the left, to position P2, by an angle of α with respect to the virtual image 364 of the receptionist 372, his/her image will be as shown at P2′ in FIG. 40; and when the viewer 362 moves his/her head to the right, to position P3, by an angle of β with respect to the virtual image 364, his/her image will be as shown at P3′ in FIG. 40. When the viewer 362 is at position P2, he/she is disposed sideways relative to the video camera 370 by an angle α′, and when at position P3, by an angle β′. With a maximum viewing field angle of α+β, the maximum capture angle of the camera capture window, α′+β′, should be correlated with the angle α+β. In
FIGS. 41 and 42:
- C1+C2 is the approximate capture window field of view;
- P is the horizontal distance between the eyes of the viewer 362 and the top edge of the VDU 302;
- γ is the angle of inclination of the semi-transparent plate 306 with respect to the top surface of the table 304;
- L is the length of the plate 306; and
- L1 is the horizontal distance between the digital video camera 370 and the top edge of the VDU 302. It is assumed here that L1 ≈ L(1−cos γ).
- Given the above, the capture window field of view C1+C2 can be determined from P, γ, L and L1.
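The angular relations of FIGS. 38 to 40 can be sketched numerically. The arctangent form is reconstructed from the definitions of W1, W2 and D (it is consistent with the remark that W1 = W2 implies α = β), and the linear scaling between the capture-window angle α′+β′ and the viewing angle α+β is one simple way to realize the correlation described above, not necessarily the patent's exact method.

```python
import math

# Viewing half-angles at the receptionist side: alpha = arctan(W1 / D) and
# beta = arctan(W2 / D), reconstructed from the FIG. 38 definitions.
def viewing_angles(w1, w2, d):
    return math.atan2(w1, d), math.atan2(w2, d)

# Correlate a head angle measured in the camera capture window (range
# alpha' + beta') with the corresponding viewing angle (range alpha + beta)
# by simple linear scaling -- an assumed, illustrative correlation.
def correlate(angle_in_capture, capture_range, viewing_range):
    return angle_in_capture * viewing_range / capture_range
```

With W1 = W2, the two half-angles come out equal, matching the evenly-spaced, odd-numbered camera case noted above.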
- A simplified architecture of a 3G mobile phone is shown in
FIG. 43, as comprising a digital camera 402 connected with a video buffer (camera) 404, a display screen 406 connected with a video buffer (screen) 408, and an antenna 410 for receiving and transmitting signals for communication with other mobile phones via the communication network of a service provider. Signals captured by the digital camera 402 are stored in the video buffer (camera) 404, corresponding to a segment of memory mapped onto the system data memory. Data stored in the screen video buffer 408 are mapped onto the screen 406 for display of the content. By modification of the video memory interface, a typical 3G mobile phone architecture can be applied in a VDU array.
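The FIG. 43 buffer arrangement amounts to two segments of system data memory: the camera writes into one, and the screen is refreshed from the other, with a call moving frames between two such phones. A toy sketch, in which byte-array segments stand in for the real memory-mapped video buffers and the frame size is arbitrary:

```python
FRAME_BYTES = 16  # illustrative frame size, not a real video frame

def make_phone():
    """Two video buffer segments mapped onto one block of system data memory."""
    mem = bytearray(2 * FRAME_BYTES)
    return {
        "camera_buf": memoryview(mem)[:FRAME_BYTES],   # written by the camera
        "screen_buf": memoryview(mem)[FRAME_BYTES:],   # read by the display
    }

def transmit(sender, receiver):
    """Deliver the sender's captured frame into the receiver's screen buffer,
    as between two phones in a call (the network link is elided)."""
    receiver["screen_buf"][:] = sender["camera_buf"]
```

Here a frame written into one phone's camera buffer appears in the other phone's screen buffer; redirecting that copy is, in essence, the "modification of the video memory interface" exploited below.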
FIG. 44 shows a video wall module (VWM) forming part of a VDU array in a control room setting. A digital video camera 412 is connected with a video buffer (camera) dual-port access memory 414, which is in turn connected with an internal central processing unit (CPU) 416. A display screen 418 is connected with a video buffer (screen) dual-port access memory 420, which is also connected with the CPU 416. A video memory management unit (VMMU) 421 is connected with the CPU 416 via a video memory control bus 422, with the video buffer (camera) dual-port access memory 414 via a camera data bus 424, and with the video buffer (screen) dual-port access memory 420 via a screen data bus 426.
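The VMMU's role in this wiring is to move video pages over the camera and screen data buses between the dual-port buffers of the wall modules. A hypothetical sketch of that routing, in which plain attributes stand in for the dual-port access memories; none of this is the patent's actual implementation:

```python
class VWM:
    """Video wall module: one display/camera pair in the control-room array."""
    def __init__(self, pos):
        self.pos = pos           # (column_letter, row_number) in the wall
        self.camera_buf = None   # video buffer (camera), dual-port in FIG. 44
        self.screen_buf = None   # video buffer (screen), dual-port in FIG. 44

class VMMU:
    """Video memory management unit: routes pages between module buffers."""
    def __init__(self, modules):
        self.wall = {m.pos: m for m in modules}

    def route_to_screen(self, pos, page):
        # Screen data bus: deliver an incoming page to the VWM's display.
        self.wall[pos].screen_buf = page

    def fetch_from_camera(self, pos):
        # Camera data bus: read the page captured by the VWM's camera.
        return self.wall[pos].camera_buf
```

A page arriving from a remote site at wall position (D, 6) would be routed to that module's screen buffer, while the same module's camera buffer is read out for transmission back.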
FIG. 45 shows the use of a 3G mobile phone 428 as part of a remote site module (RSM) of a display system according to the present invention. An on-site video camera 430 is connected with a video buffer (camera) 432. A VDU 434, housed in a display unit as previously discussed, is connected with a video buffer (screen) 436 of the mobile phone 428. In addition to an internal CPU 438 for controlling the operation of the mobile phone 428, a face recognition module 440 is also provided in the operating system. As shown in
FIGS. 46 and 47, the VMMU 421 controls the video data memory flow from the RSM's (A, 1; …; H, 8) to each VWM in the control room (A, 1; …; H, 8). In the “discrete” mode of operation, the VMMU 421 sends signals to each RSM to request a page of the video image captured by its respective on-site video camera 430. Upon receipt of the request, each RSM determines whether there is any viewer in front of its camera 430. If so, the RSM replies to the VMMU 421 with the viewer head position data and the captured image of the viewer; if not, only signals representing the background image are transmitted to the VMMU 421 in reply. The VMMU 421 then directs each page from the RSM's to the corresponding VWM in the array of VDU's in the control room. The video camera associated with that VWM is also activated, capturing the face of the receptionist in the control room for transmission back to the RSM display screen. These steps are carried out for all RSM's and VWM's concurrently. The mapping in the “discrete” mode is one-to-one, operating much like a pair of 3G mobile phones in communication with each other. The master receptionist in the control room looks at the array of VDU's and can inspect each VWM display with its corresponding contents from the respective on-site video camera. By manipulating a
cursor pad 442 on a control panel 444, the master receptionist can select any one of the VWM's in the VDU array to be the active VWM. It is also possible to switch to the “integrated” mode of operation, in which all the VWM's (say, sixty-four of them) are combined to form a single big screen displaying the image from the active RSM. In this mode, eye contact can take place during communication between the master receptionist and the viewer at the active RSM. The active RSM continuously sends viewer head position data stream information to the
VMMU 421 for determining which video camera among the array of VDU's is to be activated, so that images of the receptionist captured by it are sent to the display of the active RSM. It should be understood that the above only illustrates examples whereby the present invention may be carried out, and that various modifications and/or alterations may be made thereto without departing from the spirit of the invention.
- It should also be understood that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any appropriate sub-combinations.
Claims (15)
1-42. (canceled)
43. A visual display apparatus including:
a visual display unit engaged with a support, said support including a closable opening;
a reflector movable relative to said support between a first position in which said reflector substantially closes said opening and a second position in which said opening is open and images displayed by said visual display unit are reflectable by said reflector for viewing;
wherein a first end of said reflector is slidably and swivellably movable relative to said support for movement between said first and second positions.
44. An apparatus according to claim 43 wherein said support comprises a table.
45. An apparatus according to claim 44 wherein when said reflector is in said second position, said reflector is inclined relative to a top of said table.
46. An apparatus according to claim 43 wherein when in said second position, said reflector is inclined relative to a screen of said visual display unit.
47. An apparatus according to claim 43 wherein said reflector is semi-transparent.
48. An apparatus according to claim 43 wherein said first end of said reflector includes a substantially hemispherical member which contacts said support during at least part of the movement of said reflector between said first and second positions.
49. An apparatus according to claim 43 wherein said support includes a plurality of movable members along which said first end of said reflector is movable for movement between said first and second positions.
50. An apparatus according to claim 49 wherein said plurality of movable members comprise a plurality of rotationally movable members.
51. An apparatus according to claim 50 wherein said plurality of rotationally movable members comprise a plurality of rollable bars.
52. An apparatus according to claim 51 wherein said rollable bars are substantially parallel to one another.
53. An apparatus according to claim 43 further including an image capturing device.
54. An apparatus according to claim 53 wherein said image capturing device is a digital video camera.
55. An apparatus according to claim 53 wherein said image capturing device is engaged with said reflector for simultaneous movement.
56. An apparatus according to claim 55 wherein said image capturing device is engaged with a second end of said reflector, said second end being opposite to said first end.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/567,436 US20100079576A1 (en) | 2005-06-02 | 2009-09-25 | Display system and method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/144,525 US7605837B2 (en) | 2005-06-02 | 2005-06-02 | Display system and method |
US12/567,436 US20100079576A1 (en) | 2005-06-02 | 2009-09-25 | Display system and method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/144,525 Division US7605837B2 (en) | 2005-06-02 | 2005-06-02 | Display system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100079576A1 true US20100079576A1 (en) | 2010-04-01 |
Family
ID=37493647
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/144,525 Expired - Fee Related US7605837B2 (en) | 2005-06-02 | 2005-06-02 | Display system and method |
US12/567,436 Abandoned US20100079576A1 (en) | 2005-06-02 | 2009-09-25 | Display system and method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/144,525 Expired - Fee Related US7605837B2 (en) | 2005-06-02 | 2005-06-02 | Display system and method |
Country Status (1)
Country | Link |
---|---|
US (2) | US7605837B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110304613A1 (en) * | 2010-06-11 | 2011-12-15 | Sony Ericsson Mobile Communications Ab | Autospectroscopic display device and method for operating an auto-stereoscopic display device |
US8217945B1 (en) * | 2011-09-02 | 2012-07-10 | Metric Insights, Inc. | Social annotation of a single evolving visual representation of a changing dataset |
CN105578113A (en) * | 2016-02-02 | 2016-05-11 | 北京小米移动软件有限公司 | Video communication method, device and system |
CN105657325A (en) * | 2016-02-02 | 2016-06-08 | 北京小米移动软件有限公司 | Method, apparatus and system for video communication |
CN105744206A (en) * | 2016-02-02 | 2016-07-06 | 北京小米移动软件有限公司 | Video communication method, device and system |
US9955120B2 (en) * | 2016-02-12 | 2018-04-24 | Sony Interactive Entertainment LLC | Multiuser telepresence interaction |
Families Citing this family (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6760772B2 (en) | 2000-12-15 | 2004-07-06 | Qualcomm, Inc. | Generating and implementing a communication protocol and interface for high data rate signal transfer |
US8812706B1 (en) | 2001-09-06 | 2014-08-19 | Qualcomm Incorporated | Method and apparatus for compensating for mismatched delays in signals of a mobile display interface (MDDI) system |
ATE517500T1 (en) | 2003-06-02 | 2011-08-15 | Qualcomm Inc | GENERATION AND IMPLEMENTATION OF A SIGNAL PROTOCOL AND INTERFACE FOR HIGHER DATA RATES |
EP2363992A1 (en) | 2003-08-13 | 2011-09-07 | Qualcomm Incorporated | A signal interface for higher data rates |
AU2004303402A1 (en) | 2003-09-10 | 2005-03-24 | Qualcomm Incorporated | High data rate interface |
EP1680904A1 (en) | 2003-10-15 | 2006-07-19 | QUALCOMM Incorporated | High data rate interface |
KR100827573B1 (en) | 2003-10-29 | 2008-05-07 | 퀄컴 인코포레이티드 | High data rate interface |
US8606946B2 (en) | 2003-11-12 | 2013-12-10 | Qualcomm Incorporated | Method, system and computer program for driving a data signal in data interface communication data link |
KR20060096161A (en) | 2003-11-25 | 2006-09-07 | 콸콤 인코포레이티드 | High data rate interface with improved link synchronization |
EP2247070B1 (en) | 2003-12-08 | 2013-09-25 | QUALCOMM Incorporated | High data rate interface with improved link synchronization |
EP1733537A1 (en) | 2004-03-10 | 2006-12-20 | Qualcomm, Incorporated | High data rate interface apparatus and method |
CA2560067C (en) | 2004-03-17 | 2011-08-23 | Qualcomm Incorporated | High data rate interface apparatus and method |
JP5032301B2 (en) | 2004-03-24 | 2012-09-26 | クゥアルコム・インコーポレイテッド | High data rate interface apparatus and method |
US8650304B2 (en) | 2004-06-04 | 2014-02-11 | Qualcomm Incorporated | Determining a pre skew and post skew calibration data rate in a mobile display digital interface (MDDI) communication system |
CA2569106C (en) | 2004-06-04 | 2013-05-21 | Qualcomm Incorporated | High data rate interface apparatus and method |
US7865834B1 (en) * | 2004-06-25 | 2011-01-04 | Apple Inc. | Multi-way video conferencing user interface |
US8692838B2 (en) | 2004-11-24 | 2014-04-08 | Qualcomm Incorporated | Methods and systems for updating a buffer |
US8873584B2 (en) | 2004-11-24 | 2014-10-28 | Qualcomm Incorporated | Digital data interface device |
US8699330B2 (en) | 2004-11-24 | 2014-04-15 | Qualcomm Incorporated | Systems and methods for digital data transmission rate control |
US8723705B2 (en) | 2004-11-24 | 2014-05-13 | Qualcomm Incorporated | Low output skew double data rate serial encoder |
US8667363B2 (en) | 2004-11-24 | 2014-03-04 | Qualcomm Incorporated | Systems and methods for implementing cyclic redundancy checks |
US8539119B2 (en) | 2004-11-24 | 2013-09-17 | Qualcomm Incorporated | Methods and apparatus for exchanging messages having a digital data interface device message format |
EP1927009A1 (en) * | 2005-09-22 | 2008-06-04 | Wisconsin Alumni Research Foundation | Reconstruction of images of the beating heart using a highly constrained backprojection |
US7701930B2 (en) * | 2005-10-25 | 2010-04-20 | Ittiam Systems (P) Ltd. | Technique for providing virtual N-way video conferencing to IP videophones |
US8692839B2 (en) | 2005-11-23 | 2014-04-08 | Qualcomm Incorporated | Methods and systems for updating a buffer |
US8730069B2 (en) | 2005-11-23 | 2014-05-20 | Qualcomm Incorporated | Double data rate serial encoder |
KR101249988B1 (en) * | 2006-01-27 | 2013-04-01 | 삼성전자주식회사 | Apparatus and method for displaying image according to the position of user |
US20070250567A1 (en) * | 2006-04-20 | 2007-10-25 | Graham Philip R | System and method for controlling a telepresence system |
US7532232B2 (en) * | 2006-04-20 | 2009-05-12 | Cisco Technology, Inc. | System and method for single action initiation of a video conference |
GB0615433D0 (en) * | 2006-08-04 | 2006-09-13 | Univ York | Display systems |
US8463361B2 (en) | 2007-05-24 | 2013-06-11 | Lifewave, Inc. | System and method for non-invasive instantaneous and continuous measurement of cardiac chamber volume |
CN101874242A (en) * | 2007-10-12 | 2010-10-27 | 宝利通公司 | Integrated system for telepresence videoconferencing |
US8379076B2 (en) * | 2008-01-07 | 2013-02-19 | Cisco Technology, Inc. | System and method for displaying a multipoint videoconference |
US20100085280A1 (en) * | 2008-10-03 | 2010-04-08 | Lambert David K | Display system and method therefor |
US9002427B2 (en) | 2009-03-30 | 2015-04-07 | Lifewave Biomedical, Inc. | Apparatus and method for continuous noninvasive measurement of respiratory function and events |
JP2012523788A (en) * | 2009-04-13 | 2012-10-04 | ショースキャン デジタル エルエルシー | Movie shooting and projection method and apparatus |
US20100274145A1 (en) | 2009-04-22 | 2010-10-28 | Tupin Jr Joe Paul | Fetal monitoring device and methods |
KR20110052998A (en) * | 2009-11-13 | 2011-05-19 | 삼성전자주식회사 | Apparatus and method for providing user interface in a device |
US8878773B1 (en) | 2010-05-24 | 2014-11-04 | Amazon Technologies, Inc. | Determining relative motion as input |
CN103607971B (en) * | 2011-07-07 | 2016-08-31 | 奥林巴斯株式会社 | Medical master slave manipulator |
US10088924B1 (en) | 2011-08-04 | 2018-10-02 | Amazon Technologies, Inc. | Overcoming motion effects in gesture recognition |
US8683054B1 (en) * | 2011-08-23 | 2014-03-25 | Amazon Technologies, Inc. | Collaboration of device resources |
US9223415B1 (en) | 2012-01-17 | 2015-12-29 | Amazon Technologies, Inc. | Managing resource usage for task performance |
US9769419B2 (en) | 2015-09-30 | 2017-09-19 | Cisco Technology, Inc. | Camera system for video conference endpoints |
US12107907B2 (en) * | 2020-08-28 | 2024-10-01 | Tmrw Foundation Ip S.Àr.L. | System and method enabling interactions in virtual environments with virtual presence |
US11798204B2 (en) * | 2022-03-02 | 2023-10-24 | Qualcomm Incorporated | Systems and methods of image processing based on gaze detection |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4758887A (en) * | 1985-10-09 | 1988-07-19 | Weyel Kg | Conference table |
US5500671A (en) * | 1994-10-25 | 1996-03-19 | At&T Corp. | Video conference system and method of providing parallax correction and a sense of presence |
US5777665A (en) * | 1995-09-20 | 1998-07-07 | Videotronic Systems | Image blocking teleconferencing eye contact terminal |
US5953052A (en) * | 1995-09-20 | 1999-09-14 | Videotronic Systems | Reflected display teleconferencing eye contact terminal |
US6285392B1 (en) * | 1998-11-30 | 2001-09-04 | Nec Corporation | Multi-site television conference system and central control apparatus and conference terminal for use with the system |
US20010038412A1 (en) * | 1995-09-20 | 2001-11-08 | Mcnelley Steve H. | Integrated reflected display teleconferencing eye contact terminal |
US20040165060A1 (en) * | 1995-09-20 | 2004-08-26 | Mcnelley Steve H. | Versatile teleconferencing eye contact terminal |
US7136090B1 (en) * | 1999-08-10 | 2006-11-14 | Teleportec, Inc. | Communications system |
US7460150B1 (en) * | 2005-03-14 | 2008-12-02 | Avaya Inc. | Using gaze detection to determine an area of interest within a scene |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4334814A1 (en) | 1993-10-13 | 1995-04-20 | Sel Alcatel Ag | Device for conducting a television conference |
- 2005-06-02: US 11/144,525 filed; granted as US 7,605,837 B2 (status: Expired - Fee Related)
- 2009-09-25: US 12/567,436 filed; published as US 2010/0079576 A1 (status: Abandoned)
Also Published As
Publication number | Publication date |
---|---|
US7605837B2 (en) | 2009-10-20 |
US20060274031A1 (en) | 2006-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7605837B2 (en) | Display system and method | |
US10827150B2 (en) | System and methods for facilitating virtual presence | |
US6889120B2 (en) | Mutually-immersive mobile telepresence with gaze and eye contact preservation | |
US6208373B1 (en) | Method and apparatus for enabling a videoconferencing participant to appear focused on camera to corresponding users | |
US8199185B2 (en) | Reflected camera image eye contact terminal | |
US7916165B2 (en) | Systems and method for enhancing teleconferencing collaboration | |
US7209160B2 (en) | Versatile teleconferencing eye contact terminal | |
US7855726B2 (en) | Apparatus and method for presenting audio in a video teleconference | |
US7593546B2 (en) | Telepresence system with simultaneous automatic preservation of user height, perspective, and vertical gaze | |
US20120081503A1 (en) | Immersive video conference system | |
US20070002130A1 (en) | Method and apparatus for maintaining eye contact during person-to-person video telecommunication | |
KR100904505B1 (en) | Communications system | |
US9270933B1 (en) | System and method for face-to-face video communication | |
JP3289730B2 (en) | I / O device for image communication | |
JP2002300602A (en) | Window-type image pickup/display device and two-way communication method using the same | |
Jouppi et al. | Bireality: mutually-immersive telepresence | |
JPH0832948A (en) | Line of sight coincidental video conference system | |
CN108427195A (en) | A kind of information processing method and equipment based on augmented reality | |
TWI700933B (en) | Video communication device and method for connecting video communivation to other device | |
WO2020163518A1 (en) | Systems, algorithms, and designs for see-through experiences with wide-angle cameras | |
JP3139100B2 (en) | Multipoint image communication terminal device and multipoint interactive system | |
JP2000270306A (en) | Image processing unit, its method and served medium | |
JPH07111638A (en) | Line of sight coincidence type display image pickup device for intra multi-spot video telephone | |
JPH04150684A (en) | Display/image pickup device | |
CZ2009361A3 (en) | Video-conference setting system for communication of remote groups and method of using thereof for communication of remote groups |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |