WO2012053940A2 - Meeting camera - Google Patents


Info

Publication number
WO2012053940A2
WO2012053940A2 · PCT/RU2011/000817
Authority
WO
WIPO (PCT)
Prior art keywords
cameras
image
meeting
camera
virtual camera
Prior art date
Application number
PCT/RU2011/000817
Other languages
French (fr)
Other versions
WO2012053940A3 (en)
Inventor
Dmitry Alekseevich Gorilovsky
Original Assignee
Rawllin International Inc
Priority date
Filing date
Publication date
Priority claimed from GBGB1017776.4A external-priority patent/GB201017776D0/en
Priority claimed from GBGB1020999.7A external-priority patent/GB201020999D0/en
Application filed by Rawllin International Inc filed Critical Rawllin International Inc
Priority to PCT/RU2012/000027 priority Critical patent/WO2012099505A1/en
Publication of WO2012053940A2 publication Critical patent/WO2012053940A2/en
Publication of WO2012053940A3 publication Critical patent/WO2012053940A3/en
Priority to TW101134923A priority patent/TW201332336A/en

Classifications

    • H  ELECTRICITY
    • H04  ELECTRIC COMMUNICATION TECHNIQUE
    • H04N  PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00  Television systems
    • H04N 7/14  Systems for two-way working
    • H04N 7/141  Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142  Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N 7/144  Constructional details of the terminal equipment in which the camera and display are on the same optical axis, e.g. optically multiplexing the camera and display for eye-to-eye contact

Definitions

  • the invention relates to a meeting camera device with two or more cameras, the device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
  • the invention further relates to systems comprising such devices.
  • In FIG. 1, an image of a first person is shown on the screen of a second person.
  • the image of the first person is seen from off-centre, because the first person is looking at their screen centre while being filmed by an off-centre camera.
  • In Figure 1 we see the back of the head of a second person who is viewing their (the second person's) screen. The second person is being filmed by an off-centre camera.
  • the image of the second person provided to the first person will be off-centre, in common with the image of the first person supplied to the second person.
  • an image processing system comprising: a system of n fixed real cameras arranged in such a way that their individual fields of view merge so as to form a single wide-angle field of view for recording a panoramic scene; an image construction system simulating a mobile virtual camera continuously scanning the panoramic scene to furnish a target sub-image corresponding to an arbitrary section of the wide-angle field of view and constructed from adjacent source images furnished by the n real cameras.
  • a meeting camera device including a screen and two or more cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
  • the meeting camera device may be operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
  • the meeting camera device may be one wherein the viewer is located in an off-centre position with respect to the screen.
  • the meeting camera device may be one wherein virtual camera placement is accomplished by a tracking system tracking the viewer and implementing a tracking algorithm.
  • the meeting camera device may be one wherein the tracking system tracks a viewer's eye or eyes, and the virtual camera is centred on an eye of the viewer.
  • the meeting camera device may be one wherein the virtual camera is centred on a right eye of the viewer.
  • the meeting camera device may be one wherein the tracking system is operable to record its tracking statistics for the tracking of a user's eye or eyes.
  • the meeting camera device may be one wherein the tracking system is operable to record its tracking of a user's eye or eyes to provide data which if corresponding to a predefined sequence will unlock the device.
  • the meeting camera device may be one wherein the virtual camera is situated in the centre of the screen.
  • the meeting camera device may be one wherein parallax information is used in constructing the virtual camera image.
  • the meeting camera device may be one wherein two cameras are arranged with respect to the viewer such that they each capture a significantly different, yet still broadly similar, image.
  • the meeting camera device may be one wherein where two images differ significantly, graphical modelling techniques are used to generate a virtual camera image.
  • the meeting camera device may be one wherein images taken from different cameras of a face and head are projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation being imaged from in front of the face, so as to provide a virtual camera image from in front of the face.
  • the meeting camera device may be one wherein optic axes of the cameras meet exactly or approximately at a position which is the position in which a subject is located in an ideal or reference case.
  • the meeting camera device may be one wherein optic axes of the cameras meet at a point in front of the centre of the screen.
  • the meeting camera device may be one comprising exactly two cameras.
  • the meeting camera device may be one wherein cameras are placed on either side of the device screen.
  • the meeting camera device may be one comprising three cameras.
  • the meeting camera device may be one wherein the cameras are arranged on the vertices of a triangle.
  • the meeting camera device may be one wherein parallax information is available along orthogonal directions.
  • the meeting camera device may be one comprising four cameras.
  • the meeting camera device may be one wherein the cameras are arranged on the vertices of a quadrilateral.
  • the meeting camera device may be one wherein parallax information is available along orthogonal directions.
  • the meeting camera device may be one wherein an image taken by the virtual camera is shown to another party.
  • the meeting camera device may be one wherein the device provides for seeing eye-to-eye when video conferencing.
  • the meeting camera device may be one wherein a viewer can approach a large panel display with continuous video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact.
  • the meeting camera device may be one wherein the device comprises an integral microphone and speaker.
  • the meeting camera device may be one wherein an image from the virtual camera has a selectable zoom level.
  • the meeting camera device may be one wherein an image from the virtual camera has selectable tilt or selectable pan, or both selectable tilt and selectable pan.
  • the meeting camera device may be one wherein the virtual camera is operable to correct for unwanted zoom present in the image of a user.
  • the meeting camera device may be one wherein the virtual camera is operable to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user.
  • the meeting camera device may be one wherein the device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
  • the meeting camera device may be one wherein a device display is a liquid crystal display, a plasma screen display, a cathode ray tube, an organic light emitting diode (OLED) display, or a bistable display.
  • the meeting camera device may be one wherein the device is a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
  • the meeting camera device may be one wherein the virtual camera provides video output.
  • the meeting camera device may be one wherein the virtual camera provides a photograph.
  • the meeting camera device may be one wherein the device has a profile of a triangle.
  • the meeting camera device may be one wherein the point is about 40 cm from the screen.
  • the meeting camera device may be one wherein the point is about 2 m from the screen.
  • the meeting camera device may be one wherein the virtual camera is implemented so as to provide a two dimensional image.
  • the meeting camera device may be one wherein the virtual camera is implemented so as to provide a three dimensional image.
  • the meeting camera device may be one wherein the three dimensional image is for display on a three dimensional display.
  • the meeting camera device may be one wherein the three dimensional display is an autostereoscopic display, or a holographic display.
  • the meeting camera device may be one wherein the device comprises at least two microphones.
  • the meeting camera device may be one wherein the device is operable to identify at least one sound source from the sound input received at the two microphones.
  • the meeting camera device may be one wherein the device is operable to provide to a receiving device a selectable option to transmit only the sound from the identified sound source.
  • the meeting camera device may be one wherein upon selection at the receiving device of the option to transmit only the sound from the identified sound source, the meeting camera device transmits only the sound from the identified sound source.
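As an illustrative sketch of how a device with two microphones might identify a sound source's direction, the inter-microphone time difference of arrival (TDOA) can be estimated by cross-correlation and converted to a bearing. The patent does not specify an algorithm; the function names and the cross-correlation approach below are assumptions:

```python
import numpy as np

def estimate_delay_samples(sig_a, sig_b):
    """Delay of sig_b relative to sig_a, in samples, via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

def bearing_from_delay(delay_samples, sample_rate_hz, mic_spacing_m,
                       speed_of_sound=343.0):
    """Bearing (radians) of the source relative to the broadside of the
    two-microphone axis, derived from the inter-microphone delay."""
    path_diff = delay_samples / sample_rate_hz * speed_of_sound
    # Clamp before arcsin to guard against noise pushing the ratio past 1.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))
    return float(np.arcsin(ratio))
```

Once a bearing is known, a beamformer or simple gating can pass only the sound arriving from that direction, supporting the selectable "transmit only the identified source" option described above.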
  • a meeting camera system including a device comprising a screen and two or more cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
  • the meeting camera system may be one comprising a remote computer, wherein images from the cameras are transmitted to the remote computer, at which the virtual camera image is generated.
  • the meeting camera system may be one comprising the cloud, wherein images from the cameras are transmitted to the cloud, at which the virtual camera image is generated.
  • the meeting camera system may be one comprising a different display device, wherein images from the cameras are transmitted to the different display device, and wherein the virtual camera image is generated and displayed at the different display device.
  • the meeting camera system may be one wherein the device comprises exactly two cameras.
  • the meeting camera system may be one wherein the device comprises three cameras.
  • the meeting camera system may be one wherein the virtual camera is implemented so as to provide a three dimensional image.
  • a meeting camera device system comprising two devices, each device including a screen and two or more cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device.
  • the meeting camera device system may be one wherein provision of a target sub-image includes transmission via a mobile phone network, or transmission via the internet, or transmission via a network, or transmission via a wired network, or transmission via a wireless network.
  • the meeting camera device system may be one wherein each device includes three cameras.
  • the meeting camera device system may be one wherein each virtual camera is implemented so as to provide a three dimensional image.
  • a meeting camera device system comprising two devices, each device including a screen and two or more cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device transmitting its camera images to a computer, the computer including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device receives a target sub-image based on data transmitted by the other device to the computer.
  • the meeting camera device system may be one wherein provision of a target sub-image includes transmission via a mobile phone network, or transmission via the internet, or transmission via a network, or transmission via a wired network, or transmission via a wireless network.
  • the meeting camera device system may be one wherein each device includes exactly two cameras.
  • the meeting camera device system may be one wherein each device includes three cameras.
  • the meeting camera device system may be one wherein each virtual camera is implemented so as to provide a three dimensional image.
  • the meeting camera device system may be one wherein the computer is in the Cloud.
  • Figure 1 shows a system in which there is displayed an image of a first person looking at their screen centre, being filmed by their off-centre camera, and being shown on a second person's screen.
  • Figure 2 shows a device comprising a screen and two off-centre cameras. The cameras are arranged in a major face of the device.
  • Figure 3 shows a system in which there is displayed an image of a first person looking at their screen centre, being filmed by their two off-centre cameras, which are part of a virtual camera, and being shown on a second person's screen by the virtual camera.
  • Figure 4 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their two off-centre cameras, which are part of a virtual camera, and being shown on a second person's screen by the virtual camera.
  • Figure 5 shows a device comprising a screen and three off-centre cameras. The cameras are arranged in a major face of the device.
  • Figure 6 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras, which are part of a virtual camera, and being shown on a second person's screen by the virtual camera.
  • Figure 7 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras, which are part of a virtual camera, and being shown on a second person's screen by the virtual camera.
  • Figure 8 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras, which are part of a virtual camera, and being shown on a second person's screen by the virtual camera.
  • Figure 9 shows a device comprising a screen and four off-centre cameras. The cameras are arranged in a major face of the device.
  • Figure 10 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their four off-centre cameras, which are part of a virtual camera, and being shown on a second person's screen by the virtual camera.
  • Figure 11 shows an example of a customer proposition.
  • Figure 12 shows an example of a smartphone specification.
  • Figure 13 shows an example of a mobile device industrial design.
  • Figure 14 shows an example of a mobile device industrial design.
  • Figure 15 shows an example of a mobile phone hardware specification.
  • Figure 16 shows examples of chipsets for mobile devices.
  • Figure 17 shows an example specification for a back screen of a mobile device.
  • Figure 18 shows an example software architecture of a mobile device.
  • Figure 19 shows examples of aspects of an example mobile device.
  • Figure 20 shows examples of an applications concept for a mobile device.
  • Figure 21 shows examples of applications for a mobile device.
  • Figure 22 shows further examples of applications for a mobile device.
  • Figure 23 shows an example of a mobile device in which the microphone is placed in a hole in the body of the mobile device, in the SIM card's eject hole.
  • a 'Meet Camera' which provides for seeing eye-to-eye when video conferencing.
  • An example of a result of using the virtual camera described above in relation to Figure 2 is shown in Figure 3.
  • an image of a first person is shown on the screen of a second person.
  • the image of the first person is seen from the centre, because the first person is looking at their screen centre while being filmed by two off-centre cameras, such as those shown in Figure 2, from which a virtual camera located at or near the screen centre has been created.
  • In Figure 3 we see the back of the head of a second person who is viewing their (the second person's) screen.
  • the second person is being filmed by two off-centre cameras, from which a virtual camera at or near the screen centre is created.
  • the image of the second person provided to the first person will be seen from at or near the screen centre, in common with the image of the first person supplied to the second person.
  • A benefit of the 'Meet Camera' is that one can approach a large panel display with always-on video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact.
  • the face displayed by the virtual camera can be placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen.
  • This placement can be accomplished by a tracking system implementing a tracking algorithm.
  • the tracking system may track an eye or the eyes of a viewer.
  • An example is shown in Figure 4. In Figure 4, an image of a first person is shown on the screen of a second person.
  • the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), similar to the off-centre position of the second person shown in Fig. 4. This is because the first person is looking at their screen centre while being filmed by two off-centre cameras, such as those shown in Figure 2, from which a virtual camera has been created.
  • the virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera.
  • In Figure 4 we see the back of the head of a second person who is viewing their (the second person's) screen.
  • the second person is being filmed by two off-centre cameras; the second person is in an off-centre position.
  • a second virtual camera for the second person is arranged so as to provide an image of the second person as if the second person were looking directly at the second virtual camera.
  • the image of the second person provided to the first person is a front view of the second person, centrally located on the device screen, in common with the image of the first person supplied to the second person.
  • the tracking system may record its tracking statistics for the tracking of a user's eye or eyes. Such statistics could be useful in determining the user's degree of attentiveness, or for measuring the effectiveness of advertising.
  • the tracking system may be useful in implementing a form of user password for unlocking a device. For example, a user may look at points on the device in a sequence, and this will unlock the device.
  • Tracking system output may be used to control the user interface. For example, high priority information may be presented on a part of the screen that the tracking system indicates the viewer is looking at.
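The eye-sequence unlock described above reduces to comparing a recorded sequence of fixation points against a stored secret sequence. A minimal sketch, assuming normalised screen coordinates and a simple per-point distance test (neither of which is specified in the patent):

```python
def gaze_sequence_unlocks(observed_points, secret_sequence, tolerance=0.05):
    """True if the observed fixation points match the stored secret
    sequence, point for point, within a tolerance given in normalised
    screen coordinates (0..1 on each axis)."""
    if len(observed_points) != len(secret_sequence):
        return False
    for (ox, oy), (sx, sy) in zip(observed_points, secret_sequence):
        # Reject as soon as one fixation strays too far from its target.
        if ((ox - sx) ** 2 + (oy - sy) ** 2) ** 0.5 > tolerance:
            return False
    return True
```

A real implementation would also need to segment raw gaze data into fixations and tolerate timing variation; those steps are omitted here.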
  • the virtual camera may be implemented with respect to a display device which displays an image from the virtual camera, or with respect to a display device which obtains an image for display on another display device.
  • the display device with respect to which the virtual image is captured, or on which the virtual image is displayed may be a mobile phone display, a laptop computer display, a desktop monitor display, a television display, or a large screen display device.
  • the display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
  • the virtual camera may be implemented with respect to a device which captures images for use in generating the virtual camera image, where that device is a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
  • Figure 2 shows a particular arrangement of two cameras on a device.
  • two cameras used to generate a virtual camera may be arranged in many ways. It is preferable that the two cameras be arranged with respect to the individual being filmed such that they each capture a significantly different, yet still broadly similar, image.
  • This enables the virtual camera algorithm to combine the two images obtained by the two cameras such as to generate an image as if it had been obtained from a different location.
  • this process becomes less useful if the two images do not differ significantly, i.e. if the two cameras are located in very similar positions. Those skilled in the art would appreciate that this process becomes less reliable if the two images differ too greatly, so that they cannot be readily combined.
  • graphical modelling techniques may be used to generate a virtual camera image.
  • images taken from two different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head. That three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
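One common way to realise this kind of view synthesis is to back-project each source pixel into 3-D using an estimated depth (for example, from the head-shaped model above), then re-project it into the virtual camera. The following point-splatting sketch assumes a pinhole model, shared intrinsics and a known per-pixel depth; the patent does not prescribe this method, and a practical system would also mesh, z-buffer and fill holes:

```python
import numpy as np

def reproject_to_virtual_view(image, depth, K, R, t, out_shape):
    """Forward-warp a real camera image into a virtual camera with the
    same intrinsics K, related to the source camera by rotation R and
    translation t. depth[v, u] is the scene depth at each source pixel."""
    h, w = image.shape[:2]
    out = np.zeros(out_shape + image.shape[2:], dtype=image.dtype)
    K_inv = np.linalg.inv(K)
    for v in range(h):
        for u in range(w):
            # Back-project pixel (u, v) to a 3-D point in the source frame.
            p = depth[v, u] * (K_inv @ np.array([u, v, 1.0]))
            # Express the point in the virtual camera frame and project.
            q = K @ (R @ p + t)
            if q[2] <= 0:
                continue  # point falls behind the virtual camera
            u2, v2 = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
            if 0 <= u2 < out_shape[1] and 0 <= v2 < out_shape[0]:
                out[v2, u2] = image[v, u]
    return out
```

With the identity rotation and zero translation the virtual camera coincides with the real one and the image passes through unchanged, which provides a simple sanity check.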
  • This process can be extended to three, four or more cameras.
  • three cameras, four cameras, or more than four cameras can be arranged so as to generate a virtual camera.
  • three cameras may be arranged on a device as shown in Figure 5.
  • parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device (e.g. the device of Figure 5), which is useful when generating an image from a virtual camera, as would be clear to those skilled in the art.
  • Parallax information may be used in constructing the virtual camera image.
  • Three non-collinear cameras are useful when the person being filmed is located off-centre, because they may be off-centre not just as shown in Figure 4, i.e. substantially in the direction parallel to the line passing through the two cameras of Figure 4, but also along the direction orthogonal to that line, i.e. too high or too low with respect to the off-centre position in Figure 4.
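Parallax between two cameras relates to subject depth through the standard pinhole-stereo relation Z = f·B/d, and with a third, non-collinear camera the same relation applies along a second, orthogonal baseline, resolving vertical as well as horizontal displacement. A minimal illustration (the pinhole model and the example numbers are assumptions, not taken from the patent):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d, with focal length f in
    pixels, baseline B in metres and disparity d in pixels. The same
    relation applies per baseline in a non-collinear arrangement."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 800-pixel focal length and a 10 cm baseline, a 20-pixel disparity places the subject 4 m from the cameras; larger disparities correspond to nearer subjects.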
  • Figure 6 shows an example in which the second person is off-centre along the orthogonal direction to the line passing through the two cameras in Figure 4 i.e. too high or too low (in this example, too low) with respect to their corresponding position in Figure 4.
  • an image of a first person is shown on the screen of a second person. The image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person in Figure 6. This is because the first person is looking at their screen centre while being filmed by three off-centre cameras, such as those shown in Figure 5, from which a virtual camera has been created.
  • the virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera.
  • In Figure 6 we see the back of the head of a second person who is viewing their (the second person's) screen.
  • the second person is being filmed by three off-centre cameras; the second person is in an off-centre position, which differs from the off-centre position shown in Figure 4.
  • a second virtual camera for the second person is arranged so as to provide an image of the second person as if the second person were looking directly at the second virtual camera.
  • the image of the second person provided to the first person is a front view of the second person, centrally located on the device screen, in common with the image of the first person supplied to the second person.
  • Figure 7 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art.
  • Parallax information may be used in constructing the virtual camera image.
  • the three cameras are arranged on the vertices of an equilateral triangle.
  • the device has the profile of an equilateral triangle, although this is not necessary in order for the three cameras to be arranged on an equilateral triangle: the device profile could be another shape such as rectangular, for example.
  • An equilateral triangle arrangement of cameras is useful in generating parallax information, such as when the user is in an off-centre position.
  • Parallax information may be used in constructing the virtual camera image.
  • the three cameras may be arranged on the vertices of an isosceles triangle, a right angled triangle, or a scalene triangle.
  • the three cameras may be arranged on the vertices of a triangle.
  • In Figure 7, the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
  • Figure 8 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art.
  • Parallax information may be used in constructing the virtual camera image.
  • the three cameras are arranged on the vertices of a triangle, such as an equilateral triangle.
  • the device has the profile of a rectangle.
  • a triangular arrangement (for example, on an equilateral triangle) of cameras is useful in generating parallax information, such as when the user is in an off-centre position.
  • Parallax information may be used in constructing the virtual camera image.
  • the three cameras may be arranged on the vertices of an isosceles triangle, a right angled triangle, or a scalene triangle.
  • the three cameras may be arranged on the vertices of a triangle.
  • the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
  • four cameras may be arranged on a device as shown in Figure 9, which shows four cameras each near the vertices of a device with a rectangular profile. Because the cameras of Figure 9 are not collinear, parallax information is available along orthogonal directions, which is useful when generating an image from a virtual camera, as would be clear to those skilled in the art. Parallax information may be used in constructing the virtual camera image.
  • Figure 10 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art.
  • Parallax information may be used in constructing the virtual camera image.
  • the four cameras are arranged on the vertices of a rectangle.
  • the device has the profile of a rectangle.
  • a quadrilateral arrangement (for example, on a rectangle) of cameras is useful in generating parallax information, such as when the user is in an off-centre position.
  • Parallax information may be used in constructing the virtual camera image.
  • the four cameras may be arranged on the vertices of a square, a kite, or a parallelogram.
  • the four cameras may be arranged on the vertices of a quadrilateral.
  • the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
  • the user may be in an off-centre position because they move with respect to a fixed device, or because the device is not fixed (e.g. it is handheld) and the device moves, tilts or pans with respect to the user.
  • both the user and the device may move, e.g. a moving user using a handheld device, which may tilt or pan.
  • the device which provides a virtual camera may also provide a microphone and speaker, so that a user of the device can be in voice communication with another user of another device with a microphone and speaker.
  • the virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras.
  • the mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras.
  • the view generated by the virtual camera may be displayed on a display.
  • the image from the mobile virtual camera may have a selectable zoom level.
  • the image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
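Supplying a target sub-image with selectable zoom, tilt and pan can be sketched as a parameterised crop of the combined field of view, with resampling. The function below is a dependency-free illustration using nearest-neighbour sampling; the parameterisation (pan and tilt in [0, 1], zoom of at least 1) is an assumption, not the patent's method:

```python
def target_sub_image(panorama, pan, tilt, zoom, out_w, out_h):
    """Extract a virtual camera's target sub-image from a stitched
    panorama, given as a 2-D list of rows of pixel values. pan and tilt
    in [0, 1] select the crop centre; zoom >= 1 shrinks the crop."""
    src_h, src_w = len(panorama), len(panorama[0])
    crop_w, crop_h = src_w / zoom, src_h / zoom
    # Centre of the crop window, swept across the panorama by pan/tilt.
    cx = crop_w / 2 + pan * (src_w - crop_w)
    cy = crop_h / 2 + tilt * (src_h - crop_h)
    out = []
    for j in range(out_h):
        row = []
        for i in range(out_w):
            # Map each output pixel back into panorama coordinates.
            u = cx - crop_w / 2 + (i + 0.5) * crop_w / out_w
            v = cy - crop_h / 2 + (j + 0.5) * crop_h / out_h
            row.append(panorama[min(src_h - 1, int(v))][min(src_w - 1, int(u))])
        out.append(row)
    return out
```

At zoom 1 the crop covers the whole field of view and pan/tilt have no effect; at zoom 2 with pan and tilt at 0, the top-left quadrant is returned.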
  • the virtual camera may be situated so as to provide the view seen from a particular eye of the user.
  • the eye may be a right eye or a left eye.
  • the right eye is the preferred eye.
  • the virtual camera may provide video output.
  • the virtual camera may provide a photograph.
  • the different cameras may supply source images with different luminances. Accordingly, at the boundary between the different camera images, a boundary line may appear, across which the image brightness is seen to fall or rise relatively abruptly. Accordingly, the luminance difference between different source images which form part of the target image must be corrected, so as to provide an image which is acceptably free of one or more boundary lines to a user who views the target image. Correction may be implemented as described in US5,650,814, which is incorporated by reference, or by other methods known to those skilled in the art.
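A simple way to suppress such a boundary line is to match the mean luminance of the two source strips over their overlap and then feather (linearly cross-fade) across it. This one-dimensional sketch is an illustration only; the function name and the gain-plus-feather scheme are assumptions, and US5,650,814 describes more complete corrections:

```python
def correct_and_feather(left, right, overlap):
    """Equalise the luminance of two horizontally adjacent source strips
    (lists of equal-length rows of grey levels) and feather across their
    shared overlap of `overlap` columns."""
    # Gain that matches the right strip's overlap brightness to the left's.
    left_ov = [row[-overlap:] for row in left]
    right_ov = [row[:overlap] for row in right]
    mean = lambda rows: sum(sum(r) for r in rows) / (len(rows) * overlap)
    gain = mean(left_ov) / mean(right_ov)
    out = []
    for lrow, rrow in zip(left, right):
        rrow = [gain * v for v in rrow]
        blended = []
        for i in range(overlap):
            w = (i + 1) / (overlap + 1)  # ramp from left-weighted to right
            blended.append((1 - w) * lrow[-overlap + i] + w * rrow[i])
        out.append(lrow[:-overlap] + blended + rrow[overlap:])
    return out
```

After the gain step the two strips agree in average brightness over the overlap, and the feather removes any residual step so no abrupt boundary line remains.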
  • the virtual camera image may be generated on a device which includes two, three, four or more cameras from which the virtual camera image is generated.
  • the images from two, three, four or more cameras may be transmitted to a remote computer, at which the virtual camera image is generated.
  • the virtual camera image thus generated may be transmitted to a display device for display.
  • the images from two, three, four or more cameras may be transmitted to a display device, the virtual camera image being generated and displayed at the display device.
  • the virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user.
  • the virtual camera may be implemented so as to correct for unwanted zoom (i.e. image too close or too far), present in the image of a user.
  • the virtual camera may be implemented so as to provide a two dimensional image.
  • the virtual camera may be implemented so as to provide a three dimensional image.
  • images taken from two, three or more different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head. That three dimensional representation can provide a three dimensional image.
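One possible sketch of such a projection, assuming calibrated 3 x 4 projection matrices for each real camera and a pre-existing head-shaped mesh (the function and variable names are illustrative, not taken from the specification): each mesh vertex is projected into every camera image and the sampled colours are averaged.

```python
import numpy as np

def vertex_colors_from_cameras(vertices, cameras, images):
    """Texture a head-shaped mesh from several real-camera images.

    vertices : N x 3 array of mesh vertex positions (world coordinates).
    cameras  : list of 3 x 4 projection matrices, one per real camera.
    images   : list of H x W x 3 images, one per camera.
    Each vertex is projected into every camera; sampled colours are averaged,
    giving a rough three dimensional representation that a virtual camera
    can then re-image from in front of the face.
    """
    n = len(vertices)
    colors = np.zeros((n, 3))
    counts = np.zeros((n, 1))
    homog = np.hstack([vertices, np.ones((n, 1))])      # N x 4 homogeneous
    for P, img in zip(cameras, images):
        proj = homog @ P.T                              # N x 3
        uv = proj[:, :2] / proj[:, 2:3]                 # perspective divide
        u = uv[:, 0].round().astype(int)
        v = uv[:, 1].round().astype(int)
        h, w = img.shape[:2]
        # Keep vertices that land inside the image and in front of the camera.
        visible = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (proj[:, 2] > 0)
        colors[visible] += img[v[visible], u[visible]]
        counts[visible] += 1
    return colors / np.maximum(counts, 1)
```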
  • a two dimensional image may be displayed on a display.
  • a three dimensional image may be displayed on a three dimensional display, such as on an autostereoscopic display, on a holographic display, or on any three dimensional display known to those skilled in the art.
  • a virtual camera may be implemented in many ways using the images from two, three, four or more cameras.
  • One example is provided by US5,650,814 "Image Processing System Comprising Fixed Cameras and a System Simulating a Mobile Camera", which is incorporated here by reference.
  • a virtual camera may be facilitated if the optic axes of the n real cameras (where n ≥ 2) of the system meet exactly or approximately at the position in which a subject is located in an ideal or reference case eg. a position a fixed distance perpendicular from the centre of a screen.
  • this provides a common reference point for all n cameras.
  • the optic axes of the two cameras may meet at a point in front of the centre of the screen.
  • the optic axes of the three cameras may meet at a point in front of the centre of the screen.
  • the optic axes of the four cameras may meet at a point in front of the centre of the screen.
  • that point may be about 40 cm in front of the screen in the case of a screen on a portable device, or about 2 m in front of the screen in the case of a medium sized television screen, or about 4 m in front of the screen in the case of a large sized television screen.
  • such a point may be the position of the centre of the face of the second person; such a position is possible for the device of Fig. 2, or for the device of Fig. 5 or for the device of Fig. 9.
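The convergence geometry described in the bullets above can be sketched as follows. The coordinate convention (screen normal along +z) and the function name are assumptions made for illustration only.

```python
import numpy as np

def optic_axis_directions(camera_positions, screen_centre, distance):
    """Aim each real camera so all optic axes meet at a common reference point.

    camera_positions : n x 3 positions of the cameras in the screen plane.
    screen_centre    : 3-vector, centre of the screen.
    distance         : how far in front of the screen the axes meet
                       (eg. ~0.4 m for a handheld device, ~2 m for a
                       medium sized television, ~4 m for a large screen).
    Returns unit direction vectors, one per camera, assuming the screen
    normal points along +z.
    """
    target = np.asarray(screen_centre, float) + np.array([0.0, 0.0, distance])
    dirs = target - np.asarray(camera_positions, float)
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
```

Every returned axis passes through the common reference point, so a subject at that point is on-axis for all n cameras at once.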
  • each device including an image processing system, each image processing system comprising a system of n ≥ 2 fixed real cameras arranged such that individual fields of view merge so as to form a single field of view, an image construction system simulating a mobile, virtual camera supplying a target sub-image corresponding to a section of the field of view and constructed from source images from the n real cameras, wherein the image from each virtual camera of a particular device is displayed at the other device.
  • the image processing system may be a digital system that further comprises a luminance equalizing system for overall equalizing of corresponding luminance levels of first and second portions of a digital target image derived from two adjacent source images (Ii, Ij).
  • each device including an image processing system, each image processing system comprising a system of n ≥ 2 fixed real cameras arranged such that individual fields of view merge so as to form a single field of view, an image construction system simulating a mobile, virtual camera continuously scanning the field of view to construct a target sub-image corresponding to an arbitrary section of the field of view and derived from adjacent source images from the n real cameras, wherein the image from each virtual camera of a particular device is displayed at the other device.
  • the image processing system may be a digital system that further comprises a luminance equalizing system for overall equalizing of corresponding luminance levels of first and second portions of a digital target image derived from two adjacent source images (Ii, Ij).
  • Individual sound sources are identified through the use of two or more inbuilt microphones in the meeting camera device, eg. a mobile device. Then the individual sources are graphically represented on a receiving device relative to their position eg. in the room.
  • a visual interface on the receiving device enables selection by hand of which sound source to record e.g. to optimise the noise cancellation/sonic focus for the selected sound source. This could be advantageous in for instance meetings where one person is talking and you want to aggressively noise cancel everything else.
  • One method for accomplishing this is to determine the relative delays of the various sound sources with respect to their reception at the microphones.
  • the sounds from the identified sources can be separated eg. by filtering out the unwanted sound source.
  • Such other sounds could be background chatter from people in a crowded environment, such as in a train station, or in an airport, or such sounds could be vehicular traffic sounds in an urban environment.
  • Those other sounds can be suppressed, so as to improve the audibility of the person one wants to listen to.
  • An option can be selected on the meeting camera device (eg. a mobile device), to suppress background sound, to improve the audibility of the person one wants to listen to.
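One common way to estimate the relative delays mentioned above is a cross-correlation of the two microphone recordings. This sketch is illustrative only: the function name is ours, and a real implementation would also map the delay to a direction using the known microphone spacing and sample rate.

```python
import numpy as np

def relative_delay(mic_a, mic_b):
    """Estimate the delay (in samples) of a sound between two microphones.

    The lag that maximises the cross-correlation of the two recordings is
    the relative delay. With a known microphone spacing and sample rate,
    the delay gives the direction of the sound source, after which unwanted
    sources (eg. background chatter) can be suppressed by filtering.
    """
    corr = np.correlate(mic_a, mic_b, mode="full")
    # np.correlate's 'full' output places zero lag at index len(mic_b) - 1.
    return int(np.argmax(corr)) - (len(mic_b) - 1)
```

A positive result means the sound reached `mic_b` first; repeating the estimate per source lets each source be placed and graphically represented on the receiving device.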
  • Meeting camera device including a screen and n ≥ 2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
  • Parallax information may be used in constructing the virtual camera image.
  • virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
  • Device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
  • display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
  • OLED: organic light emitting diode
  • device may be a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
  • two cameras should be arranged with respect to the individual being filmed such that they each capture a significantly different image, but still somewhat similar images.
  • images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
  • Three or more cameras are used which are not collinearly arranged.
  • parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
  • device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
  • Three cameras are arranged on the vertices of an equilateral triangle, an isosceles triangle, a right angled triangle, a scalene triangle, or a triangle.
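The parallax available from such non-collinear camera arrangements rests on the standard pinhole relation Z = f·B/d; a feature's disparity along either baseline gives its depth. The following sketch is illustrative (function name and units are assumptions, not part of the specification):

```python
import numpy as np

def depth_from_parallax(disparity_px, baseline_m, focal_px):
    """Recover depth from the parallax between two real cameras.

    For cameras separated by baseline_m (metres) with focal length focal_px
    (pixels), a feature observed with disparity_px pixels of parallax lies
    at depth Z = f * B / d. With three non-collinearly arranged cameras the
    same relation applies along two orthogonal baselines, so parallax is
    available in both directions of the general plane of the device.
    """
    d = np.asarray(disparity_px, float)
    # Guard against zero disparity (feature at effectively infinite depth).
    return focal_px * baseline_m / np.maximum(d, 1e-6)
```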
  • user may be in an off-centre position because they move with respect to a fixed device
  • the user and the device may move
  • device which provides a virtual camera may also provide a microphone and speaker
  • device which provides a virtual camera may also provide a microphone and speaker; user of the device can be in voice communication with another user of another device with a microphone and speaker
  • virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
  • mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
  • image from the mobile virtual camera may have a selectable zoom level
  • image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
  • virtual camera may provide video output
  • virtual camera may provide a photograph.
  • virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
  • virtual camera may be implemented so as to correct for unwanted zoom
  • the optic axes of the n real cameras (where n ≥ 2) of the system meet exactly or approximately at the position in which a subject is located in an ideal or reference case
  • the optic axes of two cameras may meet at a point in front of the centre of the screen
  • optic axes of three cameras may meet at a point in front of the centre of the screen
  • optic axes of four cameras may meet at a point in front of the centre of the screen
  • point may be about 40 cm in front of the screen
  • virtual camera is implemented so as to provide a two dimensional image
  • virtual camera is implemented so as to provide a three dimensional image
  • three dimensional image is for display on a three dimensional display
  • three dimensional display is an autostereoscopic display, or a holographic display
  • meeting camera device comprises at least two microphones.
  • Meeting camera device is operable to identify at least one sound source from the sound input received at the two microphones.
  • Meeting camera is operable to provide to a receiving device a selectable option to transmit only the sound from the identified sound source.
  • Meeting camera device wherein upon selection at the receiving device of the option to transmit only the sound from the identified sound source, the meeting camera device transmits only the sound from the identified sound source.
  • Method of supplying a target sub-image corresponding to a portion of the fields of view for a meeting camera device wherein the meeting camera device includes a screen and n ≥ 2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the method comprises the step of: using the virtual camera comprising an image construction system to supply a target sub-image corresponding to a portion of the fields of view.
  • Computer program product operable to supply a target sub-image corresponding to a portion of the fields of view for a meeting camera device, wherein the meeting camera device includes a screen and n ≥ 2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the computer program product is operable to supply a target sub-image corresponding to a portion of the fields of view.
  • Meeting camera system including a device comprising a screen and n ≥ 2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
  • system comprises the cloud, wherein images from the cameras are transmitted to the cloud, at which the virtual camera image is generated.
  • images from two, three, four or more cameras may be transmitted to a display device, the virtual camera image being generated and displayed at the display device
  • Meeting camera system operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
  • Parallax information may be used in constructing the virtual camera image.
  • Meeting camera system operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position.
  • Meeting camera system provides for seeing eye-to-eye when video conferencing.
  • virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
  • Device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
  • display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
  • OLED: organic light emitting diode
  • device may be a handheld portable device, a fixed device, a desktop device, a wall- mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
  • graphical modelling techniques may be used to generate a virtual camera image.
  • images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
  • Three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device.
  • parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
  • Meeting camera system operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
  • Three cameras are arranged on the vertices of an equilateral triangle, an isosceles triangle, a right angled triangle, a scalene triangle, or a triangle.
  • Meeting camera system which provides a virtual camera, such that device may provide a microphone and speaker
  • Meeting camera system which provides a virtual camera, such that device may provide a microphone and speaker; user of the device can be in voice communication with another user of another device with a microphone and speaker
  • virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
  • mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
  • image from the mobile virtual camera may have a selectable zoom level
  • image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
  • virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
  • virtual camera may be implemented so as to correct for unwanted zoom
  • the optic axes of the n real cameras (where n ≥ 2) of the system meet exactly or approximately at the position in which a subject is located in an ideal or reference case
  • optic axes of three cameras may meet at a point in front of the centre of the screen
  • optic axes of four cameras may meet at a point in front of the centre of the screen.
  • point may be about 4 m in front of the screen
  • three dimensional display is an autostereoscopic display, or a holographic display
  • Meeting camera system wherein the device comprises at least two microphones.
  • Method of supplying a target sub-image corresponding to a portion of the fields of view for a meeting camera system, wherein the meeting camera system includes a meeting camera device including a screen and n ≥ 2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the method comprises the step of: using the virtual camera comprising an image construction system to supply a target sub-image corresponding to a portion of the fields of view.
  • Computer program product operable to supply a target sub-image corresponding to a portion of the fields of view for a meeting camera system
  • the meeting camera system includes a meeting camera device including a screen and n ≥ 2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap
  • the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view
  • the computer program product is operable to supply a target sub-image corresponding to a portion of the fields of view.
  • Meeting camera device system comprising two devices, each device including a screen and n ≥ 2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device.
  • provision of target sub-images includes transmission via a network
  • provision of target sub-images includes transmission via a wireless network
  • each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
  • Parallax information may be used in constructing the virtual camera image.
  • each device provides for seeing eye-to-eye when video conferencing.
  • virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
  • Each device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
  • Each display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
  • OLED: organic light emitting diode
  • Each device may be a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
  • two cameras should be arranged with respect to the individual being filmed such that they each capture a significantly different image, but still somewhat similar images.
  • images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
  • For each device, three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device.
  • parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
  • each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
  • Each device has the profile of a rectangle.
  • Each device has the profile of a triangle.
  • For each device, four cameras are arranged on the vertices of a square, a rectangle, a kite, a parallelogram, or a quadrilateral.
  • For each device, user may be in an off-centre position because they move with respect to a fixed device
  • user may be in an off-centre position because the device is handheld
  • device which provides a virtual camera may also provide a microphone and speaker
  • device which provides a virtual camera may also provide a microphone and speaker; user of the device can be in voice communication with another user of the other device with a microphone and speaker
  • virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
  • mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
  • image from the mobile virtual camera may have a selectable zoom level
  • image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
  • virtual camera may provide video output
  • virtual camera may provide a photograph.
  • virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
  • virtual camera may be implemented so as to correct for unwanted zoom
  • the optic axes of the n real cameras (where n ≥ 2) of the system meet exactly or approximately at the position in which a subject is located in an ideal or reference case
  • the optic axes of two cameras may meet at a point in front of the centre of the screen
  • optic axes of three cameras may meet at a point in front of the centre of the screen
  • optic axes of four cameras may meet at a point in front of the centre of the screen.
  • point may be about 40 cm in front of the screen
  • point may be about 2 m in front of the screen
  • point may be about 4 m in front of the screen
  • virtual camera is implemented so as to provide a two dimensional image
  • virtual camera is implemented so as to provide a three dimensional image
  • three dimensional display is an autostereoscopic display, or a holographic display
  • each device comprises at least two microphones.
  • Meeting camera device system wherein for each device, the system is operable to identify at least one sound source from the sound input received at the two microphones.
  • Meeting camera device system comprising two devices and a computer, each device including a screen and n ≥ 2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device transmitting its camera images to a computer, the computer including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device receives a target sub-image based on data transmitted by the other device to the computer.
  • each device transmits its camera images to a respective computer, each respective computer including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device receives a target sub-image based on data transmitted by the other device to its respective computer.
  • provision of target sub-images includes transmission via a mobile phone network
  • provision of target sub-images includes transmission via a network
  • provision of target sub-images includes transmission via a wireless network
  • each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
  • Parallax information may be used in constructing the virtual camera image.
  • Each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position.
  • each device provides for seeing eye-to-eye when video conferencing.
  • virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
  • Each device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
  • Each display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
  • OLED: organic light emitting diode
  • Each device may be a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
  • images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
  • For each device, three or more cameras are used which are not collinearly arranged.
  • For each device, three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device.
  • parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
  • each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
  • Each device has the profile of a rectangle.
  • Each device has the profile of a triangle.
  • For each device, four cameras are arranged on the vertices of a square, a rectangle, a kite, a parallelogram, or a quadrilateral.
  • user may be in an off-centre position because they move with respect to a fixed device
  • device which provides a virtual camera may also provide a microphone and speaker
  • device which provides a virtual camera may also provide a microphone and speaker; user of the device can be in voice communication with another user of the other device with a microphone and speaker
  • virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
  • mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
  • image from the mobile virtual camera may have a selectable zoom level
  • image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
  • virtual camera may provide video output
  • virtual camera may provide a photograph.
  • virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
  • virtual camera may be implemented so as to correct for unwanted zoom
  • the optic axes of the n real cameras (where n ≥ 2) of the system meet exactly or approximately at the position in which a subject is located in an ideal or reference case
  • the optic axes of two cameras may meet at a point in front of the centre of the screen
  • optic axes of three cameras may meet at a point in front of the centre of the screen
  • optic axes of four cameras may meet at a point in front of the centre of the screen.
  • point may be about 40 cm in front of the screen
  • point may be about 2 m in front of the screen
  • point may be about 4 m in front of the screen
  • virtual camera is implemented so as to provide a two dimensional image
  • virtual camera is implemented so as to provide a three dimensional image
  • three dimensional display is an autostereoscopic display, or a holographic display
  • Method of supplying a target sub-image corresponding to a portion of fields of view for a meeting camera device system comprising two devices and a computer, each device including a screen and n ≥ 2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, wherein the computer includes a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of fields of view, the method comprising the steps of:
  • Computer program product operable to supply a target sub-image corresponding to a portion of fields of view for a meeting camera device system comprising two devices and a computer, each device including a screen and n ≥ 2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap
  • the computer includes a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of fields of view, wherein a first device transmits its camera images to a computer, and a second device transmits its camera images to a computer, and the computer program product running on the computer is operable to supply to the second device a target sub-image corresponding to a portion of the fields of view, based on data transmitted by the first device to the computer, and the computer program product running on the computer is operable to supply to the first device a target sub-image corresponding to a portion of the fields of view, based on data transmitted by the second device to the computer.
  • Yota introduction: The main focus for Yota's IP protection strategy will be its new LTE phone.
  • the LTE phone will include innovative software and hardware, and provide an innovative user experience. See for example Figs. 1 to 23.
  • Meet Camera: One advantage of Meet Camera is that one can approach a large panel display with always-on video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact.
  • the face displayed by the virtual camera can be placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen. This placement can be accomplished by a tracking algorithm.
  • DML phone speaker: It is hard to get good-quality audio performance unless you have a large speaker with a large and ugly speaker hole.
  • NXT plc's distributed mode loudspeaker (DML) technology is used here to vibrate the entire phone screen: the whole screen surface acts as the speaker. The speaker hole can be fully eliminated.
  • DML has never been used before to drive a screen surface in a mobile phone. Haptic feedback can be provided by the same drivers, a new use for the DML exciters.
  • Virtual Web-USB interface for wireless devices: the iPhone/iPad has no USB connector, a major disadvantage.
  • USB stick for in-car audio: In-car audio systems often have USB interfaces for MP3 files, but have no way of accessing internet radio (currently only available on really high-end systems).
  • the USB dongle captures the data stream and converts it to a sequence of files, just like the MP3 files the in-car audio is designed to read. This enables even a basic in-car audio device to have playback/rewind, store etc. functionality for internet radio.
  • the streamed audio is stored as at least two separate files, which allows the user to choose to skip to the next track using the car audio system software.
  • the user can listen to music online in his car with no modifications to the in-car audio system.
  • An online interface is used for setting up the service and selecting the stream source.
  • UX User experience to identify sound sources
  • Individual sound sources (different people speaking at a phone in hands-free mode) are identified with two or more inbuilt microphones. Then the individual sources are graphically represented on the device relative to their position in the room.
  • a visual interface on the phone enables selection by hand of which sound source to record, e.g. to optimise the noise cancellation/sonic focus for the selected sound source. This could be advantageous in, for instance, meetings where one person is talking and you want to aggressively noise-cancel everything else.
  • the mobile phone has a concave front face and a convex rear face, of the same or similar magnitude of curvature.
  • Concave front matches path of finger as wrist rotates. Hence it's very natural to use.
  • Having a curved surface as the vibrating DML speaker is also better: if the LCD with the speaker exciters were instead a flat surface, it would sound unpleasant when that flat surface is placed down against a tabletop. Curving the surface prevents this happening.
  • Preferred curvature of front and back is cylindrical, rather than spherical or aspherical. See e.g. Figs. 13, 14, 17.
  • the convex back can have a bistable display. Since the normal resting position is front face down, the back screen with bi-stable display is normally displayed when the phone is in the resting position. This resting position is stable. If the phone is placed back down (i.e. convex face down), the phone could spin, which is unstable. Hence a user will likely place the phone front face (i.e. concave face) down, with the bi-stable screen showing. When the phone is in a pocket, the front face (concave face) can face inwards, since this better matches leg curvature. This can be the better configuration (as opposed to front face up) for antenna reception.
  • the microphone is placed in a hole in the body of the mobile device, in the SIM card's eject hole. See Fig. 23.
  • the casing of the mobile device consists of a material that can change its tactile properties from wood to metal ("morphing").
  • 3GPP Long Term Evolution is the latest standard in the mobile network technology tree that produced the GSM/EDGE and UMTS/HSPA network technologies. It is a project of the 3rd Generation Partnership Project (3GPP), operating under a name trademarked by one of the associations within the partnership, the European Telecommunications Standards Institute.
  • 3GPP 3rd Generation Partnership Project
  • LTE Long Term Evolution
  • 4G fourth generation
  • LTE Advanced is backwards compatible with LTE and uses the same frequency bands, while LTE is not backwards compatible with 3G systems.
  • While it is commonly seen as a cell phone or common carrier development, LTE is also endorsed by public safety agencies in the US as the preferred technology for the new 700 MHz public-safety radio band. Agencies in some areas have filed for waivers hoping to use the 700 MHz spectrum with other technologies in advance of the adoption of a nationwide standard.
  • the LTE specification provides downlink peak rates of at least 100 Mbps, an uplink of at least 50 Mbps and RAN round-trip times of less than 10 ms.
  • LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz and supports both frequency division duplexing (FDD) and time division duplexing (TDD).
  • FDD frequency division duplexing
  • TDD time division duplexing
  • GPRS General Packet Radio Service
  • WiMAX Worldwide Interoperability for Microwave Access
  • LTE Advanced is currently being standardized in 3GPP Release 10.
  • LTE Advanced is a preliminary mobile communication standard. It was formally submitted as a candidate 4G system to the ITU-T in late 2009, was accepted into the International Telecommunication Union's IMT-Advanced programme, and was expected to be finalized by 3GPP in early 2011. It is standardized by the 3rd Generation Partnership Project (3GPP) as a major enhancement of the 3GPP Long Term Evolution (LTE) standard.
  • The LTE format was first proposed by NTT DoCoMo of Japan and has been adopted as an international standard. LTE standardization has now reached a mature state, where changes in the specification are limited to corrections and bug fixes. The first commercial services were launched in Scandinavia in December 2009, followed by the United States and Japan in 2010. More first-release LTE networks were expected to be deployed globally during 2010 as a natural evolution of several 2G and 3G systems, including the Global System for Mobile communications (GSM) and the Universal Mobile Telecommunications System (UMTS) (3GPP as well as 3GPP2).
  • GSM Global system for mobile communications
  • UMTS Universal Mobile Telecommunications System
  • the first release of LTE does not meet the requirements for 4G (also called IMT Advanced) as defined by the International Telecommunication Union, such as peak data rates up to 1 Gbit/s.
  • the ITU has invited the submission of candidate Radio Interface Technologies (RITs) following their requirements as mentioned in a circular letter.
  • RITs Radio Interface Technologies
  • The requirements for LTE-Advanced are defined in 3GPP Technical Report (TR) 36.913, "Requirements for Further Advancements for E-UTRA (LTE-Advanced)." These requirements are based on the ITU requirements for 4G and on 3GPP operators' own requirements for advancing LTE. Major technical considerations include the following:
  • WiMAX 2 has been approved by ITU into the IMT Advanced family. WiMAX 2 is designed to be backward compatible with WiMAX 1/1.5 devices. Most vendors now support ease of conversion of earlier 'pre-4G', pre-advanced versions and some support software defined upgrades of core base station equipment from 3G.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

Meeting camera device including a screen and two or more cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.

Description

MEETING CAMERA
BACKGROUND OF THE INVENTION

1. Field of the Invention
The invention relates to a meeting camera device with two or more cameras, the device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face. The invention further relates to systems comprising such devices.
2. Technical Background
Conventional video phones give a very poor user experience because there is rarely eye-to-eye contact: instead, the caller seems to be looking away from you, since he is looking away from the camera, or at least is not looking directly at the camera. This is shown for example in Figure 1. In Figure 1, an image of a first person is shown on the screen of a second person. The image of the first person is seen from off-centre, because the first person is looking at their screen centre while being filmed by an off-centre camera. In Figure 1, we see the back of the head of a second person who is viewing their (the second person's) screen. The second person is being filmed by an off-centre camera. Hence the image of the second person provided to the first person will be off-centre, in common with the image of the first person supplied to the second person.
3. Discussion of Related Art
In US5,650,814 "Image Processing System Comprising Fixed Cameras and a System Simulating a Mobile Camera", there is disclosed an image processing system, comprising: a system of n fixed real cameras arranged in such a way that their individual fields of view merge so as to form a single wide-angle field of view for recording a panoramic scene; an image construction system simulating a mobile virtual camera continuously scanning the panoramic scene to furnish a target sub-image corresponding to an arbitrary section of the wide-angle field of view and constructed from adjacent source images furnished by the n real cameras.

SUMMARY OF THE INVENTION
According to a first aspect of the invention, there is provided a meeting camera device including a screen and two or more cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
The meeting camera device may be operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
The meeting camera device may be one wherein the viewer is located in an off-centre position with respect to the screen. The meeting camera device may be one wherein virtual camera placement is accomplished by a tracking system tracking the viewer and implementing a tracking algorithm.
The meeting camera device may be one wherein the tracking system tracks a viewer's eye or eyes, and the virtual camera is centred on an eye of the viewer.
The meeting camera device may be one wherein the virtual camera is centred on a right eye of the viewer.
The meeting camera device may be one wherein the tracking system is operable to record its tracking statistics for the tracking of a user's eye or eyes.
The meeting camera device may be one wherein the tracking system is operable to record its tracking of a user's eye or eyes to provide data which if corresponding to a predefined sequence will unlock the device.
The meeting camera device may be one wherein the virtual camera is situated in the centre of the screen. The meeting camera device may be one wherein parallax information is used in constructing the virtual camera image.
The meeting camera device may be one wherein two cameras are arranged with respect to the viewer such that they each capture a significantly different image, but still somewhat similar images.
The meeting camera device may be one wherein where two images differ significantly, graphical modelling techniques are used to generate a virtual camera image.
The meeting camera device may be one wherein images taken from different cameras of a face and head are projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation being imaged from in front of the face, so as to provide a virtual camera image from in front of the face.
The meeting camera device may be one wherein optic axes of the cameras meet exactly or approximately at a position which is the position in which a subject is located in an ideal or reference case.
The meeting camera device may be one wherein optic axes of the cameras meet at a point in front of the centre of the screen.
The meeting camera device may be one comprising exactly two cameras.
The meeting camera device may be one wherein cameras are placed on either side of the device screen.
The meeting camera device may be one comprising three cameras.
The meeting camera device may be one wherein the cameras are arranged on the vertices of a triangle. The meeting camera device may be one wherein parallax information is available along orthogonal directions.
The meeting camera device may be one comprising four cameras.
The meeting camera device may be one wherein the cameras are arranged on the vertices of a quadrilateral.
The meeting camera device may be one wherein parallax information is available along orthogonal directions.
The meeting camera device may be one wherein an image taken by the virtual camera is shown to another party. The meeting camera device may be one wherein the device provides for seeing eye-to-eye when video conferencing.
The meeting camera device may be one wherein a viewer can approach a large panel display with continuous video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact.
The meeting camera device may be one wherein the device comprises an integral microphone and speaker. The meeting camera device may be one wherein an image from the virtual camera has a selectable zoom level.
The meeting camera device may be one wherein an image from the virtual camera has selectable tilt or selectable pan, or both selectable tilt and selectable pan.
The meeting camera device may be one wherein the virtual camera is operable to correct for unwanted zoom present in the image of a user. The meeting camera device may be one wherein the virtual camera is operable to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user. The meeting camera device may be one wherein the device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
The meeting camera device may be one wherein a device display is a liquid crystal display, a plasma screen display, a cathode ray tube, an organic light emitting diode (OLED) display, or a bistable display.
The meeting camera device may be one wherein the device is a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
The meeting camera device may be one wherein the virtual camera provides video output.
The meeting camera device may be one wherein the virtual camera provides a photograph.
The meeting camera device may be one wherein the device has a triangular profile.
The meeting camera device may be one wherein the point is about 40 cm from the screen. The meeting camera device may be one wherein the point is about 2 m from the screen.
The meeting camera device may be one wherein the virtual camera is implemented so as to provide a two dimensional image. The meeting camera device may be one wherein the virtual camera is implemented so as to provide a three dimensional image.
The meeting camera device may be one wherein the three dimensional image is for display on a three dimensional display. The meeting camera device may be one wherein the three dimensional display is an autostereoscopic display, or a holographic display. The meeting camera device may be one wherein the device comprises at least two microphones.
The meeting camera device may be one wherein the device is operable to identify at least one sound source from the sound input received at the two microphones.
The meeting camera device may be one wherein the device is operable to provide to a receiving device a selectable option to transmit only the sound from the identified sound source. The meeting camera device may be one wherein upon selection at the receiving device of the option to transmit only the sound from the identified sound source, the meeting camera device transmits only the sound from the identified sound source.
According to a second aspect of the invention, there is provided a meeting camera system including a device comprising a screen and two or more cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
The meeting camera system may be one comprising a remote computer, wherein images from the cameras are transmitted to the remote computer, at which the virtual camera image is generated. The meeting camera system may be one comprising the cloud, wherein images from the cameras are transmitted to the cloud, at which the virtual camera image is generated.
The meeting camera system may be one comprising a different display device, wherein images from the cameras are transmitted to the different display device, and wherein the virtual camera image is generated and displayed at the different display device. The meeting camera system may be one wherein the device comprises exactly two cameras.
The meeting camera system may be one wherein the device comprises three cameras.
The meeting camera system may be one wherein the virtual camera is implemented so as to provide a three dimensional image.
According to a third aspect of the invention, there is provided a meeting camera device system comprising two devices, each device including a screen and two or more cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device.
The meeting camera device system may be one wherein provision of a target sub-image includes transmission via a mobile phone network, or transmission via the internet, or transmission via a network, or transmission via a wired network, or transmission via a wireless network.
The meeting camera device system may be one wherein each device includes exactly two cameras.
The meeting camera device system may be one wherein each device includes three cameras.
The meeting camera device system may be one wherein each virtual camera is implemented so as to provide a three dimensional image.
According to a fourth aspect of the invention, there is provided a meeting camera device system comprising two devices, each device including a screen and two or more cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device transmitting its camera images to a computer, the computer including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device receives a target sub-image based on data transmitted by the other device to the computer. The meeting camera device system may be one wherein provision of a target sub-image includes transmission via a mobile phone network, or transmission via the internet, or transmission via a network, or transmission via a wired network, or transmission via a wireless network. The meeting camera device system may be one wherein each device includes exactly two cameras.
The meeting camera device system may be one wherein each device includes three cameras. The meeting camera device system may be one wherein each virtual camera is implemented so as to provide a three dimensional image.
The meeting camera device system may be one wherein the computer is in the Cloud.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows a system in which there is displayed an image of a first person looking at their screen centre, being filmed by their off-centre camera, and being shown on a second person's screen.
Figure 2 shows a device comprising a screen and two off-centre cameras. The cameras are arranged in a major face of the device.
Figure 3 shows a system in which there is displayed an image of a first person looking at their screen centre, being filmed by their two off-centre cameras, which are part of a virtual camera, and being shown on the second person's screen by the virtual camera.
Figure 4 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their two off-centre cameras, which are part of a virtual camera, and being shown on the second person's screen by the virtual camera.
Figure 5 shows a device comprising a screen and three off-centre cameras. The cameras are arranged in a major face of the device.
Figure 6 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras, which are part of a virtual camera, and being shown on the second person's screen by the virtual camera.
Figure 7 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras, which are part of a virtual camera, and being shown on the second person's screen by the virtual camera.
Figure 8 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras, which are part of a virtual camera, and being shown on the second person's screen by the virtual camera.
Figure 9 shows a device comprising a screen and four off-centre cameras. The cameras are arranged in a major face of the device.
Figure 10 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their four off-centre cameras, which are part of a virtual camera, and being shown on the second person's screen by the virtual camera.
Figure 11 shows an example of a customer proposition.
Figure 12 shows an example of a smartphone specification.
Figure 13 shows an example of a mobile device industrial design.
Figure 14 shows an example of a mobile device industrial design.
Figure 15 shows an example of a mobile phone hardware specification.
Figure 16 shows examples of chipsets for mobile devices.
Figure 17 shows an example specification for a back screen of a mobile device.
Figure 18 shows an example software architecture of a mobile device.
Figure 19 shows examples of aspects of an example mobile device.
Figure 20 shows examples of an applications concept for a mobile device.
Figure 21 shows examples of applications for a mobile device.
Figure 22 shows further examples of applications for a mobile device.
Figure 23 shows an example of a mobile device in which the microphone is placed in a hole in the body of the mobile device, in the SIM card's eject hole.
DETAILED DESCRIPTION
There is provided a 'Meet Camera', which provides for seeing eye-to-eye when video conferencing.
As shown for example in Figure 2, we place cameras on either side of the device screen, such as a liquid crystal display (LCD) screen, to create a virtual camera in the centre of the screen, using an algorithm based on the two images. The image taken by the virtual camera is what is shown to the other party: this gives the impression to the other party that you are looking directly at them, a much better user experience. Where two images are used, parallax information is available. Parallax information may be used in constructing the virtual camera image.
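As a concrete illustration of how two off-centre images might be combined, the following Python sketch synthesises a crude midpoint view by shifting each source image half of a per-pixel horizontal disparity and blending. It is a minimal sketch under strong assumptions (rectified cameras, a precomputed disparity map, no occlusion handling); a real implementation would use proper stereo matching and occlusion-aware warping, and the function name is illustrative rather than part of any described system.

```python
import numpy as np

def midpoint_view(left, right, disparity):
    """Synthesise a virtual-camera image midway between two rectified
    cameras: shift each source image half of the per-pixel horizontal
    disparity toward the midpoint, then average the two shifted views."""
    h, w = left.shape
    out = np.zeros((h, w), dtype=float)
    xs = np.arange(w)
    for y in range(h):
        half = disparity[y] // 2
        xl = np.clip(xs - half, 0, w - 1)   # sample the left image shifted right
        xr = np.clip(xs + half, 0, w - 1)   # sample the right image shifted left
        out[y] = 0.5 * left[y, xl] + 0.5 * right[y, xr]
    return out

# Toy scene: each column's brightness equals its x position; the right
# camera sees the scene shifted by 4 pixels, i.e. a uniform disparity of 4.
left = np.tile(np.arange(16.0), (8, 1))
right = np.roll(left, -4, axis=1)
disp = np.full((8, 16), 4, dtype=int)
virtual = midpoint_view(left, right, disp)
# Away from the image borders the virtual view is the scene shifted by
# 2 pixels, i.e. halfway between the two real views.
```

In this toy setup the interior of the synthesised image lands exactly halfway between the two real views, which is the geometric intuition behind placing the virtual camera at the screen centre.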
An example of a result of using the virtual camera described above in relation to Figure 2 is shown in Figure 3. In Figure 3, an image of a first person is shown on the screen of a second person. The image of the first person is seen from the centre, because the first person is looking at their screen centre while being filmed by two off-centre cameras, such as those shown in Figure 2, from which a virtual camera located at or near the screen centre has been created. In Figure 3, we see the back of the head of a second person who is viewing their (the second person's) screen. The second person is being filmed by two off-centre cameras, from which a virtual camera at or near the screen centre is created. Hence the image of the second person provided to the first person will be seen from at or near the screen centre, in common with the image of the first person supplied to the second person.
One advantage of 'Meet Camera' is that one can approach a large panel display with always-on video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact. The face displayed by the virtual camera can be placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen. This placement can be accomplished by a tracking system implementing a tracking algorithm. The tracking system may track an eye or the eyes of a viewer. An example is shown in Figure 4. In Figure 4, an image of a first person is shown on the screen of a second person. The image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), similar to the off-centre position of the second person shown in Fig. 4. This is because the first person is looking at their screen centre while being filmed by two off-centre cameras, such as those shown in Figure 2, from which a virtual camera has been created. The virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera. In Figure 4, we see the back of the head of a second person who is viewing their (the second person's) screen. The second person is being filmed by two off-centre cameras; the second person is in an off-centre position. A second virtual camera for the second person is arranged so as to provide an image of the second person as if the second person were looking directly at the second virtual camera. Hence the image of the second person provided to the first person is a front view of the second person, centrally located on the device screen, in common with the image of the first person supplied to the second person.
The tracking system may record its tracking statistics for the tracking of a user's eye or eyes. Such statistics could be useful in determining the user's degree of attentiveness, or for measuring the effectiveness of advertising. The tracking system may be useful in implementing a form of user password for unlocking a device. For example, a user may look at points on the device in a sequence, and this will unlock the device. Tracking system output may be used to control the user interface. For example, high priority information may be presented on a part of the screen that the tracking system indicates the viewer is looking at. The virtual camera may be implemented with respect to a display device which displays an image from the virtual camera, or with respect to a display device which obtains an image for display on another display device. The display device with respect to which the virtual image is captured, or on which the virtual image is displayed, may be a mobile phone display, a laptop computer display, a desktop monitor display, a television display, or a large screen display device. The display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device. The virtual camera may be implemented with respect to a device which captures images for use in generating the virtual camera image, where that device is a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
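The gaze-sequence unlock idea above can be sketched as follows. The function name, the normalised screen coordinates, and the distance tolerance are all illustrative assumptions, not part of any real device API; a real system would also need timing and fixation-detection logic.

```python
from math import hypot

def matches_unlock_sequence(gaze_points, stored_sequence, tolerance=0.1):
    """Return True when the recorded gaze fixations land on the stored
    sequence of screen targets, in order, each within a distance
    tolerance. Coordinates are normalised to [0, 1] screen space."""
    if len(gaze_points) != len(stored_sequence):
        return False
    return all(hypot(gx - sx, gy - sy) <= tolerance
               for (gx, gy), (sx, sy) in zip(gaze_points, stored_sequence))

# Hypothetical stored pattern: top-left, top-right, bottom-centre.
stored = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.9)]
looked = [(0.12, 0.08), (0.88, 0.12), (0.51, 0.93)]
print(matches_unlock_sequence(looked, stored))   # → True
```

The same recorded fixation stream could feed the attentiveness statistics mentioned above, since both uses only require timestamped gaze coordinates.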
Although Figure 2 shows a particular arrangement of two cameras on a device, it will be appreciated by those skilled in the art that two cameras used to generate a virtual camera may be arranged in many ways. It is preferable that the two cameras should be arranged with respect to the individual being filmed such that they each capture a significantly different image, but still somewhat similar images. This enables the virtual camera algorithm to combine the two images obtained by the two cameras such as to generate an image as if it had been obtained from a different location. Those skilled in the art would appreciate that this process becomes less useful if the two images do not differ significantly, i.e. if the two cameras are located in very similar positions. Those skilled in the art would appreciate that this process becomes less reliable if the two images differ too greatly, so that they cannot be readily combined. However, those skilled in the art will appreciate that where two images differ significantly, graphical modelling techniques may be used to generate a virtual camera image. For example, in computational terms, images taken from two different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head. That three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face. This process can be extended to three, four or more cameras.
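The head-model projection described above can be illustrated, in heavily simplified form, by weighting each real camera's colour sample at a model surface point by how directly that camera faces the point. This is a sketch only, under assumed inputs (unit surface normals of a fitted head model, unit vectors toward each camera, per-camera colour samples); a real system would perform full perspective projection and texture mapping onto the head mesh.

```python
import numpy as np

def blend_textures(normals, cam_dirs, cam_values):
    """For each surface point of a head model, blend the colour seen by
    each real camera, weighted by how directly that camera faces the
    point (dot product of surface normal and camera direction).
    normals: (N, 3) unit normals; cam_dirs: (C, 3) unit vectors toward
    the cameras; cam_values: (C, N) colour samples from each camera."""
    weights = np.clip(normals @ cam_dirs.T, 0.0, None)     # (N, C)
    weights /= weights.sum(axis=1, keepdims=True) + 1e-9   # normalise per point
    return (weights * cam_values.T).sum(axis=1)            # (N,) blended colour

# Two cameras left and right of centre; a forward-facing surface point
# should take roughly equal contributions from both.
normals = np.array([[0.0, 0.0, 1.0]])
cam_dirs = np.array([[-0.6, 0.0, 0.8], [0.6, 0.0, 0.8]])
values = np.array([[100.0], [200.0]])
blended = blend_textures(normals, cam_dirs, values)   # close to [150.]
```

Once the model is textured this way, imaging it from a frontal viewpoint yields the in-front-of-the-face virtual camera image the passage describes.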
It will also be appreciated by those skilled in the art that three cameras, four cameras, or more than four cameras, can be arranged so as to generate a virtual camera. For example, three cameras may be arranged on a device as shown in Figure 5. Because the cameras of Figure 5 are not collinear, parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device (eg. the device of Figure 5), which is useful when generating an image from a virtual camera, as would be clear to those skilled in the art. Parallax information may be used in constructing the virtual camera image. Three non collinear cameras are useful when the person being filmed is located off-centre, because they may be off-centre not just as shown in Figure 4, i.e. off-centre substantially in the direction parallel to the line passing through the two cameras of Figure 4, but they may be off-centre along the orthogonal direction to the line passing through the two cameras in Figure 4 i.e. too high or too low with respect to the off-centre position in Figure 4. For example, four cameras may be arranged on a device as shown in Figure 9, which shows four cameras each near the vertices of a device with a rectangular profile.
Figure 6 shows an example in which the second person is off-centre along the orthogonal direction to the line passing through the two cameras in Figure 4 i.e. too high or too low (in this example, too low) with respect to their corresponding position in Figure 4. In Figure 6, an image of a first person is shown on the screen of a second person. The image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person in Figure 6. This is because the first person is looking at their screen centre while being filmed by three off-centre cameras, such as those shown in Figure 5, from which a virtual camera has been created. The virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera. In Figure 6, we see the back of the head of a second person who is viewing their (the second person's) screen. The second person is being filmed by three off-centre cameras; the second person is in an off-centre position, which differs from the off-centre position shown in Figure 4. A second virtual camera for the second person is arranged so as to provide an image of the second person as if the second person were looking directly at the second virtual camera. Hence the image of the second person provided to the first person is a front view of the second person, centrally located on the device screen, in common with the image of the first person supplied to the second person.
Figure 7 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art. Parallax information may be used in constructing the virtual camera image. In Figure 7, the three cameras are arranged on the vertices of an equilateral triangle. In Figure 7, the device has the profile of an equilateral triangle, although this is not necessary in order for the three cameras to be arranged on an equilateral triangle: the device profile could be another shape such as rectangular, for example. An equilateral triangle arrangement of cameras is useful in generating parallax information, such as when the user is in an off-centre position. Parallax information may be used in constructing the virtual camera image. Alternatively, the three cameras may be arranged on the vertices of an isosceles triangle, a right angled triangle, or a scalene triangle. The three cameras may be arranged on the vertices of a triangle. In Figure 7, the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
Figure 8 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art. Parallax information may be used in constructing the virtual camera image. In Figure 8, the three cameras are arranged on the vertices of a triangle, such as an equilateral triangle. In Figure 8, the device has the profile of a rectangle. A triangular arrangement (for example, on an equilateral triangle) of cameras is useful in generating parallax information, such as when the user is in an off-centre position. Parallax information may be used in constructing the virtual camera image. Alternatively, the three cameras may be arranged on the vertices of an isosceles triangle, a right angled triangle, or a scalene triangle. The three cameras may be arranged on the vertices of a triangle. In Figure 8, the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
For example, four cameras may be arranged on a device as shown in Figure 9, which shows four cameras each near the vertices of a device with a rectangular profile. Because the cameras of Figure 9 are not collinear, parallax information is available along orthogonal directions, which is useful when generating an image from a virtual camera, as would be clear to those skilled in the art. Parallax information may be used in constructing the virtual camera image.
Figure 10 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art. Parallax information may be used in constructing the virtual camera image. In Figure 10, the four cameras are arranged on the vertices of a rectangle. In Figure 10, the device has the profile of a rectangle. A quadrilateral arrangement (for example, on a rectangle) of cameras is useful in generating parallax information, such as when the user is in an off-centre position. Parallax information may be used in constructing the virtual camera image. Alternatively, the four cameras may be arranged on the vertices of a square, a kite, or a parallelogram. The four cameras may be arranged on the vertices of a quadrilateral. In Figure 10, the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
The user may be in an off-centre position because they move with respect to a fixed device, or because the device is not fixed (eg. it is handheld), and the device moves, tilts or pans with respect to a user. Alternatively, the user and the device may move eg. a moving user using a handheld device, which may tilt or pan.
The device which provides a virtual camera may also provide a microphone and speaker, so that a user of the device can be in voice communication with another user of another device with a microphone and speaker. The virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras. The mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras.
The view generated by the virtual camera may be displayed on a display. The image from the mobile virtual camera may have a selectable zoom level. The image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
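One simple way to realise selectable zoom, tilt and pan is to treat the mobile virtual camera as a resampled window into the field of view already stitched from the real cameras. The sketch below assumes such a stitched array `field` exists; the function name and parameters are hypothetical, not taken from the specification, and nearest-neighbour sampling stands in for proper interpolation.

```python
import numpy as np

def virtual_camera_view(field, centre_x, centre_y, zoom, out_w=64, out_h=48):
    """Sketch of a mobile virtual camera: crop a pan/tilt-selectable window
    around (centre_x, centre_y) from the stitched field of view, sized by
    the zoom factor, and resample it to a fixed output resolution."""
    h, w = field.shape[:2]
    win_w, win_h = int(out_w / zoom), int(out_h / zoom)
    # clamp the window so it stays inside the stitched field of view
    x0 = int(np.clip(centre_x - win_w // 2, 0, w - win_w))
    y0 = int(np.clip(centre_y - win_h // 2, 0, h - win_h))
    crop = field[y0:y0 + win_h, x0:x0 + win_w]
    # nearest-neighbour resample of the crop to the fixed output size
    ys = np.arange(out_h) * win_h // out_h
    xs = np.arange(out_w) * win_w // out_w
    return crop[np.ix_(ys, xs)]
```

Panning corresponds to moving `centre_x`, tilting to moving `centre_y`, and zooming to shrinking or growing the sampled window while the output size stays constant.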
The virtual camera may be situated so as to provide the view seen from a particular eye of the user. The eye may be a right eye or a left eye. The right eye is the preferred eye. The virtual camera may provide video output. The virtual camera may provide a photograph.
When a target image is constructed from portions which arise from different camera images, the different cameras may supply source images with different luminances. Accordingly, at the boundary between the different camera images, a boundary line may appear, across which the image brightness is seen to fall or rise relatively abruptly. Accordingly, the luminance difference between different source images which form part of the target image must be corrected, so as to provide an image which is acceptably free of one or more boundary lines to a user who views the target image. Correction may be implemented as described in US5,650,814, which is incorporated by reference, or by other methods known to those skilled in the art.
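As a hedged illustration of one simple correction, rather than the specific correction laws of US5,650,814: match a single global gain over the shared overlap strip of the two source images, then feather linearly across the overlap to hide any residual boundary. The function name and the fixed-overlap layout are assumptions for the sketch.

```python
import numpy as np

def blend_with_gain(left_img, right_img, overlap):
    """Sketch of luminance equalisation at the seam between two source
    images that share `overlap` columns: equalise mean luminance in the
    shared strip, then linearly feather across it. Illustration only."""
    l_strip = left_img[:, -overlap:]
    r_strip = right_img[:, :overlap]
    gain = l_strip.mean() / r_strip.mean()   # single global luminance gain
    right_corr = right_img * gain            # equalise the right image
    # linear feather across the overlap to remove any remaining boundary line
    w = np.linspace(1.0, 0.0, overlap)
    blended = l_strip * w + right_corr[:, :overlap] * (1.0 - w)
    return np.hstack([left_img[:, :-overlap], blended, right_corr[:, overlap:]])
```

If one camera reports the scene uniformly darker, the gain restores the common luminance level and the composite is seam-free; a per-level correction law such as Gi(R)=Gj(S) generalises this single gain.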
The virtual camera image may be generated on a device which includes two, three, four or more cameras from which the virtual camera image is generated. Alternatively, the images from two, three, four or more cameras may be transmitted to a remote computer, at which the virtual camera image is generated. The virtual camera image thus generated may be transmitted to a display device for display. Alternatively still, the images from two, three, four or more cameras may be transmitted to a display device, the virtual camera image being generated and displayed at the display device.
The virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user. The virtual camera may be implemented so as to correct for unwanted zoom (i.e. image too close or too far), present in the image of a user.
The virtual camera may be implemented so as to provide a two dimensional image. The virtual camera may be implemented so as to provide a three dimensional image. For example, in computational terms, images taken from two, three or more different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head. That three dimensional representation can provide a three dimensional image. A two dimensional image may be displayed on a display. A three dimensional image may be displayed on a three dimensional display, such as on an autostereoscopic display, on a holographic display, or on any three dimensional display known to those skilled in the art.
Those skilled in the art will appreciate that a virtual camera may be implemented in many ways using the images from two, three, four or more cameras. One example is provided by US5,650,814 "Image Processing System Comprising Fixed Cameras and a System Simulating a Mobile Camera", which is incorporated here by reference.
Implementation of a virtual camera may be facilitated if the optic axes of the n real cameras (where n≥2) of the system meet exactly or approximately at the position which is the position in which a subject is located in an ideal or reference case eg. a position a fixed distance perpendicular from the centre of a screen. One reason is that this provides a common reference point for all n cameras. For example, in the case of Figure 2, the optic axes of the two cameras may meet at a point in front of the centre of the screen. For example, in the case of Figure 5, the optic axes of the three cameras may meet at a point in front of the centre of the screen. For example, in the case of Figure 9, the optic axes of the four cameras may meet at a point in front of the centre of the screen. For example, that point may be about 40 cm in front of the screen in the case of a screen on a portable device, or about 2 m in front of the screen in the case of a medium sized television screen, or about 4 m in front of the screen in the case of a large sized television screen. In Figure 3, such a point may be the position of the centre of the face of the second person; such a position is possible for the device of Fig. 2, or for the device of Fig. 5 or for the device of Fig. 9. There may be provided two devices, each device including an image processing system, each image processing system comprising a system of n≥2 fixed real cameras arranged such that individual fields of view merge so as to form a single field of view, an image construction system simulating a mobile, virtual camera supplying a target sub-image corresponding to a section of the field of view and constructed from source images from the n real cameras, wherein the image from each virtual camera of a particular device is displayed at the other device.
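The convergence condition above reduces to elementary geometry: given each camera's offset from the screen centre and the chosen convergence distance (e.g. about 40 cm for a portable device), the pan and tilt needed to aim its optic axis at the convergence point follow from the arctangent. The function below is an illustrative sketch under that assumption, not text from the specification.

```python
import math

def toe_in_angles(camera_xy, distance):
    """For cameras at (x, y) offsets in metres from the screen centre,
    return (pan, tilt) angles in degrees that point each optic axis at a
    convergence point `distance` metres perpendicular from the centre."""
    angles = []
    for x, y in camera_xy:
        pan = math.degrees(math.atan2(x, distance))   # toe-in toward the centre line
        tilt = math.degrees(math.atan2(y, distance))
        angles.append((pan, tilt))
    return angles
```

For two cameras 10 cm either side of a portable device's screen centre converging 40 cm in front of it, each camera is toed in by about 14 degrees, the two pans being equal and opposite.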
The image processing system may be a digital system that further comprises a luminance equalizing system for overall equalizing of corresponding luminance levels of first and second portions of a digital target image derived from two adjacent source images (Ii, Ij). The luminance equalizing system may include first and second luminance correction modules which apply a first and second correction law (Gi, Gj), respectively, to first and second sets (R, S) of the corresponding luminance levels of the first and second portion (Ivi, Ivj) of the digital target image derived from said two adjacent digital source images (Ii, Ij), to equalize the corresponding luminance levels to the best possible extent, in accordance with a relation Gi(R)=Gj(S).
There may be provided two devices, each device including an image processing system, each image processing system comprising a system of n≥2 fixed real cameras arranged such that individual fields of view merge so as to form a single field of view, an image construction system simulating a mobile, virtual camera continuously scanning the field of view to construct a target sub-image corresponding to an arbitrary section of the field of view and derived from adjacent source images from the n real cameras, wherein the image from each virtual camera of a particular device is displayed at the other device. The image processing system may be a digital system that further comprises a luminance equalizing system for overall equalizing of corresponding luminance levels of first and second portions of a digital target image derived from two adjacent source images (Ii, Ij). The luminance equalizing system may include first and second luminance correction modules which apply a first and second correction law (Gi, Gj), respectively, to first and second sets (R, S) of the corresponding luminance levels of the first and second portion (Ivi, Ivj) of the digital target image derived from said two adjacent digital source images (Ii, Ij), to equalize the corresponding luminance levels to the best possible extent, in accordance with a relation Gi(R)=Gj(S).
User experience (UX) to identify sound sources
Individual sound sources (different people speaking eg. at a phone in hands-free mode) are identified through the use of two or more inbuilt microphones in the meeting camera device, eg. a mobile device. Then the individual sources are graphically represented on a receiving device relative to their position eg. in the room. A visual interface on the receiving device enables selection by hand of which sound source to record e.g. to optimise the noise cancellation/sonic focus for the selected sound source. This could be advantageous in for instance meetings where one person is talking and you want to aggressively noise cancel everything else. One method of accomplishing this is to determine, for each sound source, the relative delay between its reception at the different microphones. For example, if person A is closer to Microphone A than to Microphone B, the sound output of person A will be received at Microphone A before it is received at Microphone B, due to the finite speed of sound, even though the sound output received at the microphones A and B may be very similar. Similarly, if person B is closer to Microphone B than to Microphone A, the sound output of person B will be received at Microphone B before it is received at Microphone A, due to the finite speed of sound, even though the sound output received at the microphones A and B may be very similar. By determining the different relative delays at the two microphones for the sounds received from person A and person B, it can be determined that there are at least two sound sources: here, person A and person B. Furthermore, the sounds from these two sources can be separated eg. by filtering out the unwanted sound source. By selecting one of these sources, it is possible to listen to one source in preference to another sound source.
In another example, there can be one set of sounds that has a characteristic time delay between the reception at the two microphones (sound output from person A), and other sounds with no well-characterized delay between their reception at the two microphones or with significantly different delays between their reception at the two microphones compared to the delay which characterizes the sound from person A. Such other sounds could be background chatter from people in a crowded environment, such as in a train station, or in an airport, or such sounds could be vehicular traffic sounds in an urban environment. Those other sounds can be suppressed, so as to improve the audibility of the person one wants to listen to. An option can be selected on the meeting camera device (eg. a mobile device), to suppress background sound, to improve the audibility of the person one wants to listen to.
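The delay-based identification described above can be sketched with a cross-correlation: the lag at which the two microphone signals correlate most strongly estimates the arrival-time difference of a source between the microphones. This is a minimal illustration assuming equal-length, simultaneously sampled buffers; the function name is hypothetical.

```python
import numpy as np

def estimate_delay(mic_a, mic_b):
    """Estimate, in samples, how much earlier a sound arrives at microphone A
    than at microphone B, via the peak of the cross-correlation of the two
    equal-length signals. Positive result: source is closer to A. Sketch only."""
    corr = np.correlate(mic_b, mic_a, mode="full")
    # index len(mic_a)-1 of the 'full' output corresponds to zero lag
    return int(np.argmax(corr)) - (len(mic_a) - 1)
```

A source whose characteristic delay differs from the selected speaker's delay can then be attenuated; clustering the per-segment delay estimates also reveals how many distinct sources are present.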
Note
It is to be understood that the above-referenced arrangements are only illustrative of the application for the principles of the present invention. Numerous modifications and alternative arrangements can be devised without departing from the spirit and scope of the present invention. While the present invention has been shown in the drawings and fully described above with particularity and detail in connection with what is presently deemed to be the most practical and preferred example(s) of the invention, it will be apparent to those of ordinary skill in the art that numerous modifications can be made without departing from the principles and concepts of the invention as set forth herein.
CONCEPTS
There are multiple concepts, described as concepts A to D, in this disclosure. The following may be helpful in defining these concepts.
A. Meeting Camera Device
Meeting camera device including a screen and n≥2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view. The following features may also be present:
• device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
• n=2
• parallax information is available
• Parallax information may be used in constructing the virtual camera image.
• device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position.
• n=3
• n=4
• device provides for seeing eye-to-eye when video conferencing.
• place cameras on either side of the device screen
• create a virtual camera in the centre of the screen
• image taken by the virtual camera is what is shown to another party
• image taken by the virtual camera is what is shown to the other party: this gives the impression to the other party that the viewer is looking directly at the other party
• create a virtual camera located at or near the screen centre
• viewer can approach a large panel display with continuous video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact.
• face displayed by the virtual camera is placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen.
• face displayed by the virtual camera is placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen; placement can be accomplished by a tracking system implementing a tracking algorithm.
• virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
• Device is mobile phone, laptop computer, a desktop monitor, a television, or a large screen display device.
• display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
• device may be a handheld portable device, a fixed device, a desktop device, a wall- mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
• two cameras should be arranged with respect to the individual being filmed such that they each capture a significantly different image, but still somewhat similar images.
• where two images differ significantly, graphical modelling techniques may be used to generate a virtual camera image.
• images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
• Three or more cameras are used which are not collinearly arranged.
• Three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions.
• Three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device.
• Three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
• device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
• Three cameras are arranged on the vertices of an equilateral triangle, an isosceles triangle, a right angled triangle, a scalene triangle, or a triangle.
• Device has the profile of a rectangle.
• Device has the profile of a triangle.
• Four cameras are arranged on the vertices of a square, a rectangle, a kite, a parallelogram, or a quadrilateral.
• user may be in an off-centre position because they move with respect to a fixed device
• user may be in an off-centre position because the device is not fixed
• user may be in an off-centre position because the device is handheld
• device moves, tilts or pans with respect to a user
• the user and the device may move
• device which provides a virtual camera may also provide a microphone and speaker
• device which provides a virtual camera may also provide a microphone and speaker; user of the device can be in voice communication with another user of another device with a microphone and speaker
• virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
• mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
• image from the mobile virtual camera may have a selectable zoom level
• image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
• virtual camera may provide video output
• virtual camera may provide a photograph.
• virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
• virtual camera may be implemented so as to correct for unwanted zoom
• the optic axes of the n real cameras (where n≥2) of the system meet exactly or approximately at the position which is the position in which a subject is located in an ideal or reference case
• the optic axes of two cameras may meet at a point in front of the centre of the screen
• optic axes of three cameras may meet at a point in front of the centre of the screen
• optic axes of four cameras may meet at a point in front of the centre of the screen
• point may be about 40 cm in front of the screen
• point may be about 2 m in front of the screen
• point may be about 4 m in front of the screen
• virtual camera is implemented so as to provide a two dimensional image
• virtual camera is implemented so as to provide a three dimensional image
• three dimensional image is for display on a three dimensional display
• three dimensional display is an autostereoscopic display, or a holographic display
• meeting camera device comprises at least two microphones.
• Meeting camera device is operable to identify at least one sound source from the sound input received at the two microphones.
• Meeting camera is operable to provide to a receiving device a selectable option to transmit only the sound from the identified sound source.
• Meeting camera device, wherein upon selection at the receiving device of the option to transmit only the sound from the identified sound source, the meeting camera device transmits only the sound from the identified sound source.

Method of supplying a target sub-image corresponding to a portion of the fields of view for a meeting camera device, wherein the meeting camera device includes a screen and n≥2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the method comprises the step of: using the virtual camera comprising an image construction system to supply a target sub-image corresponding to a portion of the fields of view.
Computer program product operable to supply a target sub-image corresponding to a portion of the fields of view for a meeting camera device, wherein the meeting camera device includes a screen and n≥2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the computer program product is operable to supply a target sub-image corresponding to a portion of the fields of view.

B. Meeting Camera System
Meeting camera system including a device comprising a screen and n≥2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view. The following features may also be present:
• images from two, three, four or more cameras may be transmitted to a remote computer, at which the virtual camera image is generated
• system comprises the cloud, wherein images from the cameras are transmitted to the cloud, at which the virtual camera image is generated.
• images from two, three, four or more cameras may be transmitted to a display device, the virtual camera image being generated and displayed at the display device
• Meeting camera system operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
• n=2
• parallax information is available
• Parallax information may be used in constructing the virtual camera image.
• Meeting camera system operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position.
• n=3
• n=4
• Meeting camera system provides for seeing eye-to-eye when video conferencing.
• place cameras on either side of the device screen
• create a virtual camera in the centre of the screen
• image taken by the virtual camera is what is shown to another party
• image taken by the virtual camera is what is shown to the other party: this gives the impression to the other party that the viewer is looking directly at the other party
• create a virtual camera located at or near the screen centre
• viewer can approach a large panel display with continuous video-conferencing and talk directly to the person shown on it - giving the feeling of eye-to-eye contact.
• face displayed by the virtual camera is placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen.
• face displayed by the virtual camera is placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen; placement can be accomplished by a tracking system implementing a tracking algorithm.
• virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
• Device is mobile phone, laptop computer, a desktop monitor, a television, or a large screen display device.
• display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
• device may be a handheld portable device, a fixed device, a desktop device, a wall- mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
• two cameras should be arranged with respect to the individual being filmed such that they each capture a significantly different image, but still somewhat similar images.
• where two images differ significantly, graphical modelling techniques may be used to generate a virtual camera image.
• images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
• Three or more cameras are used which are not collinearly arranged.
• Three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions.
• Three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device.
• Three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
• Meeting camera system operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
• Three cameras are arranged on the vertices of an equilateral triangle, an isosceles triangle, a right angled triangle, a scalene triangle, or a triangle.
• Device has the profile of a rectangle.
• Device has the profile of a triangle.
• Four cameras are arranged on the vertices of a square, a rectangle, a kite, a parallelogram, or a quadrilateral.
• user may be in an off-centre position because they move with respect to a fixed device
• user may be in an off-centre position because the device is not fixed
• user may be in an off-centre position because the device is handheld
• device moves, tilts or pans with respect to a user
• the user and the device may move
• Meeting camera system which provides a virtual camera, wherein the device may also provide a microphone and speaker
• Meeting camera system which provides a virtual camera, wherein the device may also provide a microphone and speaker; user of the device can be in voice communication with another user of another device with a microphone and speaker
• virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
• mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
• image from the mobile virtual camera may have a selectable zoom level
• image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
• virtual camera may provide video output
• virtual camera may provide a photograph.
• virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
• virtual camera may be implemented so as to correct for unwanted zoom
• the optic axes of the n real cameras (where n≥2) of the system meet exactly or approximately at the position which is the position in which a subject is located in an ideal or reference case
• the optic axes of two cameras may meet at a point in front of the centre of the screen
• optic axes of three cameras may meet at a point in front of the centre of the screen
• optic axes of four cameras may meet at a point in front of the centre of the screen.
• point may be about 40 cm in front of the screen
• point may be about 2 m in front of the screen
• point may be about 4 m in front of the screen
• virtual camera is implemented so as to provide a two dimensional image
• virtual camera is implemented so as to provide a three dimensional image
• three dimensional image is for display on a three dimensional display
• three dimensional display is an autostereoscopic display, or a holographic display
• Meeting camera system wherein the device comprises at least two microphones.
• Meeting camera system, wherein the system is operable to identify at least one sound source from the sound input received at the two microphones.
• Meeting camera system, wherein the system is operable to provide at a receiving device a selectable option to reproduce only the sound from the identified sound source.
• Meeting camera system, wherein upon selection at the receiving device of the option to reproduce only the sound from the identified sound source, the receiving device reproduces only the sound from the identified sound source.
Method of supplying a target sub-image corresponding to a portion of the fields of view for a meeting camera system, wherein the meeting camera system includes a meeting camera device including a screen and n≥2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the method comprises the step of: using the virtual camera comprising an image construction system to supply a target sub-image corresponding to a portion of the fields of view.
Computer program product operable to supply a target sub-image corresponding to a portion of the fields of view for a meeting camera system, wherein the meeting camera system includes a meeting camera device including a screen and n≥2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the computer program product is operable to supply a target sub-image corresponding to a portion of the fields of view.
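The claims above leave the image construction system unspecified. Purely as an illustrative sketch of how a target sub-image might be synthesised from two overlapping views (the function name, the single global disparity value, and the naive shift-and-blend approach are all assumptions, not the claimed method), a crude midpoint view could look like:

```python
import numpy as np

def virtual_camera_view(left: np.ndarray, right: np.ndarray,
                        disparity: int) -> np.ndarray:
    """Naive virtual camera: shift each source image half-way
    toward the midpoint viewpoint and average the results.

    `disparity` is the assumed horizontal pixel offset between
    the two real camera views.
    """
    half = disparity // 2
    shifted_left = np.roll(left, half, axis=1)     # move left view right
    shifted_right = np.roll(right, -half, axis=1)  # move right view left
    blended = (shifted_left.astype(np.uint16) +
               shifted_right.astype(np.uint16)) // 2
    return blended.astype(np.uint8)
```

A real system would compute per-pixel disparity by stereo matching rather than applying one global shift, but the structure — warp each source toward the virtual viewpoint, then blend — is the same.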
C. Meeting Camera Device System
Meeting camera device system comprising two devices, each device including a screen and n≥2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device. The following features may also be present:
• provision of target sub-images includes transmission via a mobile phone network
• provision of target sub-images includes transmission via the internet
• provision of target sub-images includes transmission via a network
• provision of target sub-images includes transmission via a wired network
• provision of target sub-images includes transmission via a wireless network
• each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
• n=2
• For each device, parallax information is available
• For each device, parallax information may be used in constructing the virtual camera image.
• Each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position.
• n=3
• n=4
• each device provides for seeing eye-to-eye when video conferencing.
• For each device, place cameras on either side of the device screen
• For each device, create a virtual camera in the centre of the screen
• For each device, image taken by the virtual camera is what is shown to the other party
• For each device, image taken by the virtual camera is what is shown to the other party: this gives the impression to the other party that the viewer is looking directly at the other party
• For each device, create a virtual camera located at or near the screen centre
• For each device, viewer can approach a large panel display with continuous videoconferencing and talk directly to the person shown on it - giving the feeling of eye-to-eye contact.
• For each device, face displayed by the virtual camera is placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen.
• For each device, face displayed by the virtual camera is placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen; placement can be accomplished by a tracking system implementing a tracking algorithm.
• For each device, virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
• Each device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
• Each display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
• Each device may be a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
• For each device, two cameras should be arranged with respect to the individual being filmed such that they each capture a significantly different image, but still somewhat similar images.
• For each device, where two images differ significantly, graphical modelling techniques may be used to generate a virtual camera image.
• For each device, images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
• For each device, three or more cameras are used which are not collinearly arranged.
• For each device, three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions.
• For each device, three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device.
• For each device, three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
• For each device, device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
• For each device, three cameras are arranged on the vertices of an equilateral triangle, an isosceles triangle, a right angled triangle, a scalene triangle, or a triangle.
• Each device has the profile of a rectangle.
• Each device has the profile of a triangle.
• For each device, four cameras are arranged on the vertices of a square, a rectangle, a kite, a parallelogram, or a quadrilateral.
• For each device, user may be in an off-centre position because they move with respect to a fixed device
• For each device, user may be in an off-centre position because the device is not fixed
• For each device, user may be in an off-centre position because the device is handheld
• Each device moves, tilts or pans with respect to a user
• For each device, the user and the device may move
• For each device, device which provides a virtual camera may also provide a microphone and speaker
• For each device, device which provides a virtual camera may also provide a microphone and speaker; user of the device can be in voice communication with another user of the other device with a microphone and speaker
• For each device, virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
• For each device, mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
• For each device, image from the mobile virtual camera may have a selectable zoom level
• For each device, image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
• For each device, virtual camera may provide video output
• For each device, virtual camera may provide a photograph.
• For each device, virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
• For each device, virtual camera may be implemented so as to correct for unwanted zoom
• For each device, the optic axes of the n real cameras (where n≥2) of the system meet exactly or approximately at the position which is the position in which a subject is located in an ideal or reference case
• For each device, the optic axes of two cameras may meet at a point in front of the centre of the screen
• For each device, optic axes of three cameras may meet at a point in front of the centre of the screen
• For each device, optic axes of four cameras may meet at a point in front of the centre of the screen.
• For each device, point may be about 40 cm in front of the screen
• For each device, point may be about 2 m in front of the screen
• For each device, point may be about 4 m in front of the screen
• For each device, virtual camera is implemented so as to provide a two dimensional image
• For each device, virtual camera is implemented so as to provide a three dimensional image
• For each device, three dimensional image is for display on a three dimensional display
• For each device, three dimensional display is an autostereoscopic display, or a holographic display
• Meeting camera device system wherein each device comprises at least two microphones.
• Meeting camera device system, wherein for each device, the system is operable to identify at least one sound source from the sound input received at the two microphones.
• Meeting camera device system, wherein the system is operable to provide at a receiving device a selectable option to reproduce only the sound from the identified sound source.
• Meeting camera device system, wherein upon selection at the receiving device of the option to reproduce only the sound from the identified sound source, the receiving device reproduces only the sound from the identified sound source.
Method of supplying a target sub-image corresponding to a portion of the fields of view for a meeting camera device system comprising two devices, each device including a screen and n≥2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device, wherein the method comprises the step of: for a meeting camera device, using the virtual camera comprising an image construction system to supply a target sub-image corresponding to a portion of the fields of view to the other device.
Computer program product operable to supply a target sub-image corresponding to a portion of the fields of view for a meeting camera device system comprising two devices, each device including a screen and n≥2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device, wherein the computer program product is operable to supply a target sub-image corresponding to a portion of the fields of view from a meeting camera device to the other device.
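The mutual exchange in the two-device system can be pictured structurally as follows. This is a toy model: the class and method names are invented for illustration, and a string stands in for the constructed target sub-image.

```python
from dataclasses import dataclass, field

@dataclass
class MeetingDevice:
    """Toy model of one device in the two-device meeting camera system."""
    name: str
    received: list = field(default_factory=list)

    def build_target_sub_image(self) -> str:
        # Placeholder for the on-device virtual camera image construction.
        return f"sub-image from {self.name}"

    def send_to(self, peer: "MeetingDevice") -> None:
        # Each device provides its target sub-image to the other device.
        peer.received.append(self.build_target_sub_image())
```

After `a.send_to(b)` and `b.send_to(a)`, each device holds the other's target sub-image; in the claimed system the transport between the two `send_to` calls would be a mobile phone network, the internet, or a wired or wireless network.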
D. Distributed Meeting Camera Device System
Meeting camera device system comprising two devices and a computer, each device including a screen and n≥2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device transmitting its camera images to a computer, the computer including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device receives a target sub-image based on data transmitted by the other device to the computer. The following features may also be present:
• each device transmits its camera images to a respective computer, each respective computer including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device receives a target sub-image based on data transmitted by the other device to its respective computer.
• provision of target sub-images includes transmission via a mobile phone network
• provision of target sub-images includes transmission via the internet
• provision of target sub-images includes transmission via a network
• provision of target sub-images includes transmission via a wired network
• provision of target sub-images includes transmission via a wireless network
• each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
• n=2
• For each device, parallax information is available
• For each device, parallax information may be used in constructing the virtual camera image.
• Each device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position.
• n=3
• n=4
• each device provides for seeing eye-to-eye when video conferencing.
• For each device, place cameras on either side of the device screen
• For each device, create a virtual camera in the centre of the screen
• For each device, image taken by the virtual camera is what is shown to the other party
• For each device, image taken by the virtual camera is what is shown to the other party: this gives the impression to the other party that the viewer is looking directly at the other party
• For each device, create a virtual camera located at or near the screen centre
• For each device, viewer can approach a large panel display with continuous videoconferencing and talk directly to the person shown on it — giving the feeling of eye-to-eye contact.
• For each device, face displayed by the virtual camera is placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen.
• For each device, face displayed by the virtual camera is placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen; placement can be accomplished by a tracking system implementing a tracking algorithm.
• For each device, virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
• Each device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
• Each display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
• Each device may be a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
• For each device, two cameras should be arranged with respect to the individual being filmed such that they each capture a significantly different image, but still somewhat similar images.
• For each device, where two images differ significantly, graphical modelling techniques may be used to generate a virtual camera image.
• For each device, images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
• For each device, three or more cameras are used which are not collinearly arranged.
• For each device, three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions.
• For each device, three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device.
• For each device, three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
• For each device, device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
• For each device, three cameras are arranged on the vertices of an equilateral triangle, an isosceles triangle, a right angled triangle, a scalene triangle, or a triangle.
• Each device has the profile of a rectangle.
• Each device has the profile of a triangle.
• For each device, four cameras are arranged on the vertices of a square, a rectangle, a kite, a parallelogram, or a quadrilateral.
• For each device, user may be in an off-centre position because they move with respect to a fixed device
• For each device, user may be in an off-centre position because the device is not fixed
• For each device, user may be in an off-centre position because the device is handheld
• Each device moves, tilts or pans with respect to a user
• For each device, the user and the device may move
• For each device, device which provides a virtual camera may also provide a microphone and speaker
• For each device, device which provides a virtual camera may also provide a microphone and speaker; user of the device can be in voice communication with another user of the other device with a microphone and speaker
• For each device, virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
• For each device, mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
• For each device, image from the mobile virtual camera may have a selectable zoom level
• For each device, image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
• For each device, virtual camera may provide video output
• For each device, virtual camera may provide a photograph.
• For each device, virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
• For each device, virtual camera may be implemented so as to correct for unwanted zoom
• For each device, the optic axes of the n real cameras (where n≥2) of the system meet exactly or approximately at the position which is the position in which a subject is located in an ideal or reference case
• For each device, the optic axes of two cameras may meet at a point in front of the centre of the screen
• For each device, optic axes of three cameras may meet at a point in front of the centre of the screen
• For each device, optic axes of four cameras may meet at a point in front of the centre of the screen.
• For each device, point may be about 40 cm in front of the screen
• For each device, point may be about 2 m in front of the screen
• For each device, point may be about 4 m in front of the screen
• For each device, virtual camera is implemented so as to provide a two dimensional image
• For each device, virtual camera is implemented so as to provide a three dimensional image
• For each device, three dimensional image is for display on a three dimensional display
• For each device, three dimensional display is an autostereoscopic display, or a holographic display
• Computer is in the Cloud.
• Meeting camera device system wherein each device comprises at least two microphones.
• Meeting camera device system, wherein for each device, the system is operable using the computer to identify at least one sound source from the sound input received at the two microphones.
• Meeting camera device system, wherein the system is operable to provide at a receiving device a selectable option to reproduce only the sound from the identified sound source.
• Meeting camera device system, wherein upon selection at the receiving device of the option to reproduce only the sound from the identified sound source, the receiving device reproduces only the sound from the identified sound source.
Method of supplying a target sub-image corresponding to a portion of fields of view for a meeting camera device system comprising two devices and a computer, each device including a screen and n≥2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, wherein the computer includes a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of fields of view, the method comprising the steps of:
(i) a first device transmitting its camera images to a computer;
(ii) a second device transmitting its camera images to a computer;
(iii) the computer supplying to the second device a target sub-image corresponding to a portion of the fields of view, based on data transmitted by the first device to the computer, and
(iv) the computer supplying to the first device a target sub-image corresponding to a portion of the fields of view, based on data transmitted by the second device to the computer.
Computer program product operable to supply a target sub-image corresponding to a portion of fields of view for a meeting camera device system comprising two devices and a computer, each device including a screen and n≥2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, wherein the computer includes a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of fields of view, wherein a first device transmits its camera images to a computer, and a second device transmits its camera images to a computer, and the computer program product running on the computer is operable to supply to the second device a target sub-image corresponding to a portion of the fields of view, based on data transmitted by the first device to the computer, and the computer program product running on the computer is operable to supply to the first device a target sub-image corresponding to a portion of the fields of view, based on data transmitted by the second device to the computer.
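The distributed variant moves the virtual camera into the computer: each device uploads its raw camera images and receives back a sub-image built from the other device's upload. The four method steps above can be pictured with a toy relay (all names and the string placeholder are invented for illustration):

```python
class MeetingServer:
    """Toy model of the computer in the distributed meeting camera system."""

    def __init__(self):
        self.frames = {}  # device id -> list of uploaded camera images

    def upload(self, device_id, images):
        # Steps (i)/(ii): a device transmits its camera images to the computer.
        self.frames[device_id] = images

    def target_sub_image_for(self, device_id):
        # Steps (iii)/(iv): the computer supplies a sub-image built from
        # the data transmitted by the *other* device.
        others = [d for d in self.frames if d != device_id]
        if not others:
            return None
        src = others[0]
        return f"virtual view from {len(self.frames[src])} image(s) of {src}"
```

In the claimed system the `upload` and `target_sub_image_for` calls would be network requests, and the string would be the output of the virtual camera's image construction system; per the "Computer is in the Cloud" feature, `MeetingServer` could be a cloud service.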
APPENDIX 1: OTHER CONCEPTS
A. Yota introduction
1. The main focus for Yota's IP protection strategy will be its new LTE phone. The LTE phone will include innovative software and hardware, and will provide an innovative user experience. See for example Figs. 1 to 23.
B. List of Concepts
1. 'Meet Camera' - seeing eye-to-eye when video conferencing
Conventional video phones give a very poor user experience because there's rarely eye-to- eye contact— instead, the caller seems to be looking away from you since he's looking away from the camera. We place cameras on either side of the LCD screen to create a virtual camera in the centre of the screen, using an algorithm based on the two images. The image taken by the virtual camera is what is shown to the other party: this gives the impression to the other party that you are looking directly at them— a much better user experience.
One advantage of Meet Camera is that one can approach a large panel display with always on video-conferencing and talk directly to the person shown on it - giving the feeling of eye-to-eye contact.
The face displayed by the virtual camera can be placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen. This placement can be accomplished by a tracking algorithm.
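The centering described above reduces to a crop window that follows the tracked face position. A minimal sketch (the face tracker itself is assumed to exist; the function below only performs the clamped centering crop, and all names are illustrative):

```python
import numpy as np

def centre_on_face(frame: np.ndarray, face_centre: tuple,
                   out_h: int, out_w: int) -> np.ndarray:
    """Crop an out_h x out_w window of `frame` centred on the tracked
    face position, clamped to the frame borders, so the displayed face
    stays in the middle of the screen even when the person moves away
    from the centre of the captured image."""
    h, w = frame.shape[:2]
    cy, cx = face_centre
    # Clamp the window so it never leaves the captured frame.
    top = min(max(cy - out_h // 2, 0), h - out_h)
    left = min(max(cx - out_w // 2, 0), w - out_w)
    return frame[top:top + out_h, left:left + out_w]
```

Run per frame with the tracker's latest face estimate, this keeps the transmitted virtual camera image centred on the face.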
2. Capacitive 'Hold' sensors
With a conventional phone, one has to manually activate the home screen if the phone is in its idle state, usually by pressing a button. We use capacitor sensor strips in the phone, so that the phone can know if the user has picked it up and then automatically wake-up— e.g. activate the start-up/home screen.
This could be used instead of a soft or hard key lock on the phone as well as for the screen brightness.
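The wake-on-pick-up behaviour amounts to threshold logic over the capacitive strip readings. A minimal sketch (the threshold value and names are invented; real firmware would also debounce and calibrate the strips):

```python
class HoldSensor:
    """Toy wake-up logic for capacitive 'hold' sensor strips."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.awake = False

    def update(self, strip_readings) -> bool:
        # Wake the phone (activate the start-up/home screen) as soon as
        # any strip reports capacitance above the grip threshold.
        if any(r > self.threshold for r in strip_readings):
            self.awake = True
        return self.awake
```

The same readings could drive the key-lock and screen-brightness behaviour mentioned above.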
3. DML Phone speaker
It's hard to get good quality audio performance, unless you have a large speaker with a large and ugly speaker hole. We use NXT plc distributed mode loudspeaker (DML) technology here to vibrate the entire phone screen - the whole screen surface acts as the speaker. The speaker hole can be fully eliminated. One can use two small drivers/exciters under the glass to make the screen vibrate. DML has never been used before to drive a screen surface in a mobile phone. Haptic feedback can be provided by the drivers too — a new use for the DML exciters.
4. Mobile hot spot - 'Instant 4G'
We provide a simple hard (or soft) switch on the phone, to enable instant and automatic sharing of a WiFi network, using the phone as a mobile hot spot. A user can instantly share internet access using this switch on the phone, instead of a complex user interface (UI). So one could be at a party and instantly enable friends to access the internet via your phone. Files on the phone could then also be shared (access control would prevent other files from being shared).
5. Virtual Web-USB interface for wireless devices
iPhone/iPad has no USB connector — a major disadvantage. We provide a WiFi connection from a WiFi dongle with a USB interface; the iPhone/iPad can then interface to a memory in the WiFi dongle, plus any external device that the USB dongle is plugged into, just as though the USB interface was native to the iPhone. So you could view the file structure of files stored on the USB dongle itself in a web browser on the iPhone, or print to a printer the USB dongle is interfaced to.
6. USB stick for in-car audio In-car audio systems often have USB interfaces for MP3 files, but have no way of accessing internet radio (which is currently only available on very high-end systems). We provide a wireless-data-enabled USB dongle that can receive streaming radio (e.g. internet radio stations, Spotify, etc.). The USB dongle captures the data stream and converts it to a sequence of files, just like the MP3 files the in-car audio is designed to read. This gives even a basic in-car audio device playback/rewind, store, etc. functionality for internet radio.
The streamed audio is stored as at least two separate files, which allows the user to skip to the next track using the car audio system's own controls. The user can listen to music online in the car with no modifications to the in-car audio system. An online interface is used for setting up the service and selecting the stream source.
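The stream-to-files conversion above can be sketched as cutting the captured stream into fixed-size, sequentially numbered files, so a basic player sees ordinary "tracks" it can skip between. The chunk size and the naming scheme are illustrative assumptions, not details from the original text.

```python
# Hypothetical chunker: split a captured radio stream into numbered
# MP3-like files so an in-car player can treat each chunk as a track.

def chunk_stream(stream_bytes, chunk_size):
    """Split a captured stream into an ordered list of (filename, data) pairs."""
    chunks = []
    for index, start in enumerate(range(0, len(stream_bytes), chunk_size)):
        name = "radio_%03d.mp3" % (index + 1)  # assumed naming convention
        chunks.append((name, stream_bytes[start:start + chunk_size]))
    return chunks
```

On the real dongle each pair would be written to the USB mass-storage filesystem as it fills, giving the player at least two files to skip between.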
7. User experience (UX) to identify sound sources Individual sound sources (different people speaking to a phone in hands-free mode) are identified with two or more inbuilt microphones. The individual sources are then graphically represented on the device according to their position in the room. A visual interface on the phone enables manual selection of which sound source to record, e.g. to optimise the noise cancellation/sonic focus for the selected sound source. This is advantageous in, for instance, meetings where one person is talking and you want to aggressively cancel everything else.
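One common way to identify which side a talker is on with two microphones, consistent with the feature above, is to estimate the inter-microphone delay by cross-correlation: the sign of the best lag indicates the side of the device the sound came from. This is a toy brute-force sketch under that assumption; production systems would use sample-rate-aware methods such as GCC-PHAT, which the original text does not specify.

```python
# Toy time-difference-of-arrival (TDOA) estimator for two microphone
# signals, using brute-force cross-correlation over a small lag window.

def estimate_delay(left, right, max_lag):
    """Return the lag (in samples) of `right` relative to `left` that
    maximises their cross-correlation; a positive lag means the sound
    reached the left microphone first."""
    best_lag, best_score = 0, float("-inf")
    n = min(len(left), len(right))
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                score += left[i] * right[j]
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag
```

Mapping each talker's estimated delay to a screen position would give the graphical source display described above.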
8. Phone with no visible mechanical buttons The phone presents a seamless, unibody surface— although it can still have hidden mechanical buttons e.g. for volume up, volume down.
9. Squeeze control You can turn the phone on or off by squeezing it.
10. Curved phone
A unique and organic phone shape - essential for rapid product differentiation in a crowded space. The mobile phone has a concave front face and a convex rear face of the same or similar magnitude of curvature. The concave front matches the path of a finger as the wrist rotates, so it is very natural to use. Having a curved surface as the vibrating DML speaker is also better: if the LCD with the speaker exciters were instead flat, it would sound unpleasant when placed face down against a tabletop. Curving the surface prevents this. The preferred curvature of front and back is cylindrical, rather than spherical or aspherical. See e.g. Figs 13, 14, 17.
The convex back can have a bistable display. Since the normal resting position is front face down, the back screen with the bi-stable display is normally visible when the phone is at rest. This resting position is stable. If the phone is placed back down (i.e. convex face down), it could spin, which is unstable. Hence a user will likely place the phone front face (i.e. concave face) down, with the bi-stable screen showing. When the phone is in a pocket, the front (concave) face can face inwards, since this better matches leg curvature. This can also be the better configuration (as opposed to front face up) for antenna reception.
11. Microphone in SIM card "eject hole"
The microphone is placed in a hole in the body of the mobile device, in the SIM card's eject hole. See Fig. 23.
12. Tactile casing of mobile device
The casing of the mobile device consists of a material that can change its tactile properties from wood to metal ("morphing").
APPENDIX 2: PRIMER ON LTE
3GPP Long Term Evolution (LTE) is the latest standard in the mobile network technology tree that produced the GSM/EDGE and UMTS/HSPA network technologies. It is a project of the 3rd Generation Partnership Project (3GPP), operating under a name trademarked by one of the associations within the partnership, the European Telecommunications Standards Institute.
The current generation of mobile telecommunication networks are collectively known as 3G (for "third generation"). Although LTE is often marketed as 4G, first-release LTE does not fully comply with the IMT Advanced 4G requirements. The pre-4G standard is a step toward LTE Advanced, a 4th generation standard (4G) of radio technologies designed to increase the capacity and speed of mobile telephone networks. LTE Advanced is backwards compatible with LTE and uses the same frequency bands, while LTE is not backwards compatible with 3G systems.
MetroPCS and Verizon Wireless in the United States and several worldwide carriers announced plans, beginning in 2009, to convert their networks to LTE. The world's first publicly available LTE service was opened by TeliaSonera in the Scandinavian capitals Stockholm and Oslo on 14 December 2009. LTE is a set of enhancements to the Universal Mobile Telecommunications System (UMTS) which was introduced in 3rd Generation Partnership Project (3GPP) Release 8. Much of 3GPP Release 8 focuses on adopting 4G mobile communication technology, including an all-IP flat networking architecture. On August 18, 2009, the European Commission announced it will invest a total of €18 million into researching the deployment of LTE and the certified 4G system LTE Advanced.
While it is commonly seen as a cell phone or common carrier development, LTE is also endorsed by public safety agencies in the US as the preferred technology for the new 700 MHz public-safety radio band. Agencies in some areas have filed for waivers hoping to use the 700 MHz spectrum with other technologies in advance of the adoption of a nationwide standard. The LTE specification provides downlink peak rates of at least 100 Mbps, an uplink of at least 50 Mbps and RAN round-trip times of less than 10 ms. LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz and supports both frequency division duplexing (FDD) and time division duplexing (TDD).
Part of the LTE standard is the System Architecture Evolution, a flat IP-based network architecture designed to replace the GPRS Core Network and ensure support for, and mobility between, some legacy or non-3GPP systems, for example GPRS and WiMAX respectively.
The main advantages of LTE are high throughput, low latency, plug-and-play operation, FDD and TDD on the same platform, an improved end-user experience, and a simple architecture resulting in low operating costs. LTE will also support seamless handover to cell towers using older network technology such as GSM, cdmaOne, UMTS, and CDMA2000. The next step in LTE evolution is LTE Advanced, which is currently being standardized in 3GPP Release 10.
APPENDIX 3: PRIMER ON LTE ADVANCED
LTE Advanced is a preliminary mobile communication standard. It was formally submitted as a candidate 4G system to the ITU-T in late 2009, was accepted into the International Telecommunication Union's IMT-Advanced programme, and is expected to be finalized by 3GPP in early 2011. It is standardized by the 3rd Generation Partnership Project (3GPP) as a major enhancement of the 3GPP Long Term Evolution (LTE) standard.
The LTE format was first proposed by NTT DoCoMo of Japan and has been adopted as an international standard. LTE standardization has now reached a mature state, where changes to the specification are limited to corrections and bug fixes. The first commercial services were launched in Scandinavia in December 2009, followed by the United States and Japan in 2010. More first-release LTE networks are expected to be deployed globally during 2010 as a natural evolution of several 2G and 3G systems, including the Global System for Mobile Communications (GSM) and the Universal Mobile Telecommunications System (UMTS) (3GPP as well as 3GPP2).
Described as a 3.9G (beyond 3G but pre-4G) technology, first-release LTE does not meet the IMT-Advanced requirements for 4G as defined by the International Telecommunication Union, such as peak data rates up to 1 Gbit/s. The ITU has invited the submission of candidate Radio Interface Technologies (RITs) meeting the requirements set out in a circular letter. The work by 3GPP to define a 4G candidate radio interface technology started in Release 9 with the study phase for LTE-Advanced. The requirements for LTE-Advanced are defined in 3GPP Technical Report (TR) 36.913, "Requirements for Further Advancements for E-UTRA (LTE-Advanced)." These requirements are based on the ITU requirements for 4G and on 3GPP operators' own requirements for advancing LTE. Major technical considerations include the following:
• Continual improvement to the LTE radio technology and architecture
• Scenarios and performance requirements for interworking with legacy radio access technologies
• Backward compatibility of LTE-Advanced with LTE. An LTE terminal should be able to work in an LTE-Advanced network and vice versa. Any exceptions will be considered by 3GPP.
• Account taken of recent World Radiocommunication Conference (WRC-07) decisions regarding new IMT spectrum as well as existing frequency bands, to ensure that LTE-Advanced geographically accommodates available spectrum for channel allocations above 20 MHz. Also, requirements must recognize those parts of the world in which wideband channels are not available.
Likewise, 802.16m, 'WiMAX 2', has been approved by the ITU into the IMT-Advanced family. WiMAX 2 is designed to be backward compatible with WiMAX 1/1.5 devices. Most vendors now support easy conversion of earlier 'pre-4G', pre-advanced versions, and some support software-defined upgrades of core base station equipment from 3G.
The mobile communication industry and standardization organizations have therefore started work on 4G access technologies such as LTE Advanced. At a workshop in April 2008 in China, 3GPP agreed on plans for future work on Long Term Evolution (LTE). A first set of 3GPP requirements for LTE Advanced was approved in June 2008. Besides the peak data rate of 1 Gbit/s, which fully satisfies the 4G requirements defined by the ITU-R, LTE Advanced also targets faster switching between power states and improved performance at the cell edge. Detailed proposals are being studied within the working groups.

Claims

1. Meeting camera device including a screen and two or more cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
2. Meeting camera device of Claim 1 operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
3. Meeting camera device of Claim 2 wherein the viewer is located in an off-centre position with respect to the screen.
4. Meeting camera device of Claim 3 wherein virtual camera placement is accomplished by a tracking system tracking the viewer and implementing a tracking algorithm.
5. Meeting camera device of Claim 4 wherein the tracking system tracks a viewer's eye or eyes, and the virtual camera is centred on an eye of the viewer.
6. Meeting camera device of Claim 5 wherein the virtual camera is centred on a right eye of the viewer.
7. Meeting camera device of Claims 5 or 6, wherein the tracking system is operable to record its tracking statistics for the tracking of a user's eye or eyes.
8. Meeting camera device of Claim 7 wherein the tracking system is operable to record its tracking of a user's eye or eyes to provide data which if corresponding to a predefined sequence will unlock the device.
9. Meeting camera device of Claims 1 or 2 wherein the virtual camera is situated in the centre of the screen.
10. Meeting camera device of any of Claims 1 to 9, wherein parallax information is used in constructing the virtual camera image.
11. Meeting camera device of any of Claims 1 to 10, wherein two cameras are arranged with respect to the viewer such that they each capture significantly different, but still somewhat similar, images.
12. Meeting camera device of any of Claims 1 to 11, wherein where two images differ significantly, graphical modelling techniques are used to generate a virtual camera image.
13. Meeting camera device of any of Claims 1 to 12, wherein images taken from different cameras of a face and head are projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation being imaged from in front of the face, so as to provide a virtual camera image from in front of the face.
14. Meeting camera device of any previous Claim, wherein optic axes of the cameras meet exactly or approximately at a position which is the position in which a subject is located in an ideal or reference case.
15. Meeting camera device of any of Claims 1 to 14, wherein optic axes of the cameras meet at a point in front of the centre of the screen.
16. Meeting camera device of any previous Claim, comprising exactly two cameras.
17. Meeting camera device of Claim 16, wherein cameras are placed on either side of the device screen.
18. Meeting camera device of any of Claims 1 to 15, comprising three cameras.
19. Meeting camera device of Claim 18, wherein the cameras are arranged on the vertices of a triangle.
20. Meeting camera device of Claim 19, wherein parallax information is available along orthogonal directions.
21. Meeting camera device of any of Claims 1 to 15, comprising exactly four cameras.
22. Meeting camera device of Claim 21, wherein the cameras are arranged on the vertices of a quadrilateral.
23. Meeting camera device of Claim 22, wherein parallax information is available along orthogonal directions.
24. Meeting camera device of any previous Claim, wherein an image taken by the virtual camera is shown to another party.
25. Meeting camera device of Claim 24, wherein the device provides for seeing eye-to- eye when video conferencing.
26. Meeting camera device of Claim 25, wherein a viewer can approach a large panel display with continuous video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact.
27. Meeting camera device of any previous Claim, wherein the device comprises an integral microphone and speaker.
28. Meeting camera device of any previous Claim, wherein an image from the virtual camera has a selectable zoom level.
29. Meeting camera device of any previous Claim, wherein an image from the virtual camera has selectable tilt or selectable pan, or both selectable tilt and selectable pan.
30. Meeting camera device of any previous Claim, wherein the virtual camera is operable to correct for unwanted zoom present in the image of a user.
31. Meeting camera device of any previous Claim, wherein the virtual camera is operable to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user.
32. Meeting camera device of any previous Claim, wherein the device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
33. Meeting camera device of any previous Claim, wherein a device display is a liquid crystal display, a plasma screen display, a cathode ray tube, an organic light emitting diode (OLED) display, or a bistable display.
34. Meeting camera device of any previous Claim, wherein the device is a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
35. Meeting camera device of any previous Claim, wherein the virtual camera provides video output.
36. Meeting camera device of any previous Claim, wherein the virtual camera provides a photograph.
37. Meeting camera device of any previous Claim, wherein the device has a profile of a triangle.
38. Meeting camera device of Claim 15, wherein the point is about 40 cm from the screen.
39. Meeting camera device of Claim 15, wherein the point is about 2 m from the screen.
40. Meeting camera device of any of Claims 1 to 15, wherein the virtual camera is implemented so as to provide a two dimensional image.
41. Meeting camera device of any of Claims 1 to 15, wherein the virtual camera is implemented so as to provide a three dimensional image.
42. Meeting camera device of Claim 41, wherein the three dimensional image is for display on a three dimensional display.
43. Meeting camera device of Claim 42, wherein the three dimensional display is an autostereoscopic display, or a holographic display.
44. Meeting camera device of any previous Claim, wherein the device comprises at least two microphones.
45. Meeting camera device of Claim 44, wherein the device is operable to identify at least one sound source from the sound input received at the two microphones.
46. Meeting camera device of Claim 45, wherein the device is operable to provide to a receiving device a selectable option to transmit only the sound from the identified sound source.
47. Meeting camera device of Claim 46, wherein upon selection at the receiving device of the option to transmit only the sound from the identified sound source, the meeting camera device transmits only the sound from the identified sound source.
48. Meeting camera system including a device comprising a screen and two or more cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
49. Meeting camera system of Claim 48 comprising a remote computer, wherein images from the cameras are transmitted to the remote computer, at which the virtual camera image is generated.
50. Meeting camera system of Claim 48 comprising the cloud, wherein images from the cameras are transmitted to the cloud, at which the virtual camera image is generated.
51. Meeting camera system of Claim 48 comprising a different display device, wherein images from the cameras are transmitted to the different display device, and wherein the virtual camera image is generated and displayed at the different display device.
52. Meeting camera system of any of Claims 49 to 51, wherein the device comprises exactly two cameras.
53. Meeting camera system of any of Claims 49 to 51, wherein the device comprises three cameras.
54. Meeting camera system of any of Claims 48 to 53, wherein the virtual camera is implemented so as to provide a three dimensional image.
55. Meeting camera device system comprising two devices, each device including a screen and two or more cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device.
56. Meeting camera device system of Claim 55, wherein provision of a target sub-image includes transmission via a mobile phone network, or transmission via the internet, or transmission via a network, or transmission via a wired network, or transmission via a wireless network.
57. Meeting camera device system of Claims 55 or 56, wherein each device includes exactly two cameras.
58. Meeting camera device system of Claims 55 or 56, wherein each device includes three cameras.
59. Meeting camera device system of any of Claims 55 to 58, wherein each virtual camera is implemented so as to provide a three dimensional image.
60. Meeting camera device system comprising two devices and a computer, each device including a screen and two or more cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device transmitting its camera images to a computer, the computer including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device receives a target sub-image based on data transmitted by the other device to the computer.
61. Meeting camera device system of Claim 60, wherein provision of a target sub-image includes transmission via a mobile phone network, or transmission via the internet, or transmission via a network, or transmission via a wired network, or transmission via a wireless network.
62. Meeting camera device system of Claims 60 or 61, wherein each device includes exactly two cameras.
63. Meeting camera device system of Claims 60 or 61, wherein each device includes three cameras.
64. Meeting camera device system of any of Claims 60 to 63, wherein each virtual camera is implemented so as to provide a three dimensional image.
65. Meeting camera device system of any of Claims 60 to 64, wherein the computer is in the Cloud.
PCT/RU2011/000817 2010-10-20 2011-10-20 Meeting camera WO2012053940A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/RU2012/000027 WO2012099505A1 (en) 2011-01-21 2012-01-23 Mobile device with lighting
TW101134923A TW201332336A (en) 2011-10-03 2012-09-24 Device with display screen

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GBGB1017776.4A GB201017776D0 (en) 2010-10-20 2010-10-20 Yota 201010
GB1017776.4 2010-10-20
GBGB1020999.7A GB201020999D0 (en) 2010-12-10 2010-12-10 Yota UI 1
GB1020999.7 2010-12-10

Publications (2)

Publication Number Publication Date
WO2012053940A2 true WO2012053940A2 (en) 2012-04-26
WO2012053940A3 WO2012053940A3 (en) 2012-06-14

Family

ID=45464816

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/RU2011/000817 WO2012053940A2 (en) 2010-10-20 2011-10-20 Meeting camera

Country Status (1)

Country Link
WO (1) WO2012053940A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9363426B2 (en) 2014-05-29 2016-06-07 International Business Machines Corporation Automatic camera selection based on device orientation

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5650814A (en) 1993-10-20 1997-07-22 U.S. Philips Corporation Image processing system comprising fixed cameras and a system simulating a mobile camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6466250B1 (en) * 1999-08-09 2002-10-15 Hughes Electronics Corporation System for electronically-mediated collaboration including eye-contact collaboratory
JP2004048644A (en) * 2002-05-21 2004-02-12 Sony Corp Information processor, information processing system and interlocutor display method
US7330584B2 (en) * 2004-10-14 2008-02-12 Sony Corporation Image processing apparatus and method

Also Published As

Publication number Publication date
WO2012053940A3 (en) 2012-06-14

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11806001; Country of ref document: EP; Kind code of ref document: A2)
NENP Non-entry into the national phase in: Ref country code: DE
122 Ep: pct app. not ent. europ. phase (Ref document number: 11806001; Country of ref document: EP; Kind code of ref document: A2)