WO2012099505A1 - Mobile device provided with a lighting device - Google Patents

Mobile device provided with a lighting device

Info

Publication number
WO2012099505A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile device
infra red
image
display
cameras
Application number
PCT/RU2012/000027
Other languages
English (en)
Inventor
Dmitry Alekseevich Gorilovsky
Original Assignee
Yota Devices Ipr Ltd.
Priority claimed from GBGB1112458.3A (GB201112458D0)
Priority claimed from PCT/RU2011/000817 (WO2012053940A2)
Application filed by Yota Devices Ipr Ltd.
Publication of WO2012099505A1
Priority to TW101134923A (TW201332336A)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/20 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
    • H04N 23/21 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only from near infrared [NIR] radiation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/56 Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/2224 Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N 7/144 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display camera and display on the same optical axis, e.g. optically multiplexing the camera and display for eye to eye contact

Definitions

  • the invention relates to mobile devices comprising a light sensor, a display and a source of infra red light, wherein the infra red light source is arranged to provide illumination of a field of view of the light sensor, and to methods and computer program products associated with such devices.
  • Figure 24 shows an example of a problem which can arise.
  • a first person is having a video call in low lighting conditions
  • a second person is having a video call under bright lighting conditions
  • the first person and the second person are having a video call with each other. Because the first person is in low lighting conditions, a very dark image of the first person appears on the second person's screen, so that the second person may not be able to see the first person well, or even at all, which is not desirable in a video call.
  • In Figure 1 an image of a first person is shown on the screen of a second person.
  • the image of the first person is seen from off-centre, because the first person is looking at their screen centre while being filmed by an off-centre camera.
  • In Figure 1 we see the back of the head of a second person who is viewing their (the second person's) screen. The second person is being filmed by an off-centre camera.
  • the image of the second person provided to the first person will be off-centre, in common with the image of the first person supplied to the second person.
  • an image processing system comprising: a system of n fixed real cameras arranged in such a way that their individual fields of view merge so as to form a single wide-angle field of view for recording a panoramic scene; an image construction system simulating a mobile virtual camera continuously scanning the panoramic scene to furnish a target sub-image corresponding to an arbitrary section of the wide-angle field of view and constructed from adjacent source images furnished by the n real cameras.
  • a mobile device comprising a camera, a display and a source of infra red light, wherein the infra red light source is arranged to provide illumination of a field of view of the camera.
  • the mobile device may be arranged to obtain an image from the camera, to obtain a parameter from the image, and to adjust the infra red light source intensity based on the parameter.
  • the mobile device may be arranged to adjust the infra red light source intensity so as to optimize an image intensity obtained by the device camera.
  • the mobile device may be one wherein adjustment is automatic, without user intervention.
  • the mobile device may be one wherein the device is arranged to turn off the infra red light source when the light intensity at the camera is above a threshold.
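As an illustration of the adjustment described in the preceding bullets, the following Python sketch nudges the infra red source towards a target image brightness and switches it off when ambient light is sufficient. The hardware hooks (capture_frame, set_ir_intensity) and all numeric values are hypothetical placeholders, not taken from the patent.

```python
import numpy as np

TARGET_MEAN = 110        # desired mean pixel intensity, 0-255 (assumed)
BRIGHT_THRESHOLD = 160   # ambient level above which the IR source is turned off (assumed)
GAIN = 0.01              # proportional step per unit of brightness error (assumed)

def adjust_ir(capture_frame, set_ir_intensity, ir_level=0.5):
    """One control step: measure image brightness, then nudge the IR source."""
    frame = capture_frame()                  # greyscale image as a numpy array
    mean_brightness = float(np.mean(frame))
    if mean_brightness > BRIGHT_THRESHOLD:
        set_ir_intensity(0.0)                # plenty of ambient light: save battery
        return 0.0
    # proportional adjustment towards the target image intensity
    ir_level += GAIN * (TARGET_MEAN - mean_brightness)
    ir_level = max(0.0, min(1.0, ir_level))  # clamp to the LED driver's range
    set_ir_intensity(ir_level)
    return ir_level
```

Calling adjust_ir once per captured frame would give the automatic, user-intervention-free behaviour described above.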
  • the mobile device may be one wherein the infra red light source is a near infra red light source.
  • the mobile device may be one wherein the display includes a source of visible light.
  • the mobile device may be one wherein the source of infra red light and source of visible light are situated near to each other.
  • the mobile device may be one wherein the infra red light source intensity is adjustable independently of the visible light source intensity.
  • the mobile device may be one wherein the display is provided on a major face of the device, and the infra red light source is provided on the same major face of the device as the display.
  • the mobile device may be one wherein the infra red light source is an infra red LED incorporated into the display.
  • the mobile device may be one wherein the infra red light source is an infra red LED provided near to the display.
  • the mobile device may be one wherein the infra red light source is an infra red LED provided near to a border of the display.
  • the mobile device may be one wherein an infra red LED is an AlGaAs LED.
  • the mobile device may be one wherein the display is a liquid crystal display (LCD).
  • the mobile device may be one wherein a source of infra red light is in a backlight of the LCD display, wherein the backlight includes a source of visible light.
  • the mobile device may be one wherein an infra red light source includes an infra red LED.
  • the mobile device may be one including a major face, wherein the display is an OLED display on the major face of the device.
  • the mobile device may be one wherein an infra red OLED is provided on the same major face of the device as the display.
  • the mobile device may be one wherein infra red OLEDs are incorporated into the display.
  • the mobile device may be one wherein an infra red OLED is provided near to the display.
  • the mobile device may be one wherein an infra red OLED is provided near to a border of the display.
  • the mobile device may be one wherein the mobile device is arranged to transmit an image taken by the camera under infra red illumination to a further mobile device.
  • the mobile device may be one wherein the device is operable to register that an object has come close to the device, using an increase in light measured at the camera, when the infra red source is on.
  • the mobile device may be a mobile phone.
  • the mobile device may be a portable navigation device.
  • the mobile device may be a personal digital assistant.
  • the mobile device may be a tablet computer.
  • the mobile device may be a laptop computer.
  • the mobile device may be one including two or more cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
  • the mobile device may be one operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face.
  • the mobile device may be one wherein the viewer is located in an off-centre position with respect to the display.
  • the mobile device may be one wherein virtual camera placement is accomplished by a tracking system tracking the viewer and implementing a tracking algorithm.
  • the mobile device may be one wherein the tracking system tracks a viewer's eye or eyes, and the virtual camera is centred on an eye of the viewer.
  • the mobile device may be one wherein the virtual camera is centred on a right eye of the viewer.
  • the mobile device may be one wherein the tracking system is operable to record its tracking statistics for the tracking of a user's eye or eyes.
  • the mobile device may be one wherein the tracking system is operable to record its tracking of a user's eye or eyes to provide data which if corresponding to a predefined sequence will unlock the device.
  • the mobile device may be one wherein the virtual camera is situated in the centre of the display.
  • the mobile device may be one wherein parallax information is used in constructing the virtual camera image.
  • the mobile device may be one wherein two cameras are arranged with respect to the viewer such that they each capture images which are significantly different, but still somewhat similar.
  • the mobile device may be one wherein where two images differ significantly, graphical modelling techniques are used to generate a virtual camera image.
  • the mobile device may be one wherein images taken from different cameras of a face and head are projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation being imaged from in front of the face, so as to provide a virtual camera image from in front of the face.
  • the mobile device may be one wherein optic axes of the cameras meet exactly or approximately at a position which is the position in which a subject is located in an ideal or reference case.
  • the mobile device may be one wherein optic axes of the cameras meet at a point in front of the centre of the display.
  • the mobile device may be one comprising exactly two cameras.
  • the mobile device may be one wherein cameras are placed on either side of the device display.
  • the mobile device may be one comprising three cameras.
  • the mobile device may be one wherein the cameras are arranged on the vertices of a triangle.
  • the mobile device may be one wherein parallax information is available along orthogonal directions.
  • the mobile device may be one wherein an image taken by the virtual camera is shown to another party.
  • the mobile device may be one wherein the device provides for seeing eye-to-eye when video conferencing.
  • the mobile device may be one wherein a viewer can view the display with continuous video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact.
  • the mobile device may be one wherein an image from the virtual camera has a selectable zoom level.
  • the mobile device may be one wherein an image from the virtual camera has selectable tilt or selectable pan, or both selectable tilt and selectable pan.
  • the mobile device may be one wherein the virtual camera is implemented so as to provide a three dimensional image.
  • the mobile device may be one wherein the device comprises an integral microphone and speaker.
  • Figure 1 shows a system in which there is displayed an image of a first person looking at their screen centre, being filmed by their off-centre camera, and being shown on a second person's screen.
  • Figure 2 shows a device comprising a screen and two off-centre cameras. The cameras are arranged in a major face of the device.
  • Figure 3 shows a system in which there is displayed an image of a first person looking at their screen centre, being filmed by their two off-centre cameras, which are part of a virtual camera, and being shown on second person's screen by the virtual camera.
  • Figure 4 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their two off-centre cameras, which are part of a virtual camera, and being shown on second person's screen by the virtual camera.
  • Figure 5 shows a device comprising a screen and three off-centre cameras. The cameras are arranged in a major face of the device.
  • Figure 6 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras, which are part of a virtual camera, and being shown on second person's screen by the virtual camera.
  • Figure 7 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras arranged on the vertices of an equilateral triangle, which are part of a virtual camera, and being shown on second person's screen by the virtual camera.
  • Figure 8 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their three off-centre cameras on a device with a rectangular profile, which are part of a virtual camera, and being shown on second person's screen by the virtual camera.
  • Figure 9 shows a device comprising a screen and four off-centre cameras. The cameras are arranged in a major face of the device.
  • Figure 10 shows a system in which there is displayed an image of a first person looking at their screen centre from an off-centre position (not shown), being filmed by their four off-centre cameras, which are part of a virtual camera, and being shown on second person's screen by the virtual camera.
  • Figure 11 shows an example of a customer proposition.
  • Figure 12 shows an example of a smartphone specification.
  • Figure 13 shows an example of a mobile device industrial design.
  • Figure 14 shows an example of a mobile device industrial design.
  • Figure 15 shows an example of a mobile phone hardware specification.
  • Figure 16 shows examples of chipsets for mobile devices.
  • Figure 17 shows an example specification for a back screen of a mobile device.
  • Figure 18 shows an example software architecture of a mobile device.
  • Figure 19 shows examples of aspects of an example mobile device.
  • Figure 20 shows examples of an applications concept for a mobile device.
  • Figure 21 shows examples of applications for a mobile device.
  • Figure 22 shows further examples of applications for a mobile device.
  • Figure 23 shows an example of a mobile device in which the microphone is placed in a hole in the body of the mobile device, in the SIM card's eject hole.
  • Figure 24 shows a first person having a video call in low lighting conditions, and a second person having a video call under bright lighting conditions; the first person and the second person are having a video call with each other. Because the first person is in low lighting conditions, a very dark image of the first person appears on the second person's screen, so that the second person may not be able to see the first person well, or even at all, which is not desirable in a video call.
  • Figure 25 shows an example of a first and second person having a video call with each other, where the first person is in low lighting conditions and the second person is in bright lighting conditions.
  • the first person's phone has an infra red light source in its backlight.
  • Figure 26 shows an infra red OLED provided near to each of the four corners of the first person's display. These infra red OLEDs can be illuminated in low lighting conditions so as to provide an optimum image intensity at a camera of the first person's display device.
  • Figure 27 shows an infra red LED provided near to each of the four corners of the first person's display. These infra red LEDs can be illuminated in low lighting conditions so as to provide an optimum image intensity at a camera of the first person's display device.
  • Figure 28 shows an example of a first and second person having a video call with each other, where the first person is in low lighting conditions and the second person is in low lighting conditions.
  • the first person's phone has an infra red light source in its backlight.
  • the second person's phone has an infra red light source in its backlight.
  • an infra red light lighting arrangement for a device with a camera. More particularly there is provided an infra red light lighting arrangement for a mobile device with a camera. More particularly there is provided an infra red light lighting arrangement for a mobile phone with a camera.
  • a backlight with an infra red light source where the device (eg. a mobile device, eg. a mobile phone) includes a camera.
  • An advantage is that this helps the user to have a good user experience for video calls in low lighting conditions.
  • an infrared backlight is provided for a front-facing camera on a mobile device (eg. a mobile phone).
  • An advantage is that users in low lighting conditions are not “blinded” (or dazzled) with visible light from their device display when the device is trying to illuminate them better so as to obtain a better image of them.
  • this may be readily implemented because all camera optical sensors are capable of capturing at least some of the near infrared spectrum as well as the visible spectrum.
  • an infrared backlight or other infra red light source on a device (eg. a mobile device, such as a mobile phone) with a proximity sensor.
  • Proximity sensors may operate using an infrared light source (eg. an infra red LED) and an infrared sensor to measure a reflected or scattered infra red light signal from an object close to the proximity sensor. For example, when the device measures a sufficient increase in the quantity of reflected light at the camera when the infra red source is on, the device can register that an object has come close to the device.
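A minimal sketch of this camera-as-proximity-sensor logic, reusing the same hypothetical capture_frame and set_ir_intensity hooks: brightness is sampled with the infra red source off and then on, and a sufficiently large increase is registered as a nearby object. The threshold value is an assumption.

```python
import numpy as np

PROXIMITY_DELTA = 40.0  # brightness increase taken to signal a close object (assumed)

def object_is_close(capture_frame, set_ir_intensity):
    """Compare image brightness with the IR source off and on."""
    set_ir_intensity(0.0)
    baseline = float(np.mean(capture_frame()))
    set_ir_intensity(1.0)
    lit = float(np.mean(capture_frame()))
    set_ir_intensity(0.0)
    # a close object reflects much of the emitted IR light back into the camera
    return (lit - baseline) > PROXIMITY_DELTA
```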
  • Figure 25 shows an example of a first person and a second person having a video call with each other, where the first person is in low lighting conditions and the second person is in bright lighting conditions.
  • the first person's phone has an infra red light source in its backlight. Infra red light from this backlight is not visible to the user. But the light is emitted from the display, reflects or scatters off the first person, and is detected at the camera of the first person's device.
  • the infra red backlight intensity may be adjusted (eg. automatically by the device) so as to provide a suitably bright image at the camera.
  • That image of the first person can then be transmitted for display at the second person's display, so that the second person can see a suitably bright image of the first person on their display, in contrast to Figure 24, in which no infra red backlight is provided on the first person's phone.
  • Figure 28 shows an example of a first person and a second person having a video call with each other, where the first person is in low lighting conditions and the second person is in low lighting conditions.
  • the first person's phone has an infra red light source in its backlight. Infra red light from this backlight is not visible to the user. But the light is emitted from the display, reflects or scatters off the first person, and is detected at the camera of the first person's device.
  • the infra red backlight intensity may be adjusted (eg. automatically by the device) so as to provide a suitably bright image at the camera.
  • That image of the first person can then be transmitted for display at the second person's display, so that the second person can see a suitably bright image of the first person on their display, in contrast to Figure 24, in which no infra red backlight is provided on the first person's phone.
  • the second person's phone has an infra red light source in its backlight. Infra red light from this backlight is not visible to the user. But the light is emitted from the display, reflects or scatters off the second person, and is detected at the camera of the second person's device. The infra red backlight intensity may be adjusted (eg. automatically by the device) so as to provide a suitably bright image at the camera.
  • That image of the second person can then be transmitted for display at the first person's display, so that the first person can see a suitably bright image of the second person on their display.
  • a mobile phone has a liquid crystal display (LCD)
  • the LCD is illuminated by a backlight.
  • An infra red source may be incorporated into the backlight; the infra red source may be a light emitting diode (LED), such as one fabricated using AlGaAs semiconductor materials; other semiconductor materials for infra red LEDs are known, such as GaAsP, InGaP, or organic light emitting materials.
  • infra red source which is independent from the visible light source in the backlight (which might be a white light LED, or a cold cathode fluorescent lamp (CCFL)), because this enables the device to adjust the intensities of the white light source and the infra red light source independently. So for example, under normal levels of visible light illumination, the infra red light source may be unnecessary, in which case it can be switched off, which can save on device power such as battery power. In other situations, the intensity of the infra red light can be adjusted, so as to provide an optimum image intensity at the device camera.
  • An infra red light source can be incorporated into the backlight of a LCD colour display.
  • Such an infra red light source may be situated near to the white light source, so that the backlight may provide similar light redirecting and redistribution characteristics for the infra red source to those provided for the white light source.
  • the infra red light will be emitted by the display, because at least the red filters of the colour LCD display will transmit well in the near infra red, even though the green and blue filters of the colour LCD display may not transmit very well in the near infra red.
  • a black and white LCD will transmit infra red well, because it transmits light across a broad range of wavelengths.
  • an advantage of including an infra red light source in the backlight of a LCD display is that such backlights provide a uniform illumination source across the display.
  • the backlight including an infra red light source will provide a relatively uniform infra red light source.
  • Because infra red light may be transmitted predominantly by the red filters in a LCD display, any image typically displayed on a LCD display will include a significant amount of light transmitted through the red sub-pixels of the pixels of the display. Hence the infra red light will typically be emitted plentifully from the LCD display.
  • infra red OLEDs may be incorporated into the display, or be provided near to the display, such as near a border or the borders of the display, so as to provide a source of infra red light.
  • the intensity of the infra red OLEDs can be adjusted independently from the intensity of the visible OLEDs on the display. So for example, in good lighting conditions, the infra red OLEDs can be turned off, which can save on device power, such as device battery power.
  • the intensity of the infra red OLEDs can be adjusted, so as to provide an optimum image at the device camera.
  • an infra red OLED is provided near to each of the four corners of the display. These infra red OLEDs can be illuminated in low lighting conditions so as to provide an optimum image intensity at a camera of the display device.
  • infra red LEDs may be incorporated into the display, or be provided near to the display, such as near a border or the borders of the display, so as to provide a source of infra red light.
  • the intensity of the infra red LEDs can be adjusted independently from the intensity of the display. So for example, in good lighting conditions, the infra red LEDs can be turned off, which can save on device power, such as device battery power.
  • the intensity of the infra red LEDs can be adjusted, so as to provide an optimum image at the device camera.
  • an infra red LED is provided near to each of the four corners of the display. These infra red LEDs can be illuminated in low lighting conditions so as to provide an optimum image intensity at a camera of the display device.
  • the above infra red illumination arrangements may be applied to mobile devices (eg. tablet computers, personal digital assistants, portable navigation devices, laptop computers), rather than just to mobile phones.
  • the above infra red illumination arrangements, such as those shown in Figures 25 to 28, may be applied to display devices (eg. desktop computers, desktop monitors, television sets), rather than just to mobile phones.
  • Infra red light sources referred to here are predominantly in the near infra red spectral region, which covers the spectral range from about 700 nm to about 1.4 micrometres.
  • a 'Meet Camera' which provides for seeing eye-to-eye when video conferencing.
  • An example of a result of using the virtual camera described above in relation to Figure 2 is shown in Figure 3.
  • an image of a first person is shown on the screen of a second person.
  • the image of the first person is seen from the centre, because the first person is looking at their screen centre while being filmed by two off-centre cameras, such as those shown in Figure 2, from which a virtual camera located at or near the screen centre has been created.
  • In Figure 3 we see the back of the head of a second person who is viewing their (the second person's) screen.
  • the second person is being filmed by two off-centre cameras, from which a virtual camera at or near the screen centre is created.
  • the image of the second person provided to the first person will be seen from at or near the screen centre, in common with the image of the first person supplied to the second person.
  • One advantage of 'Meet Camera' is that one can approach a large panel display with always-on video-conferencing and talk directly to the person shown on it, giving the feeling of eye-to-eye contact.
  • the face displayed by the virtual camera can be placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen.
  • This placement can be accomplished by a tracking system implementing a tracking algorithm.
  • the tracking system may track an eye or the eyes of a viewer.
  • An example is shown in Figure 4.
  • In Figure 4 an image of a first person is shown on the screen of a second person. The image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), similar to the off-centre position of the second person shown in Figure 4. This is because the first person is looking at their screen centre while being filmed by two off-centre cameras, such as those shown in Figure 2, from which a virtual camera has been created.
  • the virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera.
  • In Figure 4 we see the back of the head of a second person who is viewing their (the second person's) screen.
  • the second person is being filmed by two off-centre cameras; the second person is in an off-centre position.
  • a second virtual camera for the second person is arranged so as to provide an image of the second person as if the second person were looking directly at the second virtual camera.
  • the image of the second person provided to the first person is a front view of the second person, centrally located on the device screen, in common with the image of the first person supplied to the second person.
  • the tracking system may record its tracking statistics for the tracking of a user's eye or eyes. Such statistics could be useful in determining the user's degree of attentiveness, or for measuring the effectiveness of advertising.
  • the tracking system may be useful in implementing a form of user password for unlocking a device. For example, a user may look at points on the device in a sequence, and this will unlock the device.
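One plausible way to realise such a gaze-sequence password, sketched below, is to quantise the tracked fixation points into screen regions and compare the resulting sequence with a stored pattern. The fixation format (a list of (x, y) points) and the grid size are assumptions for illustration, not details from the patent.

```python
GRID = 3  # screen divided into a hypothetical 3x3 grid of regions

def region(x, y, width, height):
    """Map a gaze point to a grid cell index in 0..GRID*GRID-1."""
    col = min(int(x / width * GRID), GRID - 1)
    row = min(int(y / height * GRID), GRID - 1)
    return row * GRID + col

def unlocks(fixations, stored_sequence, width, height):
    """True if the user's fixation sequence matches the stored unlock pattern."""
    seq = []
    for x, y in fixations:
        cell = region(x, y, width, height)
        if not seq or seq[-1] != cell:  # collapse repeated fixations on one cell
            seq.append(cell)
    return seq == stored_sequence
```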
  • Tracking system output may be used to control the user interface. For example, high priority information may be presented on a part of the screen that the tracking system indicates the viewer is looking at.
  • the virtual camera may be implemented with respect to a display device which displays an image from the virtual camera, or with respect to a display device which obtains an image for display on another display device.
  • the display device with respect to which the virtual image is captured, or on which the virtual image is displayed may be a mobile phone display, a laptop computer display, a desktop monitor display, a television display, or a large screen display device.
  • the display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
  • the virtual camera may be implemented with respect to a device which captures images for use in generating the virtual camera image, where that device is a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
  • Figure 2 shows a particular arrangement of two cameras on a device
  • two cameras used to generate a virtual camera may be arranged in many ways. It is preferable that the two cameras should be arranged with respect to the individual being filmed such that they each capture images which are significantly different, but still somewhat similar.
  • This enables the virtual camera algorithm to combine the two images obtained by the two cameras such as to generate an image as if it had been obtained from a different location.
  • this process becomes less useful if the two images do not differ significantly i.e. if the two cameras are located in very similar positions.
  • this process becomes less reliable if the two images differ too greatly, so that they cannot be readily combined.
  • graphical modelling techniques may be used to generate a virtual camera image.
  • images taken from two different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head. That three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
  • This process can be extended to three, four or more cameras.
  • three cameras, four cameras, or more than four cameras can be arranged so as to generate a virtual camera.
  • three cameras may be arranged on a device as shown in Figure 5.
  • parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device (eg. the device of Figure 5), which is useful when generating an image from a virtual camera, as would be clear to those skilled in the art.
  • Parallax information may be used in constructing the virtual camera image.
  • Three non collinear cameras are useful when the person being filmed is located off-centre, because they may be off-centre not just as shown in Figure 4, i.e. off-centre substantially in the direction parallel to the line passing through the two cameras of Figure 4, but also off-centre along the direction orthogonal to the line passing through the two cameras in Figure 4, i.e. too high or too low with respect to the off-centre position in Figure 4.
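By way of illustration, the parallax between two of the real cameras can be recovered as a per-pixel disparity map, for example with OpenCV block matching as sketched below; the input file names and matcher parameters are placeholders, and a real device would first rectify the two views.

```python
import cv2

# placeholder inputs: rectified greyscale frames from two off-centre cameras
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16 and blockSize odd; values are guesses
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # true pixel units

# larger disparity means closer to the cameras; this per-pixel depth cue is the
# parallax information used when re-projecting to the virtual camera viewpoint
```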
  • Figure 6 shows an example in which the second person is off-centre along the orthogonal direction to the line passing through the two cameras in Figure 4 i.e. too high or too low (in this example, too low) with respect to their corresponding position in Figure 4.
  • an image of a first person is shown on the screen of a second person. The image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person in Figure 6. This is because the first person is looking at their screen centre while being filmed by three off-centre cameras, such as those shown in Figure 5, from which a virtual camera has been created.
  • the virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera.
  • In Figure 6 we see the back of the head of a second person who is viewing their (the second person's) screen.
  • the second person is being filmed by three off-centre cameras; the second person is in an off-centre position, which differs from the off-centre position shown in Figure 4.
  • a second virtual camera for the second person is arranged so as to provide an image of the second person as if the second person were looking directly at the second virtual camera.
  • the image of the second person provided to the first person is a front view of the second person, centrally located on the device screen, in common with the image of the first person supplied to the second person.
  • Figure 7 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art.
  • Parallax information may be used in constructing the virtual camera image.
  • the three cameras are arranged on the vertices of an equilateral triangle.
  • the device has the profile of an equilateral triangle, although this is not necessary in order for the three cameras to be arranged on an equilateral triangle: the device profile could be another shape such as rectangular, for example.
  • An equilateral triangle arrangement of cameras is useful in generating parallax information, such as when the user is in an off-centre position.
  • Parallax information may be used in constructing the virtual camera image.
  • the three cameras may be arranged on the vertices of an isosceles triangle, a right angled triangle, or a scalene triangle.
  • the three cameras may be arranged on the vertices of a triangle.
  • In Figure 7 the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
  • Figure 8 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art.
  • Parallax information may be used in constructing the virtual camera image.
  • the three cameras are arranged on the vertices of a triangle, such as an equilateral triangle.
  • the device has the profile of a rectangle.
  • a triangular arrangement (for example, on an equilateral triangle) of cameras is useful in generating parallax information, such as when the user is in an off-centre position.
  • Parallax information may be used in constructing the virtual camera image.
  • the three cameras may be arranged on the vertices of an isosceles triangle, a right angled triangle, or a scalene triangle.
  • the three cameras may be arranged on the vertices of a triangle.
  • the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
  • four cameras may be arranged on a device as shown in Figure 9, which shows four cameras each near the vertices of a device with a rectangular profile.
  • parallax information is available along orthogonal directions, which is useful when generating an image from a virtual camera, as would be clear to those skilled in the art. Parallax information may be used in constructing the virtual camera image.
  • Figure 10 shows an example which may be especially effective in generating parallax information along orthogonal directions, or in generating a wide field of view, which may be useful when generating an image from a virtual camera, as would be clear to those skilled in the art.
  • Parallax information may be used in constructing the virtual camera image.
  • the four cameras are arranged on the vertices of a rectangle.
  • the device has the profile of a rectangle.
  • a quadrilateral arrangement (for example, on a rectangle) of cameras is useful in generating parallax information, such as when the user is in an off-centre position.
  • Parallax information may be used in constructing the virtual camera image.
  • the four cameras may be arranged on the vertices of a square, a kite, or a parallelogram.
  • the four cameras may be arranged on the vertices of a quadrilateral.
  • the image of the first person is seen from the centre, even though the first person is located in an off-centre position (not shown), which is similar to the off-centre position shown for the second person.
  • the user may be in an off-centre position because they move with respect to a fixed device, or because the device is not fixed (eg. it is handheld), and the device moves, tilts or pans with respect to a user.
  • the user and the device may move eg. a moving user using a handheld device, which may tilt or pan.
  • the device which provides a virtual camera may also provide a microphone and speaker, so that a user of the device can be in voice communication with another user of another device with a microphone and speaker.
  • the virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras.
  • the mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras.
  • the view generated by the virtual camera may be displayed on a display.
  • the image from the mobile virtual camera may have a selectable zoom level.
  • the image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
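The selectable zoom, tilt and pan can be pictured as choosing a window within the combined wide-angle field of view. The sketch below crops such a target sub-image from a composite panorama; the window parameterisation and the nearest-neighbour resampling are simplifications, not the patent's method.

```python
import numpy as np

def virtual_view(panorama, pan, tilt, zoom, out_w=640, out_h=480):
    """pan and tilt in [-1, 1] pick the window centre; zoom >= 1 shrinks the window."""
    h, w = panorama.shape[:2]
    win_w, win_h = int(out_w / zoom), int(out_h / zoom)
    cx = int(w / 2 + pan * (w - win_w) / 2)
    cy = int(h / 2 + tilt * (h - win_h) / 2)
    x0 = max(0, min(w - win_w, cx - win_w // 2))
    y0 = max(0, min(h - win_h, cy - win_h // 2))
    window = panorama[y0:y0 + win_h, x0:x0 + win_w]
    # nearest-neighbour resample to the output size (a real device would interpolate)
    rows = np.arange(out_h) * win_h // out_h
    cols = np.arange(out_w) * win_w // out_w
    return window[rows][:, cols]
```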
  • the virtual camera may be situated so as to provide the view seen from a particular eye of the user.
  • the eye may be a right eye or a left eye.
  • the right eye is the preferred eye.
  • the virtual camera may provide video output.
  • the virtual camera may provide a photograph.
  • the different cameras may supply source images with different luminances. Accordingly, at the boundary between the different camera images, a boundary line may appear, across which the image brightness is seen to fall or rise relatively abruptly. Accordingly, the luminance difference between different source images which form part of the target image must be corrected, so as to provide an image which is acceptably free of one or more boundary lines to a user who views the target image. Correction may be implemented as described in US5,650,814, which is incorporated by reference, or by other methods known to those skilled in the art.
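A minimal sketch of one such luminance correction, assuming two greyscale source images that abut left-to-right with a small overlap: the second image is scaled so that its mean brightness in the overlap matches the first, suppressing the visible boundary line. This is a simple gain model, not the specific method of US5,650,814.

```python
import numpy as np

def equalise(img_i, img_j, overlap_cols=32):
    """Scale img_j so its brightness matches img_i across their shared boundary."""
    mean_i = np.mean(img_i[:, -overlap_cols:])  # right edge of source image Ii
    mean_j = np.mean(img_j[:, :overlap_cols])   # left edge of source image Ij
    gain = mean_i / max(float(mean_j), 1e-6)    # avoid division by zero
    return np.clip(img_j.astype("float32") * gain, 0, 255).astype(img_j.dtype)
```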
  • the virtual camera image may be generated on a device which includes two, three, four or more cameras from which the virtual camera image is generated. Alternatively, the images from two, three, four or more cameras may be transmitted to a remote computer, at which the virtual camera image is generated. The virtual camera image thus generated may be transmitted to a display device for display. Alternatively still, the images from two, three, four or more cameras may be transmitted to a display device, the virtual camera image being generated and displayed at the display device.
  • the virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user.
  • the virtual camera may be implemented so as to correct for unwanted zoom (i.e. image too close or too far), present in the image of a user.
  • the virtual camera may be implemented so as to provide a two dimensional image.
  • the virtual camera may be provided so as to provide a three dimensional image.
  • images taken from two, three or more different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head. That three dimensional representation can provide a three dimensional image.
  • a two dimensional image may be displayed on a display.
  • a three dimensional image may be displayed on a three dimensional display, such as on an autostereoscopic display, on a holographic display, or on any three dimensional display known to those skilled in the art.
  • a virtual camera may be implemented in many ways using the images from two, three, four or more cameras.
  • One example is provided by US5,650,814 "Image Processing System Comprising Fixed Cameras and a System Simulating a Mobile Camera", which is incorporated here by reference.
  • a virtual camera may be facilitated if the optic axes of the n real cameras (where n ≥ 2) of the system meet exactly or approximately at the position which is the position in which a subject is located in an ideal or reference case eg. a position a fixed distance perpendicular from the centre of a screen.
  • this provides a common reference point for all n cameras.
  • the optic axes of the two cameras may meet at a point in front of the centre of the screen.
  • the optic axes of the three cameras may meet at a point in front of the centre of the screen.
  • the optic axes of the four cameras may meet at a point in front of the centre of the screen.
  • that point may be about 40 cm in front of the screen in the case of a screen on a portable device, or about 2 m in front of the screen in the case of a medium sized television screen, or about 4 m in front of the screen in the case of a large sized television screen.
  • such a point may be the position of the centre of the face of the second person; such a position is possible for the device of Fig. 2, or for the device of Fig. 5 or for the device of Fig. 9.
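The toe-in angle each camera needs so that its optic axis passes through such a reference point follows from simple trigonometry, as sketched below; the example figures reuse the roughly 40 cm convergence distance mentioned above for a portable device, and the 4 cm camera offset is an assumption.

```python
import math

def toe_in_angle_deg(camera_offset_cm, convergence_cm):
    """Inward rotation for a camera offset from the screen centre, aiming at a
    reference point convergence_cm in front of the centre of the screen."""
    return math.degrees(math.atan2(camera_offset_cm, convergence_cm))

# e.g. cameras 4 cm either side of the screen centre, converging 40 cm in front:
print(toe_in_angle_deg(4.0, 40.0))  # about 5.7 degrees of toe-in per camera
```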
  • each device including an image processing system, each image processing system comprising a system of n ≥ 2 fixed real cameras arranged such that their individual fields of view merge so as to form a single field of view, and an image construction system simulating a mobile virtual camera supplying a target sub-image corresponding to a section of the field of view and constructed from source images from the n real cameras, wherein the image from each virtual camera of a particular device is displayed at the other device.
  • the image processing system may be a digital system that further comprises a luminance equalizing system for overall equalizing of corresponding luminance levels of first and second portions of a digital target image derived from two adjacent source images (Ii, Ij).
  • each device including an image processing system, each image processing system comprising a system of n ≥ 2 fixed real cameras arranged such that their individual fields of view merge so as to form a single field of view, and an image construction system simulating a mobile virtual camera continuously scanning the field of view to construct a target sub-image corresponding to an arbitrary section of the field of view and derived from adjacent source images from the n real cameras, wherein the image from each virtual camera of a particular device is displayed at the other device.
  • the image processing system may be a digital system that further comprises a luminance equalizing system for overall equalizing of corresponding luminance levels of first and second portions of a digital target image derived from two adjacent source images (Ii, Ij).
  • a device screen may be a display.
  • Individual sound sources are identified through the use of two or more inbuilt microphones in the meeting camera device, eg. a mobile device. Then the individual sources are graphically represented on a receiving device relative to their position eg. in the room.
  • a visual interface on the receiving device enables selection by hand of which sound source to record, e.g. to optimise the noise cancellation/sonic focus for the selected sound source. This could be advantageous, for instance, in meetings where one person is talking and you want to aggressively noise-cancel everything else.
  • One method for accomplishing this is to determine the relative delay with which each sound source is received at the different microphones. For example, if person A is closer to Microphone A than to Microphone B, the sound output of person A will be received at Microphone A before it is received at Microphone B, due to the finite speed of sound, even though the sound output received at the microphones A and B may be very similar. Similarly, if person B is closer to Microphone B than to Microphone A, the sound output of person B will be received at Microphone B before it is received at Microphone A.
  • the sounds from these two sources can be separated eg. by filtering out the unwanted sound source.
  • Such other sounds could be background chatter from people in a crowded environment, such as in a train station, or in an airport, or such sounds could be vehicular traffic sounds in an urban environment. Those other sounds can be suppressed, so as to improve the audibility of the person one wants to listen to.
  • An option can be selected on the meeting camera device (eg. a mobile device), to suppress background sound, to improve the audibility of the person one wants to listen to.
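The relative delay underpinning this separation can be estimated by cross-correlating the signals from the two microphones, as in the sketch below: the offset of the correlation peak gives the inter-microphone lag of the dominant source. The signal format (two equal-length numpy arrays) is an assumption.

```python
import numpy as np

def tdoa_samples(mic_a, mic_b):
    """Relative delay, in samples, between the two microphone signals."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    return int(np.argmax(corr)) - (len(mic_b) - 1)

# e.g. at a 48 kHz sample rate and ~343 m/s speed of sound, one sample of lag
# corresponds to about 7 mm of extra path length, so even closely spaced
# microphones give a measurable, source-specific delay to filter on.
```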
  • a "screen” may be a display.
  • A. Mobile device comprising a camera, a display and a source of infra red light
  • a mobile device comprising a camera, a display and a source of infra red light, wherein the infra red light source is arranged to provide illumination of a field of view of the camera. This may also include the following features:
  • the device is arranged to obtain an image from the camera, to obtain a parameter from the image, and to adjust the infra red light source intensity based on the parameter.
  • the device is arranged to adjust the infra red light source intensity so as to optimize an image intensity obtained by the device camera.
  • the device is arranged to turn off the infra red light source when the light intensity at the camera is above a threshold.
  • the infra red light source is a near infra red light source.
  • the display includes a source of visible light.
  • the infra red light source intensity is adjustable independently of the visible light source intensity.
  • the display is provided on a major face of the device, and the infra red light source is provided on the same major face of the device as the display.
  • the infra red light source is an infra red LED incorporated into the display.
  • the infra red light source is an infra red LED provided near to the display.
  • the infra red light source is an infra red LED provided near to a border of the display.
  • an infra red LED is an AlGaAs LED.
  • the display is a liquid crystal display (LCD).
  • a source of infra red light is in a backlight of the LCD display, wherein the backlight includes a source of visible light.
  • an infra red light source includes an infra red LED.
  • the mobile device including a major face, wherein the display is an OLED display on the major face of the device.
  • the mobile device is arranged to transmit an image taken by the camera under infra red illumination to a further mobile device.
  • the device is operable to register that an object has come close to the device, using an increase in light measured at the camera, when the infra red source is on.
  • the mobile device is a mobile phone.
  • the mobile device is a portable navigation device.
  • the mobile device is a personal digital assistant.
  • the mobile device is a tablet computer.
  • the mobile device is a laptop computer.
  • a communications system comprising a first mobile device comprising a first camera, a first display and a first source of infra red light, wherein the first infra red light source is arranged to provide illumination of a field of view of the first camera, and a second mobile device comprising a second camera, a second display and a second source of infra red light, wherein the second infra red light source is arranged to provide illumination of a field of view of the second camera, wherein the system is arranged to transmit a first image of a first user viewing the first display to the second device, and the system is arranged to transmit a second image of a second user viewing the second display to the first device.
  • the first mobile device may be a device as described in concept A.
  • the second mobile device may be a device as described in concept A.
  • a method of adjusting a camera image quality obtained under infra red illumination in a mobile device comprising a camera, a display and a source of infra red light, wherein the infra red light source is arranged to provide illumination of a field of view of the camera, comprising the steps of: obtaining an image from the camera, obtaining a parameter from the image, and adjusting the infra red light source intensity based on the parameter.
  • the method may be used on a device of concept A.
  • a computer program product for adjusting a camera image quality obtained under infra red illumination operable to run on a mobile device comprising a camera, a display and a source of infra red light, wherein the infra red light source is arranged to provide illumination of a field of view of the camera, the computer program product operable to: obtain an image from the camera, obtain a parameter from the image, and adjust the infra red light source intensity based on the parameter.
  • the computer program product may be used on a device of concept A.
  • a method of saving battery power in a mobile device comprising a battery, a light sensor, a display and a source of infra red light, wherein the infra red light source is arranged to provide illumination of a field of view of the light sensor, comprising the steps of: measuring the light intensity at the light sensor, and turning off the infra red light source when the measured light intensity is above a threshold.
  • the light sensor may be a camera.
  • the method may be used on a device of concept A.
  • a computer program product for saving battery power in a mobile device comprising a battery, a light sensor, a display and a source of infra red light, wherein the infra red light source is arranged to provide illumination of a field of view of the light sensor, the computer program product operable to: measure the light intensity at the light sensor, and turn off the infra red light source when the measured light intensity is above a threshold.
  • the light sensor may be a camera.
  • the computer program product may be used on a device of concept A.
  • a device as described in concept A including n ≥ 2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
  • Parallax information may be used in constructing the virtual camera image.
  • device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position.
  • virtual camera is arranged so as to provide an image of the first person as if the first person were looking directly at the virtual camera, even when the first person is in an off-centre position.
  • Device is a mobile phone, a laptop computer, a desktop monitor, a television, or a large screen display device.
  • display device may be a liquid crystal display device, a plasma screen display device, a cathode ray tube, an organic light emitting diode (OLED) display device, or a bistable display device.
  • device may be a handheld portable device, a fixed device, a desktop device, a wall-mounted device, a conference room device, a device in an automobile, a device on a mobile phone, a device on a train, a device on an aeroplane, or a hotel room device.
  • graphical modelling techniques may be used to generate a virtual camera image.
  • images taken from different cameras of a face and head may be projected onto a head-shaped object so as to generate a three dimensional representation of the topography of a person's face and head; that three dimensional representation may be imaged from in front of the face, so as to generate a virtual camera image from in front of the face.
  • Three or more cameras are used which are not collinearly arranged; parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device.
  • parallax information is available along orthogonal directions, such as along any pair of orthogonal directions in the general plane of the device; parallax information may be used in constructing the virtual camera image.
  • device operable to provide an image of a viewer taken from a virtual camera in front of a viewer's face when the viewer is looking at their screen centre from an off-centre position, wherein the device has at least three non collinearly arranged cameras, and wherein the off-centre position is displaced vertically from a horizontal plane passing through the screen centre, and wherein the off-centre position is displaced horizontally from the screen centre.
  • Three cameras are arranged on the vertices of an equilateral triangle, an isosceles triangle, a right angled triangle, a scalene triangle, or a triangle.
  • Device has the profile of a triangle.
  • Four cameras are arranged on the vertices of a square, a rectangle, a kite, a parallelogram, or a quadrilateral.
  • device which provides a virtual camera may also provide a microphone and speaker
  • device which provides a virtual camera may also provide a microphone and speaker; user of the device can be in voice communication with another user of another device with a microphone and speaker
  • virtual camera may be mobile in that its position can be located within a field of view that is obtained by combining the images from two, three, four or more real cameras
  • mobile virtual camera may supply a target sub-image corresponding to an arbitrary section of the field of view constructed from adjacent source images from two, three, four or more real cameras
  • image from the mobile virtual camera may have a selectable zoom level
  • image from the mobile virtual camera may have selectable tilt or selectable pan, or both selectable tilt and selectable pan.
  • virtual camera may be implemented so as to correct for unwanted tilt, unwanted pan, or unwanted tilt and unwanted pan, present in the image of a user
  • the optic axes of two cameras may meet at a point in front of the centre of the screen
  • optic axes of three cameras may meet at a point in front of the centre of the screen
  • optic axes of four cameras may meet at a point in front of the centre of the screen.
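As a hedged illustration of the mobile virtual camera with selectable pan and zoom, the sketch below assumes the overlapping source images have already been combined into a single stitched array; the function name and parameters are invented for this example. Selecting a target sub-image then reduces to choosing and resampling a window.

```python
import numpy as np
import cv2

def target_subimage(stitched, centre_x, centre_y, zoom=1.0, out_w=640, out_h=480):
    """Return a pannable, zoomable window from a stitched field of view."""
    h, w = stitched.shape[:2]
    # Higher zoom -> smaller source window resampled up to the same output size.
    win_w = max(1, int(out_w / zoom))
    win_h = max(1, int(out_h / zoom))
    # Pan: centre the window on the requested point, clamped to the panorama.
    x0 = int(np.clip(centre_x - win_w // 2, 0, max(w - win_w, 0)))
    y0 = int(np.clip(centre_y - win_h // 2, 0, max(h - win_h, 0)))
    window = stitched[y0:y0 + win_h, x0:x0 + win_w]
    return cv2.resize(window, (out_w, out_h), interpolation=cv2.INTER_LINEAR)
```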
  • three dimensional display is an autostereoscopic display, or a holographic display
  • meeting camera device comprises at least two microphones.
  • Meeting camera device is operable to identify at least one sound source from the sound input received at the two microphones.
  • Meeting camera is operable to provide to a receiving device a selectable option to transmit only the sound from the identified sound source.
  • Meeting camera device wherein upon selection at the receiving device of the option to transmit only the sound from the identified sound source, the meeting camera device transmits only the sound from the identified sound source.
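One plausible way to identify a sound source from two microphones, as the bullets above describe, is to estimate the inter-microphone time difference of arrival and convert it to a bearing. The sketch below is an assumption-laden illustration (a plain cross-correlation on numpy arrays), not the application's method.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def estimate_bearing(mic_a, mic_b, sample_rate, mic_spacing_m):
    """Estimate a sound source's bearing (degrees) from two microphone signals."""
    # The cross-correlation peak gives the inter-channel delay in samples.
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = np.argmax(corr) - (len(mic_b) - 1)
    delay_s = lag / float(sample_rate)
    # Geometry: delay = spacing * sin(theta) / c, so invert for the angle.
    max_delay = mic_spacing_m / SPEED_OF_SOUND
    sin_theta = np.clip(delay_s / max_delay, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

With an assumed 0.15 m microphone spacing and 48 kHz sampling, the maximum physical delay is about 0.44 ms, roughly 21 samples, so short audio frames suffice for the estimate.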
  • a device screen may be a display
  • Method of supplying a target sub-image corresponding to a portion of the fields of view for a meeting camera device wherein the meeting camera device includes a screen and n>2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the method comprises the step of: using the virtual camera comprising an image construction system to supply a target sub-image corresponding to a portion of the fields of view.
  • Computer program product operable to supply a target sub-image corresponding to a portion of the fields of view for a meeting camera device, wherein the meeting camera device includes a screen and n>2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the computer program product is operable to supply a target sub-image corresponding to a portion of the fields of view.
  • Meeting camera system including a mobile device of concept A with n>2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view.
  • the meeting camera system includes a mobile device of concept A including n>2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the method comprises the step of: using the virtual camera comprising an image construction system to supply a target sub-image corresponding to a portion of the fields of view.
  • Computer program product operable to supply a target sub-image corresponding to a portion of the fields of view for a meeting camera system
  • the meeting camera system includes a mobile device of concept A including n>2 cameras, the cameras each situated off-centre of a major face of the device, the cameras arranged such that their individual fields of view overlap, the system including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein the computer program product is operable to supply a target sub-image corresponding to a portion of the fields of view.
  • Meeting camera device system comprising two devices, each device falling within concept A and including n>2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device.
  • Method of supplying a target sub-image corresponding to a portion of the fields of view for a meeting camera device system comprising two devices, each device falling within concept A and including n>2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device provides a target sub-image to the other device, wherein the method comprises the step of: for a meeting camera device, using the virtual camera comprising an image construction system to supply a target sub-image corresponding to a portion of the fields of view to the other device.
  • Meeting camera device system comprising two devices and a computer, each device falling within concept A and including n>2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap, each device transmitting its camera images to a computer, the computer including a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of the fields of view, wherein each device receives a target sub-image based on data transmitted by the other device to the computer.
  • Method of supplying a target sub-image corresponding to a portion of fields of view for a meeting camera device system comprising two devices and a computer, each device falling within concept A and including n>2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap
  • the computer includes a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of fields of view, the method comprising the steps of:
  • Computer program product operable to supply a target sub-image corresponding to a portion of fields of view for a meeting camera device system comprising two devices and a computer, each device falling within concept A and including n>2 cameras, the cameras of each device situated off-centre of a major face of the respective device, the cameras of a respective device arranged such that their individual fields of view overlap
  • the computer includes a virtual camera comprising an image construction system operable to supply a target sub-image corresponding to a portion of fields of view, wherein a first device transmits its camera images to a computer, and a second device transmits its camera images to a computer, and the computer program product running on the computer is operable to supply to the second device a target sub-image corresponding to a portion of the fields of view, based on data transmitted by the first device to the computer, and the computer program product running on the computer is operable to supply to the first device a target sub-image corresponding to a portion of the fields of view, based on data transmitted by the second device to the computer
  • the main focus for Yota's IP protection strategy will be its new LTE phone.
  • the LTE phone will include innovative software and hardware, and will provide an innovative user experience. See for example Figs. 11 to 23.
  • Meet Camera: One advantage of Meet Camera is that one can approach a large panel display with always-on video-conferencing and talk directly to the person shown on it - giving the feeling of eye-to-eye contact.
  • the face displayed by the virtual camera can be placed in the centre of the screen, even if the face of the person whose image is being captured moves significantly away from the centre of the screen. This placement can be accomplished by a tracking algorithm.
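A minimal sketch of such a tracking algorithm follows, assuming OpenCV's stock Haar-cascade face detector as a stand-in for whatever tracker a production Meet Camera would use; the crop sizes and function name are illustrative assumptions, not the application's implementation.

```python
import cv2

# OpenCV's bundled Haar cascade stands in for the (unspecified) face tracker.
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def centred_crop(frame, out_w=640, out_h=480):
    """Keep the detected face at the centre of the transmitted image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return cv2.resize(frame, (out_w, out_h))   # no face found: send full view
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # track the largest face
    cx, cy = x + w // 2, y + h // 2
    fh, fw = frame.shape[:2]
    # Clamp the crop window so it stays inside the captured frame.
    x0 = min(max(cx - out_w // 2, 0), max(fw - out_w, 0))
    y0 = min(max(cy - out_h // 2, 0), max(fh - out_h, 0))
    return frame[y0:y0 + out_h, x0:x0 + out_w]
```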
  • DML phone speaker: It's hard to get good-quality audio performance unless you have a large speaker with a large and ugly speaker hole.
  • NXT plc's distributed mode loudspeaker (DML) technology is used here to vibrate the entire phone screen - the whole screen surface acts as the speaker.
  • the speaker hole can be fully eliminated.
  • DML has never been used before to drive a screen surface in a mobile phone. Haptic feedback can be provided by the drivers too - a new use for the DML exciters.
  • iPhone/iPad has no USB connector - a major disadvantage.
  • A USB dongle is interfaced to the in-car audio system.
  • USB dongle that can receive streaming radio (e.g. for internet radio stations, Spotify etc.)
  • the USB dongle captures the data stream and converts it to a sequence of files - just like the MP3 files the in-car audio is designed to read. This enables even a basic in-car audio device to have playback/rewind, store etc. functionality for internet radio.
  • the streamed audio is stored as at least two separate files, which allows the user to choose to skip to the next track using the car audio system software.
  • the user can listen to music online in his car with no modifications to the in-car audio system.
  • An online interface is used for setting up the service and selecting the stream source.
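A hypothetical sketch of the dongle's stream-to-files conversion is given below. The chunk size, the file naming, and the file-like `stream` object are all assumptions; the point is simply that cutting the incoming byte stream into a sequence of files lets a basic in-car MP3 player offer skip and rewind. Naive byte-boundary splitting is workable for MPEG audio because decoders resynchronize on the next frame header.

```python
import itertools

CHUNK_BYTES = 4 * 1024 * 1024   # assumed "track" size; tune to taste

def stream_to_files(stream, directory):
    """Cut an incoming audio byte stream into track_001.mp3, track_002.mp3, ..."""
    for index in itertools.count(1):
        chunk = stream.read(CHUNK_BYTES)   # stream: any file-like byte source
        if not chunk:
            break
        # Each chunk becomes a standalone file that the car stereo treats as a
        # track, giving skip/rewind over what was originally a live stream.
        with open(f"{directory}/track_{index:03d}.mp3", "wb") as f:
            f.write(chunk)
```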
  • Individual sound sources are identified with two or more inbuilt microphones. Then the individual sources are graphically represented on the device relative to their position in the room.
  • a visual interface on the phone enables selection by hand of which sound source to record, e.g. to optimise the noise cancellation/sonic focus for the selected sound source. This could be advantageous, for instance, in meetings where one person is talking and you want to aggressively noise-cancel everything else.
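Once a source has been selected on the visual interface, its bearing can be used to steer a simple two-microphone delay-and-sum beamformer, which is one conventional way (not necessarily the application's) to realise the "sonic focus" described above. The sketch below assumes the bearing produced by the earlier direction-estimation sketch, and the sign convention assumes the first microphone receives the wavefront first for positive bearings.

```python
import numpy as np

def steer_to_bearing(mic_a, mic_b, sample_rate, bearing_deg, mic_spacing_m, c=343.0):
    """Delay-and-sum the two channels so the chosen bearing adds in phase."""
    delay_s = mic_spacing_m * np.sin(np.radians(bearing_deg)) / c
    shift = int(round(delay_s * sample_rate))
    # Time-align the channels for the selected direction; off-axis sources then
    # add out of phase and are attenuated (a crude "sonic focus").
    if shift >= 0:
        a, b = mic_a[shift:], mic_b[:len(mic_b) - shift]
    else:
        a, b = mic_a[:shift], mic_b[-shift:]
    n = min(len(a), len(b))
    return 0.5 * (a[:n] + b[:n])
```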
  • the phone presents a seamless, unibody surface - although it can still have hidden mechanical buttons e.g. for volume up, volume down.
  • the mobile phone has a concave front face and a convex rear face, of the same or similar magnitude of curvature.
  • The concave front matches the path of the finger as the wrist rotates, so it is very natural to use.
  • Having a curved surface as the vibrating DML speaker is also better: if the LCD carrying the speaker exciters were instead flat, it would sound unpleasant when that flat surface is placed down against a tabletop. Curving the surface prevents this happening.
  • Preferred curvature of the front and back is cylindrical, rather than spherical or aspherical. See e.g. Figs. 13, 14, 17.
  • the convex back can have a bistable display. Since the normal resting position is front face down, the rear screen with the bistable display is normally visible when the phone is in the resting position. This resting position is stable. If the phone is placed back down (i.e. convex face down), the phone could spin, which is unstable. Hence a user will likely place the phone front face (i.e. concave face) down, with the bistable screen showing.
  • the front face can face inwards (towards the leg), since this better matches leg curvature. This can be the better configuration (as opposed to front face up) for antenna reception.
  • Microphone in SIM card "eject hole"
  • the microphone is placed in a hole in the body of the mobile device, in the SIM card's eject hole. See Fig. 23.
  • the casing of the mobile device consists of a material that can change its tactile properties from wood to metal ("morphing").
  • 3GPP Long Term Evolution (LTE) is the latest standard in the mobile network technology tree that produced the GSM/EDGE and UMTS/HSPA network technologies. It is a project of the 3rd Generation Partnership Project (3GPP), operating under a name trademarked by one of the associations within the partnership, the European Telecommunications Standards Institute.
  • 3GPP 3rd Generation Partnership Project
  • LTE Long Term Evolution
  • 4G fourth generation
  • 4G 4th generation standard
  • LTE Advanced is backwards compatible with LTE and uses the same frequency bands, while LTE is not backwards compatible with 3G systems.
  • UMTS Universal Mobile Telecommunications System
  • 3GPP 3rd Generation Partnership Project
  • LTE Long Term Evolution
  • GSM Global System for Mobile communications
  • Agencies in some areas have filed for waivers hoping to use the 700 MHz spectrum with other technologies in advance of the adoption of a nationwide standard.
  • LTE provides downlink peak rates of at least 100 Mbps, an uplink of at least 50 Mbps and RAN round-trip times of less than 10 ms.
  • LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz and supports both frequency division duplexing (FDD) and time division duplexing (TDD).
  • FDD frequency division duplexing
  • TDD time division duplexing
  • LTE Long Term Evolution
  • FDD Frequency Division Duplex
  • TDD Time Division Duplex
  • LTE Advanced is currently being standardized in 3GPP Release 10.
  • LTE Advanced is a preliminary mobile communication standard, formally submitted as a candidate 4G system to ITU-T in late 2009; it was approved by the International Telecommunication Union (ITU) into IMT-Advanced and is expected to be finalized by 3GPP in early 2011. It is standardized by the 3rd Generation Partnership Project (3GPP) as a major enhancement of the 3GPP Long Term Evolution (LTE) standard.
  • 3GPP 3rd Generation Partnership Project
  • The LTE format was first proposed by NTT DoCoMo of Japan and has been adopted as an international standard. LTE standardization has come to a mature state by now, where changes in the specification are limited to corrections and bug fixes. The first commercial services were launched in Scandinavia in December 2009, followed by the United States and Japan in 2010. More first-release LTE networks are expected to be deployed globally during 2010 as a natural evolution of several 2G and 3G systems, including Global System for Mobile communications (GSM) and Universal Mobile Telecommunications System (UMTS) (3GPP as well as 3GPP2).
  • GSM Global system for mobile communications
  • UMTS Universal Mobile Telecommunications System
  • the first release of LTE does not meet the requirements for 4G (also called IMT Advanced, as defined by the International Telecommunication Union), such as peak data rates up to 1 Gbit/s.
  • the ITU has invited the submission of candidate Radio Interface Technologies (RITs) following their requirements as mentioned in a circular letter.
  • RITs Radio Interface Technologies
  • LTE-Advanced requirements are set out in 3GPP Technical Report (TR) 36.913, "Requirements for Further Advancements for E-UTRA (LTE-Advanced)". These requirements are based on the ITU requirements for 4G and on 3GPP operators' own requirements for advancing LTE.
  • Major technical considerations include the following:
  • WiMAX 2 has been approved by the ITU into the IMT Advanced family. WiMAX 2 is designed to be backward compatible with WiMAX 1/1.5 devices. Most vendors now support easy conversion of earlier 'pre-4G', pre-advanced versions, and some support software-defined upgrades of core base station equipment from 3G.
  • LTE Advanced Long Term Evolution Advanced
  • ITU-R International Telecommunication Union Radiocommunication Sector


Abstract

The invention relates to mobile devices comprising a light sensor, a display, and an infrared light source arranged to illuminate a field of view of the light sensor. It also relates to methods and computer program products associated with such devices. A mobile device is described that comprises a camera, a display, and an infrared light source arranged to illuminate a field of view of the camera.
PCT/RU2012/000027 2011-01-21 2012-01-23 Mobile device provided with a lighting device WO2012099505A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW101134923A TW201332336A (zh) 2011-10-03 2012-09-24 Device with display screen

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
GB1101083.2 2011-01-21
GBGB1101083.2A GB201101083D0 (en) 2011-01-21 2011-01-21 Mobile device camera lighting
GBGB1112458.3A GB201112458D0 (en) 2010-09-28 2011-07-20 Device with display screen
GB1112458.3 2011-07-20
PCT/RU2011/000817 WO2012053940A2 (fr) 2010-10-20 2011-10-20 Meeting camera
RUPCT/RU2011/000817 2011-10-20

Publications (1)

Publication Number Publication Date
WO2012099505A1 (fr) 2012-07-26

Family

ID=43769430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/RU2012/000027 WO2012099505A1 (fr) Mobile device provided with a lighting device

Country Status (2)

Country Link
GB (1) GB201101083D0 (fr)
WO (1) WO2012099505A1 (fr)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5650814A (en) 1993-10-20 1997-07-22 U.S. Philips Corporation Image processing system comprising fixed cameras and a system simulating a mobile camera
EP1076446A2 (fr) * 1999-08-09 2001-02-14 Hughes Electronics Corporation System for electronically-mediated collaboration with eye contact
EP1507419A1 (fr) * 2002-05-21 2005-02-16 Sony Corporation Information processing apparatus, information processing system, and dialogue display method
US20060083421A1 (en) * 2004-10-14 2006-04-20 Wu Weiguo Image processing apparatus and method
US20090095906A1 (en) * 2007-10-11 2009-04-16 Sony Ericsson Mobile Communications Ab Image capturing


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9084288B2 (en) 2013-03-14 2015-07-14 Qualcomm Incorporated Dual-SIM wireless communications device and method for mitigating receiver desense in dual-active operation
US9525811B2 (en) 2013-07-01 2016-12-20 Qualcomm Incorporated Display device configured as an illumination source
US9781321B2 (en) 2013-07-01 2017-10-03 Qualcomm Incorporated Display device configured as an illumination source
US11070710B2 (en) 2013-07-01 2021-07-20 Qualcomm Incorporated Display device configured as an illumination source
US11917234B2 (en) 2013-07-01 2024-02-27 Qualcomm Incorporated Display device configured as an illumination source
FR3023111A1 (fr) * 2014-06-30 2016-01-01 Safel Vision system
WO2017196692A1 (fr) * 2016-05-13 2017-11-16 Microsoft Technology Licensing, Llc Infrared illumination through a backlighting source

Also Published As

Publication number Publication date
GB201101083D0 (en) 2011-03-09

Similar Documents

Publication Publication Date Title
US11812098B2 (en) Projected audio and video playing method and electronic device
TWI696146B (zh) Image processing method and apparatus, computer-readable storage medium, and mobile terminal
JP7235871B2 (ja) Data transmission method and electronic device
US20220191313A1 (en) Electronic Device Having Foldable Screen and Display Method
WO2020019356A1 (fr) Method for switching between cameras by a terminal, and terminal
WO2022100610A1 (fr) Screen projection method and apparatus, electronic device, and computer-readable storage medium
EP3432588B1 (fr) Method and system for processing image information
WO2020078273A1 (fr) Photographing method and electronic device
EP4156660A1 Mobile terminal that prevents sound leakage and sound production method for mobile terminal
WO2020078330A1 (fr) Voice call-based translation method and electronic device
CN107040723B (zh) Dual-camera-based imaging method, mobile terminal and storage medium
CN114610253A (zh) Screen projection method and device
CN115567630B (zh) Management method for an electronic device, electronic device and readable storage medium
WO2012099505A1 (fr) Mobile device provided with a lighting device
CN114063951B (zh) Screen projection anomaly handling method and electronic device
CN113890745A (zh) Decision method and apparatus for service continuation, and electronic device
US11743954B2 (en) Augmented reality communication method and electronic device
CN106375787B (zh) Video playback method and apparatus
CN111835941B (zh) Image generation method and apparatus, electronic device, and computer-readable storage medium
CN109361872B (zh) Double-sided-screen-assisted photographing method, terminal and storage medium
WO2022088926A1 (fr) Capture method and terminal device
WO2022068505A1 (fr) Photographing method and electronic device
KR20150095165A (ko) Mobile terminal and control method thereof
WO2012053940A2 (fr) Meeting camera
US20240223689A1 (en) Mobile terminal for preventing sound leakage and sound output method for mobile terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12714872

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12714872

Country of ref document: EP

Kind code of ref document: A1