WO2020201240A1 - Dynamically controlling light settings for a video call based on a spatial location of a mobile device - Google Patents


Info

Publication number
WO2020201240A1
WO2020201240A1 (PCT/EP2020/059025)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile device
lighting devices
lighting
settings
processor
Application number
PCT/EP2020/059025
Other languages
French (fr)
Inventor
Ahamed Rafik AJMEER
Jagadish DHANAMJAYAM
Original Assignee
Signify Holding B.V.
Application filed by Signify Holding B.V. filed Critical Signify Holding B.V.
Publication of WO2020201240A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H04N2007/145 Handheld terminals
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803 Home automation networks

Definitions

  • the invention relates to a system for dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place.
  • the invention further relates to a method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place.
  • the invention also relates to a computer program product enabling a computer system to perform such a method.
  • US 2017/0324933 A1 discloses a video-enabled communication system that includes a camera to acquire an image of a local participant during a video communication session and a control unit that selects a lighting configuration for the local participant to be captured by the camera for provision to a remote endpoint for display to another participant.
  • the lighting configuration selection is based on information describing a local participant or context of the video communication session.
  • a microphone is used to detect an active speaker and face detection is used to approximately locate spatially the active speaker, thereby enabling the control unit to select appropriate lighting elements for optimal lighting.
  • a drawback of the lighting system of US 2017/0324933 A1 is that the video-enabled communication system is not well suited for video calls involving a mobile device held by a person.
  • the camera of a mobile device cannot be used to locate the active speaker and a map showing relative spatial locations of endpoints compared to lighting element locations is not enough when an endpoint is moving around.
  • a system for dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place comprises at least one input interface, at least one control interface, and at least one processor configured to detect that a mobile device is being used for video calling, use said at least one input interface to determine at least one parameter for each of said plurality of lighting devices from at least one input signal, said at least one parameter depending on a spatial location of said lighting device relative to a spatial location of said mobile device, determine settings for said plurality of lighting devices based on said determined parameters, and use said at least one control interface to control said plurality of lighting devices according to said determined settings upon detecting that said mobile device is being used for video calling.
  • when a video call involves a mobile device held by a person, the lighting conditions, e.g. uniform brightness, are usually not conducive to providing the best video quality.
  • by determining the light settings of the lighting devices based on parameter(s) depending on the spatial locations of the lighting devices relative to the spatial location of the mobile device, good light conditions can be created even if the video call involves a mobile device held by a person.
  • the lighting conditions are automatically and dynamically adjusted to meet current requirements.
  • making the lighting system of the room aware of video conferencing and the participant’s location can help it adjust the lighting to enhance the video capture quality.
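The claimed control flow, i.e. detect that a call is active, determine per-luminaire parameters (distance and direction relative to the mobile device), determine settings, and apply them, can be sketched as follows. This is an illustrative sketch only: the class, the 2 m threshold, and the brightness values are assumptions, not part of the claims.

```python
from dataclasses import dataclass

@dataclass
class Luminaire:
    """Per-luminaire parameters as determined from the input signals."""
    id: str
    distance_m: float   # distance parameter: luminaire to mobile device
    in_front: bool      # direction parameter: in front of / behind the user

def determine_settings(luminaires):
    """Map the determined parameters to light settings (brightness 0-100).
    Hypothetical values: brighten lights facing the participant, dim
    nearby (e.g. overhead) lights to avoid facial shadows, keep the
    remaining background lighting low."""
    settings = {}
    for lum in luminaires:
        if lum.in_front:
            settings[lum.id] = 90   # light the participant's face
        elif lum.distance_m < 2.0:
            settings[lum.id] = 50   # dim nearby overhead lights
        else:
            settings[lum.id] = 10   # low background lighting
    return settings

luminaires = [
    Luminaire("front", 1.5, True),
    Luminaire("overhead", 1.0, False),
    Luminaire("back", 4.0, False),
]
print(determine_settings(luminaires))
```

In a real system this function would be re-evaluated whenever the determined parameters change, since the person typically moves during the call.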
  • Said system may comprise said mobile device and said at least one processor may be configured to receive said at least one input signal from each of said plurality of lighting devices.
  • said system may comprise said plurality of lighting devices and said at least one processor may be configured to receive said at least one input signal from said mobile device.
  • said at least one processor may be configured to receive a first input signal of said at least one input signal from said mobile device and further input signals of said at least one input signal from said plurality of lighting devices.
  • Said at least one parameter may comprise a distance parameter which represents a distance between said lighting device and said mobile device. By determining light settings in dependence on such a distance parameter, light conditions can be improved.
  • Said at least one processor may be configured to determine a set of said plurality of lighting devices with a distance smaller than a threshold based on said determined distance parameters and determine a dimmed light setting for said set of lighting devices.
  • overhead lights may be dimmed, e.g. to 50%, to prevent facial shadows caused by intense overhead lights.
  • the dimmed light setting may correspond to a light output between 150 and 300 lumens, for example.
  • Said at least one processor may be configured to determine a first set of said plurality of lighting devices with a distance smaller than a threshold and a second set of said plurality of lighting devices with a distance larger than said threshold based on said determined distance parameters and determine a first set of light output levels for said first set and a second set of light output levels for said second set, each of said light output levels in said first set being higher than each of said light output levels in said second set.
  • the first set of light output levels may be higher than 450 lumens, for example.
  • the second set of light output levels may be lower than 50 lumens, for example.
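The two-set partition described above can be sketched as follows. The 450/50-lumen values loosely follow the examples in the text; the 2 m threshold and the function name are assumptions.

```python
def split_by_distance(distances_m, threshold_m=2.0):
    """Partition luminaires by their distance parameter and assign light
    output levels (in lumens) such that every level in the first (near)
    set is higher than every level in the second (far) set."""
    near = {lum: 450 for lum, d in distances_m.items() if d < threshold_m}
    far = {lum: 50 for lum, d in distances_m.items() if d >= threshold_m}
    return near, far

near, far = split_by_distance({"desk": 1.2, "corner": 3.5, "hall": 6.0})
```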
  • Said at least one parameter may comprise a direction parameter which indicates whether said lighting device is located in front of a person holding said mobile device or behind a person holding said mobile device. By determining light settings in dependence on such a direction parameter, light conditions can be improved.
  • Said at least one processor may be configured to determine said direction parameter based on an orientation of said mobile device, for example.
  • Said at least one processor may be configured to determine a first set of said plurality of lighting devices in front of said person and a second set of said plurality of lighting devices behind said person based on said determined direction parameters and determine a first set of light output levels for said first set and a second set of light output levels for said second set, each of said light output levels in said first set being higher than each of said light output levels in said second set.
  • the light output behind the participant, i.e. the background lighting, should preferably be low, while the light output in front of the participant should preferably be high, e.g. set to 90% of its maximum, to get light reflecting off the face of the participant, thereby making it easier to see features of the face of the participant.
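One possible way to derive such a direction parameter from the device's compass heading and a luminaire's position relative to the device is an angular comparison. This is a sketch only; the patent leaves the exact computation open, and the coordinate convention and 90° cut-off are assumptions.

```python
import math

def direction_parameter(device_xy, device_heading_deg, luminaire_xy):
    """Classify a luminaire as 'in front of' or 'behind' the person,
    based on the mobile device's orientation (e.g. from its magnetometer)
    and the luminaire's position relative to the device."""
    dx = luminaire_xy[0] - device_xy[0]
    dy = luminaire_xy[1] - device_xy[1]
    # Compass bearing from the device to the luminaire (0 deg = north).
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    # Smallest angular difference between heading and bearing.
    diff = abs((bearing - device_heading_deg + 180) % 360 - 180)
    return "in front of user" if diff < 90 else "behind user"
```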
  • Said at least one processor may be configured to detect that said mobile device is no longer being used for video calling and use said at least one control interface to control said plurality of lighting devices according to different settings upon detecting that said mobile device is no longer being used for video calling.
  • Said different settings may be settings used by said plurality of lighting devices before said plurality of lighting devices were controlled according to said determined settings, for example.
  • Said at least one processor may be configured to determine a spatial location of said lighting device relative to a spatial location of said mobile device based on radio frequency transmissions by multiple devices. As GPS typically does not work indoors, RF beacons may be used to determine spatial locations indoors.
  • a method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place comprises detecting that a mobile device is being used for video calling, determining at least one parameter for each of said plurality of lighting devices from at least one input signal, said at least one parameter depending on a spatial location of said lighting device relative to a spatial location of said mobile device, determining settings for said plurality of lighting devices based on said determined parameters, and controlling said plurality of lighting devices according to said determined settings upon detecting that said mobile device is being used for video calling.
  • Said method may be performed by software running on a programmable device.
  • This software may be provided as a computer program product.
  • a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage-medium storing the computer program are provided.
  • a computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
  • a non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place.
  • the executable operations comprise detecting that a mobile device is being used for video calling, determining at least one parameter for each of said plurality of lighting devices from at least one input signal, said at least one parameter depending on a spatial location of said lighting device relative to a spatial location of said mobile device, determining settings for said plurality of lighting devices based on said determined parameters, and controlling said plurality of lighting devices according to said determined settings upon detecting that said mobile device is being used for video calling.
  • aspects of the present invention may be embodied as a device, a method or a computer program product.
  • aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", “module” or “system.”
  • Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer.
  • aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Fig. 1 is a block diagram of a first embodiment of the system;
  • Fig. 2 is a block diagram of a second embodiment of the system;
  • Fig. 3 is a block diagram of a third embodiment of the system;
  • Fig. 4 is a block diagram of a fourth embodiment of the system;
  • Fig. 5 is a block diagram of a fifth embodiment of the system;
  • Fig. 6 shows an example of uniform lighting;
  • Fig. 7 is a flow diagram of a first embodiment of the method;
  • Fig. 8 is a flow diagram of a second embodiment of the method;
  • Fig. 9 is a flow diagram of a third embodiment of the method;
  • Fig. 10 shows an example of lighting rendered with the method of Fig. 9;
  • Fig. 11 depicts a top view of an extension of the example of Fig. 10;
  • Fig. 12 depicts a top view of an extension of the example of Fig. 11 in which multiple video calls take place;
  • Fig. 13 is a block diagram of an exemplary data processing system for performing the method of the invention.
  • Fig. 1 shows a first embodiment of the system for dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place.
  • the plurality of lighting devices comprises lighting devices 15-17.
  • the system is a mobile device 1.
  • the mobile device 1 is connected to a wireless LAN access point 12.
  • a bridge 13, e.g. a Philips Hue bridge, is also connected to the wireless LAN access point 12, e.g. via Ethernet.
  • the bridge 13 communicates with the lighting devices 15-17, e.g. using Zigbee technology.
  • the lighting devices 15-17 may be Philips Hue lights, for example.
  • a lighting system 10 comprises the bridge 13 and the lighting devices 15-17.
  • the mobile device 1 comprises a receiver 3, a transmitter 4, a processor 5, memory 7, a face-oriented camera 8 and a display 9.
  • the processor 5 is configured to detect that the mobile device 1 is being used for video calling and use the receiver 3 to determine at least one parameter for each of the lighting devices 15-17 from at least one input signal.
  • the at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device 1.
  • the processor 5 is further configured to determine settings for the lighting devices 15-17 based on the determined parameters and use the transmitter 4 to control the lighting devices 15-17 according to the determined settings upon detecting that the mobile device 1 is being used for video calling.
  • each set of at least one parameter comprises a distance parameter and a direction parameter.
  • the distance parameter represents a distance between the applicable lighting device and the mobile device 1.
  • the direction parameter indicates whether the lighting device is located in front of a person holding the mobile device 1 or behind a person holding the mobile device 1.
  • the direction parameter may specify one of "in front of user", "behind user", "above user", or "next to user", for example.
  • the processor 5 is configured to determine a spatial location of each of the lighting devices 15-17 relative to the mobile device 1 based on radio frequency transmissions by multiple devices.
  • the lighting devices 15-17 (and optionally the bridge 13) transmit, e.g. broadcast, messages from which the processor 5 is able to determine a Received Signal Strength Indicator (RSSI) and which comprise RSSIs of messages received by the device in question from other lighting devices.
  • the processor 5 receives an input signal from each of the lighting devices 15-17.
  • the processor 5 is able to determine the distances between the different devices sufficiently accurately and thus spatial locations of the lighting devices 15-17 relative to the mobile device 1.
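Under the RSSI-based scheme described above, per-device distances can be estimated with a standard log-distance path-loss model. A sketch under stated assumptions: the reference power at 1 m and the path-loss exponent are environment-dependent calibration constants, not values given in the text.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate the distance (in metres) between two radios from an RSSI
    measurement using the log-distance path-loss model. tx_power_dbm is
    the RSSI expected at 1 m; both constants are illustrative and would
    need calibration for a real indoor environment."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# An RSSI 20 dB below the 1 m reference corresponds to roughly 10 m
# with a free-space path-loss exponent of 2.
```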
  • the processor 5 is further configured to determine the direction parameter based on the orientation of the mobile device 1.
  • the mobile device 1 may comprise a magnetometer for determining the orientation of the mobile device 1 and the processor 5 may be configured to determine the orientation of the display 9 and the face-oriented camera 8 of the mobile device 1 using this sensor, for example.
  • the lighting devices 15-17 may identify in their messages in which part of the building they have been installed, e.g. "North", "North-West", or "South". This may be configured in the lighting devices 15-17 during commissioning.
  • the spatial locations of the lighting devices 15-17 relative to the mobile device 1 comprise coarse absolute spatial locations in this case.
  • the processor 5 may be configured to analyze an image captured by the face-oriented camera 8 to recognize one or more of the lighting devices 15-17 (above or behind the person holding the mobile device 1) in the image, e.g. based on identifiers transmitted using Visible Light Communication (VLC) or by using object recognition, and determine the orientation of the mobile device 1 relative to the lighting devices 15-17 based on this analysis.
  • the processor 5 uses the distances between the mobile device 1 and each of the lighting devices 15-17 and the determined orientation of the face-oriented camera 8 of the mobile device 1 to determine the setting for the respective lighting device, e.g. dim up the luminaire(s) in front of the person holding the mobile device 1 and dim down the luminaire(s) behind that person.
  • the person making the video call will typically move while making the video call, e.g. turn and/or walk around, and/or change the way in which he holds his mobile device 1.
  • the lighting conditions may be kept good while the video call is going on.
  • the user may be provided an option to disable the adaptive lighting behavior.
  • a user may be able to manually override the adaptive lighting behavior using a manual light switch, for example.
  • the adaptive lighting control could then be turned off until the manual override is revoked and/or until a timer expires. If the lighting devices are controlled based on input from a light sensor, this behavior may be overridden while the adaptive lighting is being used.
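The override behavior described above, where a manual action disables adaptive control until the override is revoked or a timer expires, could look like this. The class name and timeout value are hypothetical.

```python
import time

class OverrideState:
    """Tracks whether adaptive lighting is active, honoring a manual
    override that expires after a timeout (illustrative sketch)."""

    def __init__(self, timeout_s=1800):
        self.timeout_s = timeout_s
        self.override_since = None

    def manual_override(self):
        """Called when the user operates e.g. a manual light switch."""
        self.override_since = time.monotonic()

    def revoke(self):
        """Called when the user explicitly revokes the override."""
        self.override_since = None

    def adaptive_enabled(self):
        """True if the system may apply adaptive lighting settings."""
        if self.override_since is None:
            return True
        if time.monotonic() - self.override_since >= self.timeout_s:
            self.override_since = None   # timer expired: re-enable
            return True
        return False
```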
  • the mobile device 1 comprises one processor 5.
  • the mobile device 1 comprises multiple processors.
  • the processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from ARM or Qualcomm or an application-specific processor.
  • the processor 5 of the mobile device 1 may run an Android or iOS operating system for example.
  • the display 9 may comprise an LCD or OLED display panel, for example.
  • the display 9 may be a touch screen, for example.
  • the processor 5 may use this touch screen to provide a user interface, for example.
  • the memory 7 may comprise one or more memory units.
  • the memory 7 may comprise solid state memory, for example.
  • the receiver 3 and the transmitter 4 may use one or more wireless communication technologies such as Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 12, for example.
  • multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter.
  • a separate receiver and a separate transmitter are used.
  • the receiver 3 and the transmitter 4 are combined into a transceiver.
  • the camera 8 may comprise a CMOS or CCD sensor, for example.
  • the mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector.
  • the invention may be implemented using a computer program running on one or more processors.
  • a bridge is used to control lighting devices 15-17.
  • lighting devices 15-17 are controlled without using a bridge.
  • the system is a mobile device.
  • the system of the invention is a different device, e.g. a bridge.
  • the system of the invention comprises a single device.
  • the system of the invention comprises a plurality of devices.
  • Fig. 2 shows a second embodiment of the system of the invention: a bridge 21, e.g. a Philips Hue bridge.
  • the bridge 21 and the lighting devices 15-17 are part of the lighting system 20.
  • the bridge 21 comprises a receiver 23, a transmitter 24, a processor 25, and a memory 27.
  • the processor 25 is configured to detect that a mobile device 11 is being used for video calling and use the receiver 23 to determine at least one parameter for each of the lighting devices 15-17 from at least one input signal.
  • the at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device.
  • the processor 25 is configured to determine settings for the lighting devices 15-17 based on the determined parameters and use the transmitter 24 to control the lighting devices 15-17 according to the determined settings upon detecting that the mobile device 11 is being used for video calling.
  • the processor 25 is configured to receive a first input signal of the at least one input signal from the mobile device 11 and further input signals of the at least one input signal from the lighting devices 15-17. In the embodiment of Fig. 2, the processor 25 is configured to determine a spatial location of each of the lighting devices 15-17 relative to a spatial location of the mobile device 11 based on radio frequency transmissions by multiple devices.
  • the input signals may be the messages described in relation to Fig. 1.
  • the mobile device 11 also transmits such messages.
  • the bridge 21 may then determine the spatial locations of the lighting devices 15-17 relative to the spatial location of the mobile device 11 in a similar manner as the mobile device 1 of Fig. 1 determines the spatial locations the lighting devices 15-17 relative to the spatial location of the mobile device 1.
  • the mobile device 11 further includes its orientation in the messages broadcast or transmitted to the bridge 21 and also informs the bridge 21 when a video call has started and stopped. These messages may be transmitted by an app running on the mobile device 11, for example.
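The text does not specify a wire format for these app-to-bridge messages; as an illustration only, such a message could be a small JSON payload carrying the device's orientation and the video-call start/stop status (all field names are assumptions).

```python
import json

def call_status_message(device_id, in_call, heading_deg):
    """Build a hypothetical status message an app on the mobile device
    might broadcast or send to the bridge: call state plus the device's
    current compass orientation."""
    return json.dumps({
        "device": device_id,
        "video_call_active": in_call,
        "orientation_deg": heading_deg,
    })

msg = call_status_message("phone-1", True, 270.0)
```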
  • the bridge 21 comprises one processor 25.
  • the bridge 21 comprises multiple processors.
  • the processor 25 of the bridge 21 may be a general-purpose processor, e.g. ARM-based, or an application-specific processor.
  • the processor 25 of the bridge 21 may run a Unix-based operating system for example.
  • the memory 27 may comprise one or more memory units.
  • the memory 27 may comprise one or more hard disks and/or solid-state memory, for example.
  • the memory 27 may be used to store a table of connected lights, for example.
  • the receiver 23 and the transmitter 24 may use one or more wired or wireless communication technologies such as Ethernet to communicate with the wireless LAN access point 12, for example.
  • multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter.
  • a separate receiver and a separate transmitter are used.
  • the receiver 23 and the transmitter 24 are combined into a transceiver.
  • the bridge 21 may comprise other components typical for a network device such as a power connector.
  • the invention may be implemented using a computer program running on one or more processors.
  • Fig. 3 shows a third embodiment of the system of the invention: a lighting system 40.
  • the lighting system 40 comprises three lighting devices 45-47.
  • the lighting devices 45-47 each comprise a receiver 53, a transmitter 54, a processor 55, a memory 57, and a light source 59.
  • the processors 55 are configured to detect that a mobile device 11 is being used for video calling and use the receivers 53 to determine at least one parameter for each of the lighting devices 45-47 from at least one input signal.
  • the at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device 11.
  • the processors 55 are further configured to determine settings for the lighting devices 45-47 based on the determined parameters and use their internal control interfaces to control the lighting devices 45-47, i.e. light sources 59, according to the determined settings upon detecting that the mobile device 11 is being used for video calling.
  • the invention is implemented in a distributed manner.
  • each processor 55 is configured to receive an input signal from the mobile device 11. In the embodiment of Fig. 3, each processor 55 is configured to determine a spatial location of the lighting device relative to a spatial location of the mobile device 11 based on radio frequency transmissions by multiple devices, e.g. the mobile device 11, the bridge 13 and the other lighting devices.
  • the input signals may be the messages described in relation to Fig. 1.
  • the mobile device 11 also transmits such messages.
  • Each of the lighting devices 45-47 may then determine its spatial location relative to the spatial location of the mobile device 11, i.e. determine the spatial location of the mobile device 11 relative to its spatial location, in a similar manner as the mobile device 1 of Fig. 1 determines the spatial locations of the lighting devices 15-17 relative to the spatial location of the mobile device 1.
  • the mobile device 11 further includes its orientation in the messages broadcast or transmitted to the lighting devices 45-47 and also informs the lighting devices 45-47 when a video call has started and stopped. These messages may be transmitted by an app running on the mobile device 11, for example.
  • the lighting devices 45-47 each comprise one processor 55.
  • one or more of the lighting devices 45-47 comprise multiple processors.
  • the processors 55 of the lighting devices 45-47 may be general-purpose processors, e.g. ARM-based, or application-specific processors, for example.
  • the processors 55 of the lighting devices 45-47 may run a Unix-based operating system, for example.
  • the memory 57 may comprise one or more memory units.
  • the memory 57 may comprise solid-state memory, for example.
  • the memory 57 may be used to store light settings associated with light scene identifiers, for example.
  • the light sources 59 may each comprise one or more LED diodes, for example.
  • the receivers 53 and the transmitters 54 may use one or more wired or wireless communication technologies such as Zigbee to communicate with the bridge 13, for example.
  • multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter.
  • a separate receiver and a separate transmitter are used.
  • the receiver 53 and the transmitter 54 are combined into a transceiver.
  • the lighting devices 45-47 may comprise other components typical for a lighting device such as a power connector.
  • the invention may be implemented using a computer program running on one or more processors.
  • the mobile device and lighting devices communicate via a wireless LAN access point and a bridge.
  • the mobile device and the lighting devices communicate directly, e.g. using Bluetooth Low Energy (BLE).
  • the video call may be performed using a mobile communication network such as LTE, for example.
  • a first variant on the lighting system 40 is shown in Fig. 4.
  • a lighting system 200 comprises three lighting devices 205-207.
  • the mobile device 11 connects to one of the lighting devices, lighting device 205, which forwards messages to and from the other lighting devices, lighting devices 206 and 207, e.g. using wireless mesh technology such as Zigbee.
  • a second variant on the lighting system 40 is shown in Fig. 5.
  • a lighting system 210 comprises three lighting devices 215-217.
  • the mobile device 11 connects to each of the lighting devices 215-217, e.g. using BLE or Zigbee.
  • Fig. 6 shows an example of uniform lighting.
  • Person 71 is holding a tablet 73 and is sitting below lighting device 16.
  • a lighting device 15 is located in front of the person 71 and a lighting device 17 is located behind the person 71.
  • Lighting devices 15,16 and 17 generate light effects 65, 66 and 67, respectively.
  • Light effects 65, 66, and 67 have the same light output levels, e.g. 75%.
  • a first embodiment of the method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place is shown in Fig. 7.
  • a step 101 comprises detecting that a mobile device is being used for video calling.
  • a step 103 comprises determining at least one parameter for each of the plurality of lighting devices from at least one input signal. The at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device.
  • step 103 comprises a sub step 121.
  • Step 121 comprises determining a distance parameter which represents a distance between the lighting device and the mobile device.
  • a step 123 comprises determining a first set of the plurality of lighting devices with a distance smaller than a threshold and a second set of the plurality of lighting devices with a distance larger than the threshold based on the determined distance parameters.
  • a step 105 comprises determining settings for the plurality of lighting devices based on the determined parameters.
  • step 105 comprises a sub step 125.
  • Step 125 comprises determining a first set of light output levels for the first set and a second set of light output levels for the second set. Each of the light output levels in the first set is higher than each of the light output levels in the second set.
  • a step 107 comprises controlling the plurality of lighting devices according to the determined settings upon detecting that the mobile device is being used for video calling.
  • a step 127 comprises detecting that the mobile device is no longer being used for video calling.
  • a step 129 comprises controlling the plurality of lighting devices according to different settings upon detecting that the mobile device is no longer being used for video calling. The different settings may be settings used by the plurality of lighting devices before the plurality of lighting devices were controlled according to the determined settings. Step 101 is repeated after step 129.
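Steps 121 to 125 of this first embodiment can be sketched as follows; the one-metre threshold and the percentage levels are illustrative choices, not values fixed by the application:

```python
def determine_settings_by_distance(distances, threshold=1.0,
                                   near_level=90, far_level=10):
    """Step 123 (sketch): split the lighting devices into a first set
    (closer to the mobile device than the threshold) and a second set
    (farther away). Step 125: every level in the first set is higher
    than every level in the second set; here one level per set is used."""
    return {device: near_level if distance < threshold else far_level
            for device, distance in distances.items()}

# distances in metres between each lighting device and the mobile device
settings = determine_settings_by_distance(
    {"lamp15": 2.5, "lamp16": 0.6, "lamp17": 3.0})
# -> {"lamp15": 10, "lamp16": 90, "lamp17": 10}
```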
  • A second embodiment of the method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place is shown in Fig. 8.
  • step 103 comprises a sub step 141 instead of sub step 121
  • the method comprises a step 143 instead of step 123
  • step 105 comprises a sub step 145 instead of sub step 125.
  • Step 141 comprises determining a direction parameter which indicates whether the lighting device is located in front of a person holding the mobile device or behind a person holding the mobile device.
  • Step 143 comprises determining a first set of the plurality of lighting devices in front of the person and a second set of the plurality of lighting devices behind the person based on the determined direction parameters.
  • Step 145 comprises determining a first set of light output levels for the first set and a second set of light output levels for the second set. Each of the light output levels in the first set is higher than each of the light output levels in the second set.
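Steps 141 to 145 of this second embodiment could be sketched as below. Deriving the direction parameter from the mobile device's orientation is one option mentioned in the application; the 90-degree classification rule and the percentage levels are assumptions made for illustration:

```python
def direction_parameter(device_bearing_deg, orientation_deg):
    """Step 141 (sketch): a lighting device counts as 'in front of' the
    person when its bearing, seen from the mobile device, lies within 90
    degrees of the direction the person faces (derived here from the
    device orientation); otherwise it counts as 'behind'."""
    diff = (device_bearing_deg - orientation_deg + 180) % 360 - 180
    return "front" if abs(diff) <= 90 else "behind"

def determine_settings_by_direction(directions, front_level=90, back_level=10):
    """Steps 143-145: devices in front of the person get the higher
    output level, devices behind get the lower one."""
    return {device: front_level if d == "front" else back_level
            for device, d in directions.items()}

directions = {"lamp15": direction_parameter(0, 0),     # in front of the person
              "lamp17": direction_parameter(180, 0)}   # behind the person
settings = determine_settings_by_direction(directions)
# -> {"lamp15": 90, "lamp17": 10}
```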
  • A third embodiment of the method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place is shown in Fig. 9.
  • step 103 comprises sub step 141 of the second embodiment in addition to sub step 121 of the first embodiment, the method comprises a step 161 instead of step 123 and step 105 comprises a sub step 163 instead of sub step 125.
  • Step 161 comprises determining a first set of the plurality of lighting devices with a distance smaller than a threshold based on the distance parameters determined in step 121 and a second set of the plurality of lighting devices with a distance larger than the threshold. Step 161 further comprises determining two subsets of the second set of lighting devices based on the direction parameters determined in step 141.
  • the first subset comprises the lighting devices of the second set which are in front of the person holding the mobile device.
  • the second subset comprises the lighting devices of the second set which are behind the person holding the mobile device.
  • Step 163 comprises determining a first set of light output levels for the first subset of the second set and a second set of light output levels for the second subset of the second set. Each of the light output levels in the first set of light output levels (i.e. of the first subset) is higher than each of the light output levels in the second set of light output levels (i.e. of the second subset). Step 163 further comprises determining a third set of light output levels for the first set of lighting devices. The third set of light output levels comprises dimmed light settings.
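Combining the distance and direction parameters, steps 161 and 163 of this third embodiment can be sketched as follows. The threshold and the percentage levels are illustrative only; they mirror the worked example of Fig. 10 (overhead light dimmed to 50%, light in front at 90%, light behind at 10%):

```python
def determine_settings(distances, directions, threshold=1.0,
                       overhead_level=50, front_level=90, back_level=10):
    """Step 161 (sketch): devices closer than the threshold form the
    first set; the remaining devices are split by direction parameter
    into two subsets. Step 163: nearby (e.g. overhead) devices get a
    dimmed level, far devices in front of the person get the highest
    level, far devices behind the person get the lowest level."""
    settings = {}
    for device, distance in distances.items():
        if distance < threshold:
            settings[device] = overhead_level
        elif directions[device] == "front":
            settings[device] = front_level
        else:
            settings[device] = back_level
    return settings

# the Fig. 10 situation: lamp 16 overhead, lamp 15 in front, lamp 17 behind
settings = determine_settings(
    {"lamp15": 2.5, "lamp16": 0.6, "lamp17": 2.5},
    {"lamp15": "front", "lamp16": "front", "lamp17": "behind"})
# -> {"lamp15": 90, "lamp16": 50, "lamp17": 10}
```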
  • Fig. 10 shows an example of lighting rendered with the method of Fig. 9.
  • the lighting devices 15-17 now generate light effects 85-87, respectively.
  • As lighting device 15 is located more than a threshold distance (e.g. 1 meter; this may depend on the type of lighting device) away from the person 71 and is located in front of the person 71, it is included in the first subset of the second set of lighting devices.
  • As lighting device 16 is located less than the threshold distance away from the person 71, it is included in the first set of lighting devices.
  • As lighting device 17 is located more than the threshold distance away from the person 71 and is located behind the person 71, it is included in the second subset of the second set of lighting devices.
  • Each of the light output levels in the first set of light output levels is higher than each of the light output levels in the second set of light output levels and therefore the light output level of lighting device 15 is higher, e.g. 90% of its maximum, than the light output level of lighting device 17, which may be 10% of its maximum, for example.
  • the third set of light output levels comprises dimmed light settings and therefore the light output level of lighting device 16 is dimmed, i.e. less than 100% of its maximum, e.g. 50% of its maximum. This provides good lighting conditions for capturing video of the person 71 with the tablet 73.
  • the lighting devices 15, 16 and 17 are assumed to be the same type of lighting devices with a normal maximum light output and the example therefore indicates percentages of their maximum as possible setting. If lighting devices 15 and 17 are different types of lighting devices, then one of the lighting devices may be dimmed more, but still have a higher light output than the other lighting device. However, it is the light output level that is important and not the dim level.
  • the light output level of lighting device 16 typically also depends on the type of lighting devices. For example, a lighting device with a high maximum light output is typically dimmed more than a lighting device with a low maximum light output.
  • steps 127 and 129 of Fig. 7 have been omitted and step 101 is not repeated after step 129.
  • steps 127 and 129 of Fig. 7 are included as well and step 101 is repeated after step 129 as well.
  • two subsets of the second set of lighting devices are determined based on the direction parameters.
  • three subsets of the second set of lighting devices are determined based on the direction parameters.
  • the first subset comprises the lighting devices of the second set which are in front of the person holding the mobile device.
  • the second subset comprises the lighting devices of the second set which are behind the person holding the mobile device.
  • the third subset comprises the lighting devices of the second set which are left and right of the person holding the mobile device.
  • in Fig. 11, the example of Fig. 10 has been extended to illustrate this extension.
  • Fig. 11 depicts a top view of the light effects 85, 86 and 87 of Fig. 10.
  • Light effect 85 is relatively brightest to have the face well lit.
  • Light effect 86 is a dimmed light effect to avoid facial shadows.
  • Light effect 87 is relatively dimmest to focus less on the background.
  • the dimmest level is preferably chosen in such a way that it respects the minimum levels required in the room considering safety and other lighting requirements of the target environment.
  • further lighting devices render a light effect 232 left from the tablet 73 (and the person holding the tablet) and a light effect 234 right from the tablet 73 (and the person holding the tablet).
  • Fig. 12 depicts a top view of an extension of the example of Fig. 11 in which multiple video calls take place. Often, two users simultaneously involved in video calls maintain a certain distance to avoid audio disturbances. However, it may happen that these two users come in close proximity of each other and adaptive lighting may also be beneficial in those situations.
  • a tablet 251 with a +90-degree orientation relative to tablet 73 and a tablet 253 with a -90-degree orientation relative to tablet 73 are used to make video calls.
  • the lighting can be optimally adapted for all tablets.
  • the light effect 85 is not only in front of tablet 73 and the person holding tablet 73, but also in front of tablet 251 and the person holding tablet 251.
  • the light effect 87 is not only behind tablet 73 and the person holding tablet 73, but also behind tablet 253 and the person holding tablet 253.
  • the light effect 234 is not only at a side of tablet 73 and the person holding tablet 73, but also at a side of tablets 251 and 253 and the persons holding them.
  • the light effects 261 and 263 are above tablets 251 and 253, respectively, and are dimmed light effects to avoid facial shadows.
  • the light effect 262 is behind tablet 251 and the person holding tablet 251 and is relatively dimmest to focus less on the background.
  • the light effect 264 is in front of tablet 253 and the person holding tablet 253 and is relatively brightest to have the face well lit.
  • the orientations of the tablets 73, 251 and 253 are such that the lighting can be optimally adapted for all tablets.
  • an orientation hint/tip may be provided to one or more of the tablets as a pop-up on the tablet prompting the user to orient the device to have the best video call experience.
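One way such a hint could be triggered (a hypothetical heuristic, not described in the application) is to detect lighting devices that would need to be bright for one video call but dim for another:

```python
def front_or_behind(lamp_bearing_deg, orientation_deg):
    """'front' when the lamp lies within 90 degrees of the direction the
    person faces, else 'behind' (the 90-degree rule is an assumption)."""
    diff = (lamp_bearing_deg - orientation_deg + 180) % 360 - 180
    return "front" if abs(diff) <= 90 else "behind"

def conflicting_lamps(lamp_bearings, orientations):
    """Return lamps classified as 'front' for one call but 'behind' for
    another; a pop-up could then suggest reorienting one of the devices."""
    conflicts = []
    for lamp, bearings in lamp_bearings.items():
        classes = {front_or_behind(bearings[dev], orientations[dev])
                   for dev in orientations}
        if len(classes) > 1:
            conflicts.append(lamp)
    return conflicts

# a lamp lying between two tablets whose users face the same way
print(conflicting_lamps({"lampA": {"t1": 0, "t2": 180}},
                        {"t1": 0, "t2": 0}))   # ['lampA']
```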
  • Fig. 13 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 7 to 9.
  • the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via a system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.
  • the memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310.
  • the local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code.
  • a bulk storage device may be implemented as a hard drive or other persistent data storage device.
  • the processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution.
  • the processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
  • I/O devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system.
  • input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like.
  • output devices may include, but are not limited to, a monitor or a display, speakers, or the like.
  • Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
  • the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 13 with a dashed line surrounding the input device 312 and the output device 314).
  • a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”.
  • input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
  • a network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks.
  • the network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks.
  • Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
  • the memory elements 304 may store an application 318.
  • the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices.
  • the data processing system 300 may further execute an operating system (not shown in Fig. 13) that can facilitate execution of the application 318.
  • the application 318 being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302.
  • the data processing system 300 may be configured to perform one or more operations or method steps described herein.
  • Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein).
  • the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal.
  • the program(s) can be contained on a variety of transitory computer-readable storage media.
  • Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
  • the computer program may be run on the processor 302 described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

A method of dynamically controlling settings of a plurality of lighting devices (15-17) in a spatial area in which a video call takes place comprises detecting that a mobile device (73) is being used for video calling and determining at least one parameter for each of the plurality of lighting devices from at least one input signal. The at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device. The method further comprises determining settings for the plurality of lighting devices based on the determined parameters and controlling the plurality of lighting devices according to the determined settings upon detecting that the mobile device is being used for video calling.

Description

DYNAMICALLY CONTROLLING LIGHT SETTINGS FOR A VIDEO CALL BASED ON A SPATIAL LOCATION OF A MOBILE DEVICE
FIELD OF THE INVENTION
The invention relates to a system for dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place.
The invention further relates to a method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place.
The invention also relates to a computer program product enabling a computer system to perform such a method.
BACKGROUND OF THE INVENTION
Lighting plays a crucial role in a video conference/call. Most of the times, not every participant of a video call would be equipped with a professional video conferencing grade lighting environment and camera and this may lead to a poor presentation of a participant. It is desirable to improve this participant’s presentation in the video call.
US 2017/0324933 A1 discloses a video-enabled communication system that includes a camera to acquire an image of a local participant during a video communication session and a control unit that selects a lighting configuration for the local participant to be captured by the camera for provision to a remote endpoint for display to another participant. The lighting configuration selection is based on information describing a local participant or context of the video communication session. In an embodiment, a microphone is used to detect an active speaker and face detection is used to approximately locate spatially the active speaker, thereby enabling the control unit to select appropriate lighting elements for optimal lighting.
A drawback of the lighting system of US 2017/0324933 A1 is that the video-enabled communication system is not well suited for video calls involving a mobile device held by a person. For example, the camera of a mobile device cannot be used to locate the active speaker and a map showing relative spatial locations of endpoints compared to lighting element locations is not enough when an endpoint is moving around.
SUMMARY OF THE INVENTION
It is a first object of the invention to provide a system, which is able to control light settings in relation to a video call involving a mobile device held by a person.
It is a second object of the invention to provide a method, which can be used to control light settings in relation to a video call involving a mobile device held by a person.
In a first aspect of the invention, a system for dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place comprises at least one input interface, at least one control interface, and at least one processor configured to detect that a mobile device is being used for video calling, use said at least one input interface to determine at least one parameter for each of said plurality of lighting devices from at least one input signal, said at least one parameter depending on a spatial location of said lighting device relative to a spatial location of said mobile device, determine settings for said plurality of lighting devices based on said determined parameters, and use said at least one control interface to control said plurality of lighting devices according to said determined settings upon detecting that said mobile device is being used for video calling.
In a video call with indoor participants using various devices, especially handhelds, the lighting conditions (e.g. uniform brightness) are usually not conducive to provide best video quality. By adjusting the light settings of the lighting devices based on parameter(s) depending on the spatial locations of the lighting devices relative to the spatial location of the mobile device, good light conditions can be created even if the video call involves a mobile device held by a person. Furthermore, by controlling the lighting devices upon detecting that the mobile device is being used for video calling, the lighting conditions are automatically and dynamically adjusted to meet current requirements. Thus, making the lighting system of the room aware of video conferencing and the participant’s location can help it adjust the lighting to enhance the video capture quality.
Said system may comprise said mobile device and said at least one processor may be configured to receive said at least one input signal from each of said plurality of lighting devices. Alternatively, said system may comprise said plurality of lighting devices and said at least one processor may be configured to receive said at least one input signal from said mobile device. Alternatively, said at least one processor may be configured to receive a first input signal of said at least one input signal from said mobile device and further input signals of said at least one input signal from said plurality of lighting devices. The latter may be beneficial if the system is a (light) bridge, for example. Said at least one parameter may comprise a distance parameter which represents a distance between said lighting device and said mobile device. By determining light settings in dependence on such a distance parameter, light conditions can be improved.
Said at least one processor may be configured to determine a set of said plurality of lighting devices with a distance smaller than a threshold based on said determined distance parameters and determine a dimmed light setting for said set of lighting devices. For example, overhead lights may be dimmed, e.g. to 50%, to prevent facial shadows caused by intense overhead lights. The dimmed light setting may correspond to a light output between 150 and 300 lumens, for example.
Said at least one processor may be configured to determine a first set of said plurality of lighting devices with a distance smaller than a threshold and a second set of said plurality of lighting devices with a distance larger than said threshold based on said determined distance parameters and determine a first set of light output levels for said first set and a second set of light output levels for said second set, each of said light output levels in said first set being higher than each of said light output levels in said second set. For example, by making sure that the lighting above the mobile device/participant is brighter than the background lighting, it becomes easier to see the participant. A camera’s auto ISO typically lowers due to high background lighting, thereby making the participant less clear. The first set of light output levels may be higher than 450 lumens, for example. The second set of light output levels may be lower than 50 lumens, for example.
Said at least one parameter may comprise a direction parameter which indicates whether said lighting device is located in front of a person holding said mobile device or behind a person holding said mobile device. By determining light settings in dependence on such a direction parameter, light conditions can be improved. Said at least one processor may be configured to determine said direction parameter based on an orientation of said mobile device, for example.
Said at least one processor may be configured to determine a first set of said plurality of lighting devices in front of said person and a second set of said plurality of lighting devices behind said person based on said determined direction parameters and determine a first set of light output levels for said first set and a second set of light output levels for said second set, each of said light output levels in said first set being higher than each of said light output levels in said second set. The light output behind the participant, i.e. the background lighting, should preferably be low, e.g. dimmed to 10% of its maximum, as a camera’s auto ISO typically lowers due to high background lighting, thereby making the participant less clear. However, the light output in front of the participant should preferably be high, e.g. set to 90% of its maximum, to get light reflecting of the face of the participant, thereby making it easier to see features of the face of the participant.
Said at least one processor may be configured to detect that said mobile device is no longer being used for video calling and use said at least one control interface to control said plurality of lighting devices according to different settings upon detecting that said mobile device is no longer being used for video calling. Said different settings may be settings used by said plurality of lighting devices before said plurality of lighting devices were controlled according to said determined settings, for example. By controlling the lighting devices upon detecting that the mobile device is no longer being used for video calling, the lighting conditions are automatically and dynamically optimized to meet current requirements more often.
Said at least one processor may be configured to determine a spatial location of said lighting device relative to a spatial location of said mobile device based on radio frequency transmissions by multiple devices. As GPS typically does not work indoors, RF beacons may be used to determine spatial locations indoors.
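The application does not prescribe how distances are derived from these radio frequency transmissions. One common approach, sketched below, is the log-distance path-loss model applied to the received signal strength of an RF beacon; the reference power at 1 m and the path-loss exponent used here are assumptions that would need per-device and per-room calibration:

```python
def rssi_to_distance(rssi_dbm, ref_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate the distance in metres to an RF beacon from its received
    signal strength (RSSI). ref_power_dbm is the expected RSSI at 1 m;
    both parameters vary with hardware and environment."""
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(rssi_to_distance(-59))   # 1.0 (at the reference distance)
print(rssi_to_distance(-79))   # 10.0
```

The resulting distance estimate could serve directly as the distance parameter described above, or several such estimates (e.g. from the bridge and the other lighting devices) could be combined to refine the relative spatial location.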
In a second aspect of the invention, a method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place comprises detecting that a mobile device is being used for video calling, determining at least one parameter for each of said plurality of lighting devices from at least one input signal, said at least one parameter depending on a spatial location of said lighting device relative to a spatial location of said mobile device, determining settings for said plurality of lighting devices based on said determined parameters, and controlling said plurality of lighting devices according to said determined settings upon detecting that said mobile device is being used for video calling. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage-medium storing the computer program are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
A non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place. The executable operations comprise detecting that a mobile device is being used for video calling, determining at least one parameter for each of said plurality of lighting devices from at least one input signal, said at least one parameter depending on a spatial location of said lighting device relative to a spatial location of said mobile device, determining settings for said plurality of lighting devices based on said determined parameters, and controlling said plurality of lighting devices according to said determined settings upon detecting that said mobile device is being used for video calling.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product.
Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system." Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:
Fig. 1 is a block diagram of a first embodiment of the system;
Fig. 2 is a block diagram of a second embodiment of the system;
Fig. 3 is a block diagram of a third embodiment of the system;
Fig. 4 is a block diagram of a fourth embodiment of the system;
Fig. 5 is a block diagram of a fifth embodiment of the system;
Fig. 6 shows an example of uniform lighting;
Fig. 7 is a flow diagram of a first embodiment of the method;
Fig. 8 is a flow diagram of a second embodiment of the method;
Fig. 9 is a flow diagram of a third embodiment of the method;
Fig. 10 shows an example of lighting rendered with the method of Fig. 9;
Fig. 11 depicts a top view of an extension of the example of Fig. 10;
Fig. 12 depicts a top view of an extension of the example of Fig. 11 in which multiple video calls take place; and
Fig. 13 is a block diagram of an exemplary data processing system for performing the method of the invention.
Corresponding elements in the drawings are denoted by the same reference numeral.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Fig. 1 shows a first embodiment of the system for dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place. In the example of Fig. 1, the plurality of lighting devices comprises lighting devices 15-17.
In the embodiment of Fig. 1, the system is a mobile device 1. The mobile device 1 is connected to a wireless LAN access point 12. A bridge 13, e.g. a Philips Hue bridge, is also connected to the wireless LAN access point 12, e.g. via Ethernet. The bridge 13 communicates with the lighting devices 15-17, e.g. using Zigbee technology. The lighting devices 15-17 may be Philips Hue lights, for example. A lighting system 10 comprises the bridge 13 and the lighting devices 15-17.
The mobile device 1 comprises a receiver 3, a transmitter 4, a processor 5, memory 7, a face-oriented camera 8 and a display 9. The processor 5 is configured to detect that the mobile device 1 is being used for video calling and use the receiver 3 to determine at least one parameter for each of the lighting devices 15-17 from at least one input signal. The at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device 1.
The processor 5 is further configured to determine settings for the lighting devices 15-17 based on the determined parameters and use the transmitter 4 to control the lighting devices 15-17 according to the determined settings upon detecting that the mobile device 1 is being used for video calling.
In the embodiment of Fig. 1, each set of at least one parameter comprises a distance parameter and a direction parameter. The distance parameter represents a distance between the applicable lighting device and the mobile device 1. The direction parameter indicates whether the lighting device is located in front of or behind a person holding the mobile device 1. The direction parameter may specify one of "in front of user", "behind user", "above user", or "next to user", for example.
In the embodiment of Fig. 1, the processor 5 is configured to determine a spatial location of each of the lighting devices 15-17 relative to the mobile device 1 based on radio frequency transmissions by multiple devices. In the embodiment of Fig. 1, the lighting devices 15-17 (and optionally the bridge 13) transmit, e.g. broadcast, messages from which the processor 5 is able to determine a Received Signal Strength Indicator (RSSI) and which comprise RSSIs of messages received by the device in question from other lighting devices. Thus, the processor 5 receives an input signal from each of the lighting devices 15-17.
With the help of the RSSIs determined by the processor 5 and the RSSIs received from the lighting devices 15-17, the processor 5 is able to determine the distances between the different devices sufficiently accurately and thus the spatial locations of the lighting devices 15-17 relative to the mobile device 1. In the embodiment of Fig. 1, the processor 5 is further configured to determine the direction parameter based on the orientation of the mobile device 1.
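The RSSI-based distance estimation described above may be sketched with a log-distance path-loss model; the function name, the default transmit power (an assumed RSSI of -59 dBm at 1 m) and the path-loss exponent are illustrative assumptions, not part of the embodiment:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate the distance in metres from an RSSI measurement using a
    log-distance path-loss model. tx_power_dbm is the expected RSSI at
    1 m; both defaults are deployment-specific assumptions."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```

For example, with these defaults a received signal of -79 dBm corresponds to an estimated distance of about 10 m. In practice, the exponent would be calibrated for the room, as RSSI varies with obstacles and multipath.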
The mobile device 1 may comprise a magnetometer for determining the orientation of the mobile device 1 and the processor 5 may be configured to determine the orientation of the display 9 and the face-oriented camera 8 of the mobile device 1 using such sensors, for example. In this case, the lighting devices 15-17 may identify in their messages in which part of the building they have been installed, e.g. "North", "North-West", or "South". This may be configured in the lighting devices 15-17 during commissioning. Thus, the spatial locations of the lighting devices 15-17 relative to the mobile device 1 comprise coarse absolute spatial locations in this case.
Alternatively, the processor 5 may be configured to analyze an image captured by the face-oriented camera 8 to recognize one or more of the lighting devices 15-17 (above or behind the person holding the mobile device 1) in the image, e.g. based on identifiers transmitted using Visible Light Communication (VLC) or by using object recognition, and determine the orientation of the mobile device 1 relative to the lighting devices 15-17 based on this analysis.
The processor 5 then uses the distances between the mobile device 1 and each of the lighting devices 15-17 and the determined orientation of the face-oriented camera 8 of the mobile device 1 to determine the setting for the respective lighting device, e.g. dim up the luminaire(s) in front of the person holding the mobile device 1 and dim down the luminaire(s) in the background. The person making the video call will typically move while making the video call, e.g. turn and/or walk around, and/or change the way in which he holds his mobile device 1. By repeatedly determining the spatial location of the mobile device 1 relative to the lighting devices 15-17 and the orientation of the mobile device 1, and repeatedly determining settings for the lighting devices 15-17, good lighting conditions may be maintained while the video call is going on.
The user may be provided an option to disable the adaptive lighting behavior. A user may be able to manually override the adaptive lighting behavior using a manual light switch, for example. The adaptive lighting control could then be turned off until the manual override is revoked and/or until a timer expires. If the lighting devices are controlled based on input from a light sensor, this behavior may be overridden while the adaptive lighting is being used.
In the embodiment of the mobile device 1 shown in Fig. 1, the mobile device 1 comprises one processor 5. In an alternative embodiment, the mobile device 1 comprises multiple processors. The processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from ARM or Qualcomm or an application-specific processor. The processor 5 of the mobile device 1 may run an Android or iOS operating system for example. The display 9 may comprise an LCD or OLED display panel, for example. The display 9 may be a touch screen, for example. The processor 5 may use this touch screen to provide a user interface, for example. The memory 7 may comprise one or more memory units. The memory 7 may comprise solid state memory, for example.
The receiver 3 and the transmitter 4 may use one or more wireless communication technologies such as Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 12, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in Fig. 1, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 3 and the transmitter 4 are combined into a transceiver. The camera 8 may comprise a CMOS or CCD sensor, for example. The mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector. The invention may be implemented using a computer program running on one or more processors.
In the embodiment of Fig. 1, a bridge is used to control light devices 15-17. In an alternative embodiment, light devices 15-17 are controlled without using a bridge. In the embodiment of Fig. 1, the system is a mobile device. In an alternative embodiment, the system of the invention is a different device, e.g. a bridge. In the embodiment of Fig. 1, the system of the invention comprises a single device. In an alternative embodiment, the system of the invention comprises a plurality of devices.
Fig. 2 shows a second embodiment of the system of the invention: a bridge 21, e.g. a Philips Hue bridge. The bridge 21 and the lighting devices 15-17 are part of the lighting system 20. The bridge 21 comprises a receiver 23, a transmitter 24, a processor 25, and a memory 27.
The processor 25 is configured to detect that a mobile device 11 is being used for video calling and use the receiver 23 to determine at least one parameter for each of the lighting devices 15-17 from at least one input signal. The at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device.
The processor 25 is further configured to determine settings for the lighting devices 15-17 based on the determined parameters and use the transmitter 24 to control the lighting devices 15-17 according to the determined settings upon detecting that the mobile device 11 is being used for video calling.
In the embodiment of Fig. 2, the processor 25 is configured to receive a first input signal of the at least one input signal from the mobile device 11 and further input signals of the at least one input signal from the lighting devices 15-17. In the embodiment of Fig. 2, the processor 25 is configured to determine a spatial location of each of the lighting devices 15-17 relative to a spatial location of the mobile device 11 based on radio frequency transmissions by multiple devices.
The input signals may be the messages described in relation to Fig. 1. In the embodiment of Fig. 2, the mobile device 11 also transmits such messages. The bridge 21 may then determine the spatial locations of the lighting devices 15-17 relative to the spatial location of the mobile device 11 in a similar manner as the mobile device 1 of Fig. 1 determines the spatial locations of the lighting devices 15-17 relative to the spatial location of the mobile device 1.
The mobile device 11 further includes its orientation in the messages broadcast or transmitted to the bridge 21 and also informs the bridge 21 when a video call has started and stopped. These messages may be transmitted by an app running on the mobile device 11, for example.
In the embodiment of the bridge 21 shown in Fig. 2, the bridge 21 comprises one processor 25. In an alternative embodiment, the bridge 21 comprises multiple processors. The processor 25 of the bridge 21 may be a general-purpose processor, e.g. ARM-based, or an application-specific processor. The processor 25 of the bridge 21 may run a Unix-based operating system for example. The memory 27 may comprise one or more memory units. The memory 27 may comprise one or more hard disks and/or solid-state memory, for example. The memory 27 may be used to store a table of connected lights, for example.
The receiver 23 and the transmitter 24 may use one or more wired or wireless communication technologies such as Ethernet to communicate with the wireless LAN access point 12, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in Fig. 2, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 23 and the transmitter 24 are combined into a transceiver. The bridge 21 may comprise other components typical for a network device such as a power connector. The invention may be implemented using a computer program running on one or more processors.
Fig. 3 shows a third embodiment of the system of the invention: a lighting system 40. The lighting system 40 comprises three lighting devices 45-47. The lighting devices 45-47 each comprise a receiver 53, a transmitter 54, a processor 55, a memory 57, and a light source 59.
The processors 55 are configured to detect that a mobile device 11 is being used for video calling and use the receivers 53 to determine at least one parameter for each of the lighting devices 45-47 from at least one input signal. The at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device 11.
The processors 55 are further configured to determine settings for the lighting devices 45-47 based on the determined parameters and use their internal control interfaces to control the lighting devices 45-47, i.e. the light sources 59, according to the determined settings upon detecting that the mobile device 11 is being used for video calling. Thus, the invention is implemented in a distributed manner.
In the embodiment of Fig. 3, each processor 55 is configured to receive an input signal from the mobile device 11. In the embodiment of Fig. 3, each processor 55 is configured to determine a spatial location of the lighting device relative to a spatial location of the mobile device 11 based on radio frequency transmissions by multiple devices, e.g. the mobile device 11, the bridge 13 and the other lighting devices.
The input signals may be the messages described in relation to Fig. 1. In the embodiment of Fig. 3, the mobile device 11 also transmits such messages. Each of the lighting devices 45-47 may then determine its spatial location relative to the spatial location of the mobile device 11, i.e. determine the spatial location of the mobile device 11 relative to its spatial location, in a similar manner as the mobile device 1 of Fig. 1 determines the spatial locations of the lighting devices 15-17 relative to the spatial location of the mobile device 1.
The mobile device 11 further includes its orientation in the messages broadcast or transmitted to the lighting devices 45-47 and also informs the lighting devices 45-47 when a video call has started and stopped. These messages may be transmitted by an app running on the mobile device 11, for example.
In the embodiment of the lighting devices 45-47 shown in Fig. 3, the lighting devices 45-47 each comprise one processor 55. In an alternative embodiment, one or more of the lighting devices 45-47 comprise multiple processors. The processors 55 of the lighting devices 45-47 may be general-purpose processors, e.g. ARM-based, or application-specific processors, for example. The processors 55 of the lighting devices 45-47 may run a Unix-based operating system, for example. The memory 57 may comprise one or more memory units. The memory 57 may comprise solid-state memory, for example. The memory 57 may be used to store light settings associated with light scene identifiers, for example. The light sources 59 may each comprise one or more LEDs, for example.
The receivers 53 and the transmitters 54 may use one or more wired or wireless communication technologies such as Zigbee to communicate with the bridge 13, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in Fig. 3, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 53 and the transmitter 54 are combined into a transceiver. The lighting devices 45-47 may comprise other components typical for a lighting device such as a power connector. The invention may be implemented using a computer program running on one or more processors.
In the embodiments of Figs. 1 and 3, the mobile device and lighting devices communicate via a wireless LAN access point and a bridge. In variants on these embodiments, the mobile device and the lighting devices communicate directly, e.g. using Bluetooth Low Energy (BLE). In these alternative embodiments, the video call may be performed using a mobile communication network such as LTE, for example.
A first variant on the lighting system 40 is shown in Fig. 4. A lighting system 200 comprises three lighting devices 205-207. In this first variant, the mobile device 11 connects to one of the lighting devices, lighting device 205, which forwards messages to and from the other lighting devices, lighting devices 206 and 207, e.g. using wireless mesh technology such as Zigbee.
A second variant on the lighting system 40 is shown in Fig. 5. A lighting system 210 comprises three lighting devices 215-217. In this second variant, the mobile device 11 connects to each of the lighting devices 215-217, e.g. using BLE or Zigbee.
Fig. 6 shows an example of uniform lighting. Person 71 is holding a tablet 73 and is sitting below lighting device 16. A lighting device 15 is located in front of the person 71 and a lighting device 17 is located behind the person 71. Lighting devices 15, 16 and 17 generate light effects 65, 66 and 67, respectively. Light effects 65, 66, and 67 have the same light output levels, e.g. 75%.
A first embodiment of the method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place is shown in Fig. 7. A step 101 comprises detecting that a mobile device is being used for video calling. A step 103 comprises determining at least one parameter for each of the plurality of lighting devices from at least one input signal. The at least one parameter depends on a spatial location of the lighting device relative to a spatial location of the mobile device.
In the embodiment of Fig. 7, step 103 comprises a sub step 121. Step 121 comprises determining a distance parameter which represents a distance between the lighting device and the mobile device. A step 123 comprises determining a first set of the plurality of lighting devices with a distance smaller than a threshold and a second set of the plurality of lighting devices with a distance larger than the threshold based on the determined distance parameters.
A step 105 comprises determining settings for the plurality of lighting devices based on the determined parameters. In the embodiment of Fig. 7, step 105 comprises a sub step 125. Step 125 comprises determining a first set of light output levels for the first set and a second set of light output levels for the second set. Each of the light output levels in the first set is higher than each of the light output levels in the second set. A step 107 comprises controlling the plurality of lighting devices according to the determined settings upon detecting that the mobile device is being used for video calling.
A step 127 comprises detecting that the mobile device is no longer being used for video calling. A step 129 comprises controlling the plurality of lighting devices according to different settings upon detecting that the mobile device is no longer being used for video calling. The different settings may be settings used by the plurality of lighting devices before the plurality of lighting devices were controlled according to the determined settings. Step 101 is repeated after step 129.
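The distance-based partitioning of steps 121-125 may be sketched as follows; the threshold and the two output levels are illustrative assumptions and would in practice depend on the room and the types of lighting devices:

```python
def settings_by_distance(distances, threshold=1.0, near_level=90, far_level=10):
    """Assign a light output level (percent) to each lighting device:
    devices closer to the mobile device than the threshold (the first
    set) receive a higher level than devices farther away (the second
    set). `distances` maps a device identifier to metres."""
    return {device: (near_level if distance < threshold else far_level)
            for device, distance in distances.items()}
```

For example, a device 0.5 m from the mobile device would be set to 90% while devices 2.5 m and 3.0 m away would be set to 10%.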
A second embodiment of the method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place is shown in Fig. 8. In the second embodiment, compared to the first embodiment, step 103 comprises a sub step 141 instead of sub step 121, the method comprises a step 143 instead of step 123, and step 105 comprises a sub step 145 instead of sub step 125.
Step 141 comprises determining a direction parameter which indicates whether the lighting device is located in front of a person holding the mobile device or behind a person holding the mobile device. Step 143 comprises determining a first set of the plurality of lighting devices in front of the person and a second set of the plurality of lighting devices behind the person based on the determined direction parameters.
Step 145 comprises determining a first set of light output levels for the first set and a second set of light output levels for the second set. Each of the light output levels in the first set is higher than each of the light output levels in the second set.
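The direction-based partitioning of steps 141-145 may be sketched in a similar way; the direction strings match the examples given for the direction parameter above, and the output levels are again illustrative assumptions:

```python
def settings_by_direction(directions, front_level=90, back_level=10):
    """Assign a light output level (percent) from a direction parameter:
    devices in front of the person holding the mobile device (the first
    set) are lit brighter than devices behind that person (the second
    set). `directions` maps a device identifier to a direction string."""
    return {device: (front_level if direction == "in front of user" else back_level)
            for device, direction in directions.items()}
```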
A third embodiment of the method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place is shown in Fig. 9. In the third embodiment, compared to the first embodiment, step 103 comprises sub step 141 of the second embodiment in addition to sub step 121 of the first embodiment, the method comprises a step 161 instead of step 123, and step 105 comprises a sub step 163 instead of sub step 125.
Step 161 comprises determining a first set of the plurality of lighting devices with a distance smaller than a threshold based on the distance parameters determined in step 121 and a second set of the plurality of lighting devices with a distance larger than the threshold. Step 161 further comprises determining two subsets of the second set of lighting devices based on the direction parameters determined in step 141. The first subset comprises the lighting devices of the second set which are in front of the person holding the mobile device. The second subset comprises the lighting devices of the second set which are behind the person holding the mobile device.
Step 163 comprises determining a first set of light output levels for the first subset of the second set and a second set of light output levels for the second subset of the second set. Each of the light output levels in the first set of light output levels (i.e. of the first subset) is higher than each of the light output levels in the second set of light output levels (i.e. of the second subset). Step 163 further comprises determining a third set of light output levels for the first set of lighting devices. The third set of light output levels comprise dimmed light settings.
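The combination of distance and direction parameters in steps 161-163 may be sketched as follows; the threshold and the three output levels (matching the 90%/50%/10% values of the Fig. 10 example below) are illustrative assumptions:

```python
def settings_combined(distances, directions, threshold=1.0):
    """Combine distance and direction parameters: devices within the
    threshold (the first set) are dimmed to soften facial shadows,
    while distant devices are brightest in front of the person and
    dimmest behind the person (all levels are illustrative)."""
    settings = {}
    for device, distance in distances.items():
        if distance < threshold:
            settings[device] = 50   # first set: dimmed nearby/overhead light
        elif directions[device] == "in front of user":
            settings[device] = 90   # first subset of the second set
        else:
            settings[device] = 10   # second subset of the second set
    return settings
```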
Fig. 10 shows an example of lighting rendered with the method of Fig. 9. The lighting devices 15-17 now generate light effects 85-87, respectively. As lighting device 15 is located more than a threshold distance (e.g. 1 meter; this may depend on the type of lighting device) away from the person 71 and is located in front of the person 71, it is included in the first subset of the second set of lighting devices. As lighting device 16 is located less than the threshold distance away from the person 71, it is included in the first set of lighting devices. As lighting device 17 is located more than the threshold distance away from the person 71 and is located behind the person 71, it is included in the second subset of the second set of lighting devices.
Each of the light output levels in the first set of light output levels is higher than each of the light output levels in the second set of light output levels and therefore the light output level of lighting device 15 is higher, e.g. 90% of its maximum, than the light output level of lighting device 17, which may be 10% of its maximum, for example. The third set of light output levels comprise dimmed light settings and therefore the light output level of lighting device 16 is dimmed, i.e. less than 100% of its maximum, e.g. 50% of its maximum. This provides good lighting conditions for capturing video of the person 71 with the tablet 73.
In the example of Fig. 10, the lighting devices 15, 16 and 17 are assumed to be the same type of lighting devices with a normal maximum light output and the example therefore indicates percentages of their maximum as possible setting. If lighting devices 15 and 17 are different types of lighting devices, then one of the lighting devices may be dimmed more, but still have a higher light output than the other lighting device. However, it is the light output level that is important and not the dim level. The light output level of lighting device 16 typically also depends on the type of lighting devices. For example, a lighting device with a high maximum light output is typically dimmed more than a lighting device with a low maximum light output.
In the embodiments of Figs. 8 and 9, steps 127 and 129 of Fig. 7 have been omitted and step 101 is not repeated after step 129. In variants of these embodiments, steps 127 and 129 of Fig. 7 are included as well and step 101 is repeated after step 129 as well.
In the embodiment of Fig. 9, two subsets of the second set of lighting devices are determined based on the direction parameters. In an extension of this embodiment, three subsets of the second set of lighting devices are determined based on the direction parameters. The first subset comprises the lighting devices of the second set which are in front of the person holding the mobile device. The second subset comprises the lighting devices of the second set which are behind the person holding the mobile device. The third subset comprises the lighting devices of the second set which are left and right of the person holding the mobile device.
In Fig. 11, the example of Fig. 10 has been extended to illustrate this extension. Fig. 11 depicts a top view of the light effects 85, 86 and 87 of Fig. 10. Light effect 85 is relatively brightest to have the face well lit. Light effect 86 is a dimmed light effect to avoid facial shadows. Light effect 87 is relatively dimmest to focus less on the background. The dimmest level is preferably chosen in such a way that it respects the minimum levels required in the room considering safety and other lighting requirements of the target environment. In the example of Fig. 11, further lighting devices render a light effect 232 left of the tablet 73 (and the person holding the tablet) and a light effect 234 right of the tablet 73 (and the person holding the tablet). The light effects 232 and 234 have a medium brightness level to enhance side features.

Fig. 12 depicts a top view of an extension of the example of Fig. 11 in which multiple video calls take place. Often, two users simultaneously involved in video calls maintain a certain distance to avoid audio disturbances. However, it may happen that these two users come in close proximity of each other, and adaptive lighting may also be beneficial in those situations. In Fig. 12, in addition to tablet 73, a tablet 251 with a +90-degree orientation relative to tablet 73 and a tablet 253 with a -90-degree orientation relative to tablet 73 are used to make video calls.
With the orientations of the tablets as depicted in Fig. 12, the lighting can be optimally adapted for all tablets. The light effect 85 is not only in front of tablet 73 and the person holding tablet 73, but also in front of tablet 251 and the person holding tablet 251. The light effect 87 is not only behind tablet 73 and the person holding tablet 73, but also behind tablet 253 and the person holding tablet 253. The light effect 234 is not only at a side of tablet 73 and the person holding tablet 73, but also at a side of tablets 251 and 253 and the persons holding them.
Furthermore, the light effects 261 and 263 are above tablets 251 and 253, respectively, and are dimmed light effects to avoid facial shadows. The light effect 262 is behind tablet 251 and the person holding tablet 251 and is relatively the dimmest, to draw less attention to the background. The light effect 264 is in front of tablet 253 and the person holding tablet 253 and is relatively the brightest, so that the face is well lit. In the example of Fig. 12, the orientations of the tablets 73, 251 and 253 are such that the lighting can be optimally adapted for all tablets. If the orientations are such that the lighting for one tablet cannot be optimized without worsening the lighting for another tablet, an orientation hint may be provided to one or more of the tablets, e.g. as a pop-up prompting the user to reorient the device for the best video call experience.
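The multi-call case — where two tablets share a lighting device and their desired settings may conflict — can be sketched as follows. This is an illustrative sketch only, not part of the disclosed embodiments; the conflict threshold and the rule that the brighter request wins are assumptions made for illustration.

```python
# Illustrative sketch (not from the disclosure): when two video calls
# share a lighting device, check whether one setting can satisfy both;
# if not, suggest that one user reorient. Threshold is an assumption.

CONFLICT_THRESHOLD = 40  # max tolerated spread in desired dim levels (%)

def resolve_shared_device(desired_levels):
    """desired_levels: {tablet_id: desired dim level in %} for a single
    shared lighting device. Returns (level, hint) where hint is None if
    the requests are compatible, else the id of the tablet whose user
    should be prompted to reorient."""
    levels = sorted(desired_levels.items(), key=lambda kv: kv[1])
    lowest, highest = levels[0], levels[-1]
    if highest[1] - lowest[1] <= CONFLICT_THRESHOLD:
        # Compatible: a mid-point level serves all calls.
        return (lowest[1] + highest[1]) // 2, None
    # Incompatible: keep the brighter request (face lighting wins) and
    # prompt the user of the tablet that wanted the dimmer setting.
    return highest[1], lowest[0]
```

With the orientations of Fig. 12 the requests agree and no hint is produced; with conflicting orientations the function identifies which tablet should receive the pop-up.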
Fig. 13 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 7 to 9.
As shown in Fig. 13, the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via the system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.
The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers. In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 13 with a dashed line surrounding the input device 312 and the output device 314). An example of such a combined device is a touch sensitive display, also sometimes referred to as a "touch screen display" or simply "touch screen". In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
As pictured in Fig. 13, the memory elements 304 may store an application 318. In various embodiments, the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 300 may further execute an operating system (not shown in Fig. 13) that can facilitate execution of the application 318. The application 318, being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302.
Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.
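The overall control flow that such a system performs — apply call-optimized settings when a video call is detected and restore the previously used settings when the call ends — can be sketched as follows. This is an illustrative sketch only, not part of the disclosed embodiments; the class and method names are assumptions made for illustration.

```python
# Illustrative sketch (not from the disclosure): remember each lighting
# device's setting before a video call, apply the call-optimized
# settings, and restore the remembered settings when the call ends.

class CallLightingController:
    def __init__(self, lights):
        # lights: {device_id: current dim level in %}
        self.lights = lights
        self._saved = None

    def on_call_started(self, call_settings):
        # Remember the pre-call settings so they can be restored later.
        self._saved = dict(self.lights)
        self.lights.update(call_settings)

    def on_call_ended(self):
        # Restore the settings the lighting devices used before the
        # call, i.e. the "different settings" of the description.
        if self._saved is not None:
            self.lights = self._saved
            self._saved = None
```

A controller wrapping real lighting devices would additionally push each restored level to the corresponding device over the control interface.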
Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression "non-transitory computer-readable storage media" comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.

CLAIMS:
1. A system (1,21,40) for dynamically controlling settings of a plurality of lighting devices (15-17,45-47) in a spatial area in which a video call takes place, said system (1,21,40) comprising:
at least one input interface (3,23,53);
at least one control interface (4,24,54); and
at least one processor (5,25,55) configured to:
- detect that a mobile device (1, 11) is being used for video calling,
- use said at least one input interface (3,23,53) to determine at least one parameter for each of said plurality of lighting devices (15-17,45-47) from at least one input signal, said at least one parameter depending on a spatial location of said lighting device (15-17,45-47) relative to a spatial location of said mobile device (1,11),
- determine settings, to enhance the quality of the video captured by the mobile device, for said plurality of lighting devices (15-17,45-47) based on said determined parameters,
- use said at least one control interface (4,24,54) to control said plurality of lighting devices (15-17,45-47) according to said determined settings upon detecting that said mobile device (1, 11) is being used for video calling,
- detect that said mobile device (1,11) is no longer being used for video calling, and
- use said at least one control interface (4,24,54) to control said plurality of lighting devices (15-17,45-47) according to different settings upon detecting that said mobile device (1,11) is no longer being used for video calling, wherein said different settings are settings used by said plurality of lighting devices (15-17,45-47) before said plurality of lighting devices (15-17,45-47) were controlled according to said determined settings.
2. A system (1,21,40) as claimed in claim 1, wherein said at least one parameter comprises a distance parameter which represents a distance between said lighting device (15-17,45-47) and said mobile device (1,11).
3. A system (1,21,40) as claimed in claim 2, wherein said at least one processor (5,25,55) is configured to determine a set of said plurality of lighting devices (15-17,45-47) with a distance smaller than a threshold based on said determined distance parameters and determine a dimmed light setting for said set of lighting devices.
4. A system (1,21,40) as claimed in claim 2, wherein said at least one processor (5,25,55) is configured to determine a first set of said plurality of lighting devices (15-17,45-47) with a distance smaller than a threshold and a second set of said plurality of lighting devices (15-17,45-47) with a distance larger than said threshold based on said determined distance parameters and determine a first set of light output levels for said first set and a second set of light output levels for said second set, each of said light output levels in said first set being higher than each of said light output levels in said second set.
5. A system (1,21,40) as claimed in claim 1 or 2, wherein said at least one parameter comprises a direction parameter which indicates whether said lighting device (15-17,45-47) is located in front of a person holding said mobile device (1,11) or behind a person holding said mobile device (1,11).
6. A system (1,21,40) as claimed in claim 5, wherein said at least one processor (5,25,55) is configured to determine said direction parameter based on an orientation of said mobile device (1,11).
7. A system (1,21,40) as claimed in claim 5, wherein said at least one processor (5,25,55) is configured to determine a first set of said plurality of lighting devices (15-17,45-47) in front of said person and a second set of said plurality of lighting devices (15-17,45-47) behind said person based on said determined direction parameters and determine a first set of light output levels for said first set and a second set of light output levels for said second set, each of said light output levels in said first set being higher than each of said light output levels in said second set.
8. A system (1) as claimed in claim 1 or 2, wherein said system (1) comprises said mobile device (1) and said at least one processor (5) is configured to receive said at least one input signal from each of said plurality of lighting devices (15-17).
9. A system (40) as claimed in claim 1 or 2, wherein said system (40) comprises said plurality of lighting devices (45-47) and said at least one processor (55) is configured to receive said at least one input signal from said mobile device (11).
10. A system (21) as claimed in claim 1 or 2, wherein said at least one processor (25) is configured to receive a first input signal of said at least one input signal from said mobile device (11) and further input signals of said at least one input signal from said plurality of lighting devices (15-17).
11. A system (1,21,40) as claimed in claim 1 or 2, wherein said at least one processor (5,25,55) is configured to determine a spatial location of said lighting device (15-17,45-47) relative to a spatial location of said mobile device (1,11) based on radio frequency transmissions by multiple devices.
12. A method of dynamically controlling settings of a plurality of lighting devices in a spatial area in which a video call takes place, said method comprising:
- detecting (101) that a mobile device is being used for video calling;
- determining (103) at least one parameter for each of said plurality of lighting devices from at least one input signal, said at least one parameter depending on a spatial location of said lighting device relative to a spatial location of said mobile device;
- determining (105) settings, to enhance the quality of the video captured by the mobile device, for said plurality of lighting devices based on said determined parameters;
- controlling (107) said plurality of lighting devices according to said determined settings upon detecting that said mobile device is being used for video calling,
- detecting that said mobile device is no longer being used for video calling; and
- controlling said plurality of lighting devices according to different settings upon detecting that said mobile device is no longer being used for video calling, wherein said different settings are settings used by said plurality of lighting devices before said plurality of lighting devices were controlled according to said determined settings.
13. A computer program or suite of computer programs comprising at least one software code portion or a computer program product storing at least one software code portion, the software code portion, when run on a computer system, being configured for enabling the method of claim 12 to be performed.
PCT/EP2020/059025 2019-04-04 2020-03-31 Dynamically controlling light settings for a video call based on a spatial location of a mobile device WO2020201240A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IN201941013728 2019-04-04
EP19175072.8 2019-05-17

Publications (1)

Publication Number Publication Date
WO2020201240A1 2020-10-08

Family

ID=70057148

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/059025 WO2020201240A1 (en) 2019-04-04 2020-03-31 Dynamically controlling light settings for a video call based on a spatial location of a mobile device

Country Status (1)

Country Link
WO (1) WO2020201240A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2851894A1 (en) * 2013-09-20 2015-03-25 EchoStar Technologies L.L.C. Environmental adjustments to perceive true content
US20160295663A1 (en) * 2015-04-02 2016-10-06 Elwha Llc Systems and methods for controlling lighting based on a display
US20170324933A1 (en) 2016-05-06 2017-11-09 Avaya Inc. System and Method for Dynamic Light Adjustment in Video Capture
US20180211440A1 (en) * 2015-07-21 2018-07-26 Dolby Laboratories Licensing Corporation Surround ambient light sensing, processing and adjustment



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20715371

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20715371

Country of ref document: EP

Kind code of ref document: A1