WO2020007596A1 - Activating one or more light settings associated with an automatically determined name - Google Patents


Info

Publication number
WO2020007596A1
Authority
WO
WIPO (PCT)
Prior art keywords
settings
name
words
lights
light
Application number
PCT/EP2019/065828
Other languages
French (fr)
Inventor
Remco MAGIELSE
Original Assignee
Signify Holding B.V.
Application filed by Signify Holding B.V. filed Critical Signify Holding B.V.
Publication of WO2020007596A1 publication Critical patent/WO2020007596A1/en

Classifications

    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/155Coordinated control of two or more light sources
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B45/00Circuit arrangements for operating light-emitting diodes [LED]
    • H05B45/20Controlling the colour of the light

Definitions

  • the invention relates to a system for activating a setting on a light.
  • the invention further relates to a method of activating a setting on a light.
  • the invention also relates to a computer program product enabling a computer system to perform such a method.
  • Philips Hue is a consumer connected lighting solution.
  • the system consists of a central controller, named a bridge, wirelessly controllable light endpoints and user interfaces in various forms (switches, sensors and mobile apps).
  • the bridge is connected to the router of the user and communicates to the light points. It is capable of running schedules and home automation rules. In this way it acts as the intelligence of the system. All user interfaces connect to the bridge in order to actuate the lights.
  • WO 2018/037009 A1 discloses a controller with a display for displaying a plurality of names, wherein each name is rendered with a text property representative of the respective relevance value of the associated light scene and in a text color that matches a light color of the associated light scene and wherein a name can be selected in order to activate the associated light scene.
  • WO 2011/013035 A1 discloses an atmosphere program management system comprising a server, which stores atmosphere programs, and a remote management client for accessing the server and providing a user interface for managing the atmosphere programs stored by the server.
  • the system comprises at least one processor configured to determine one or more settings for one or more lights, said one or more settings comprising one or more chromaticity settings and/or one or more brightness settings, determine one or more words from said one or more settings, determine a name based on a plurality of words, said plurality of words including at least one of said one or more words, output said name, associate said name with said one or more settings, and allow a user to activate said one or more settings on said one or more lights by selecting said name.
  • Said name preferably comprises two words or three words.
  • the inventor has recognized that by automatically determining a name based on a plurality of words of which at least one word is based on the content of the light scene, i.e. of the light effect to be rendered, a name may be determined that is both unique and easy to remember or recognize and therefore suitable for activating the associated light scene.
  • said one or more settings for said one or more lights are preferably determined based on user input, e.g. the user selects the one or more settings directly (by selecting brightness and chromaticity values) or indirectly (by selecting an image).
  • Determining one or more words from said one or more settings may comprise selecting a subset of said one or more settings from which to determine said one or more words.
  • Determining a name based on a plurality of words may comprise selecting a subset of said plurality of words. The name is output in order to inform the user which name has been determined automatically and to allow the user to select the desired name when he wants to activate the one or more settings on the one or more lights.
  • Said one or more lights may comprise a plurality of lights and said at least one processor may be configured to determine at least one of said one or more words by determining an average of settings of a first type for said plurality of lights and/or by determining one or more differences between settings of said first type or of a second type for said plurality of lights. If a light scene is defined for a plurality of lights with different light settings, determining a word for a setting of each light (e.g. “Bright” + “Blue” + “Dimmed” + “Red”) would result in a name that may not be that easy to remember or recognize. In this case, it is preferable to determine an average of (the same type of) settings for these lights and/or one or more differences between (the same type of) settings for these lights, which might result in the name “Chaotic Magenta”, for example.
  • Said at least one processor may be configured to determine said one or more differences between said settings of said first type or of said second type by determining a standard deviation of said settings of said first type or of said second type. For example, a standard deviation between brightness settings (min. 0% and max. 100%) in a light scene may be determined to be an X% difference in brightness and based on the value of X, the difference may be determined to be large or small (or optionally, neither small nor large).
  • Said one or more lights may comprise a plurality of lights and said at least one processor may be configured to determine a measure of one or more differences between settings of a first type for said plurality of lights and decide not to determine any of said one or more words based on an average of settings of said first type upon determining that said measure exceeds a certain threshold. If there are significant differences in settings of a certain type, then it is still possible to determine an average of these settings, but a word determined based on this average will not be a good representation of the light scene as experienced by the user and may therefore be more difficult to remember or recognize.
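To make the averaging-versus-difference logic above concrete, a minimal sketch follows; the threshold values and the chosen words are illustrative assumptions, not values prescribed by the disclosure:

```python
import statistics

LARGE_DIFFERENCE = 30  # assumed threshold, in percentage points

def word_from_brightness_settings(brightness_settings):
    """Pick a scene word from per-light brightness settings (0-100%).

    If the lights differ a lot, a difference-based word is used, because
    a word based on the average would not match what the user sees.
    """
    spread = statistics.pstdev(brightness_settings)  # difference measure
    if spread > LARGE_DIFFERENCE:
        return "Chaotic"                             # difference-based word
    mean = statistics.mean(brightness_settings)
    if mean >= 80:
        return "Bright"                              # average-based word
    if mean <= 30:
        return "Dimmed"
    return None                                      # no word for this type

print(word_from_brightness_settings([90, 15]))  # spread 37.5 -> 'Chaotic'
```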
  • Said plurality of words may include a further word and said at least one processor may be configured to determine at least one of said one or more settings and said further word by analyzing an image.
  • This is an easy way of creating one or more settings: e.g. a yellowish chromaticity setting may be determined from an image of a beach and a blueish chromaticity setting may be determined from an image of a sea.
  • Said plurality of words may include a further word and said at least one processor may be configured to determine said further word based on a location of at least one of said one or more lights.
  • By including a word determined based on a location (e.g. the name of a room or zone) of at least one of the one or more lights, the name of the light scene becomes easier to remember or recognize, and a larger variation in light scene names can be generated.
  • said plurality of words may include one or more further words which are based on one or more characteristics of at least one of said lights and/or at least one lighting device comprising said one or more lights, e.g. a luminaire comprising a light bulb.
  • luminaires may each have their own identity.
  • Certain luminaires may be more modern, whereas others may be retro/vintage, and yet others may have a contemporary design. These designs may come with associated adjectives that can be chosen if the luminaire is used in a scene. These one or more characteristics may be obtained from said one or more lights and/or said one or more light devices or may be determined based on information obtained from said one or more lights and/or said one or more light devices. As an example of the latter, a model number may be obtained from a luminaire and a corresponding characteristic may be obtained from a database.
  • Said at least one processor may be configured to allow said user to modify said name before associating said name with said one or more settings. This allows the user to modify the generated light scene name if it is not to his liking.
  • Said at least one processor may be configured to linguistically analyze said name and improve said name based on said linguistic analysis. For example, the name “dynamic happy reading” may be replaced with “cheerful reading”, because it sounds better.
  • Said at least one processor may be configured to obtain other names associated with light settings and determine said name from said plurality of words such that said name is distinguishing over said other names. Although determining the name of the light scene (one or more light settings) as explained above should normally automatically result in a unique name, a comparison with the names of other light scenes may be performed to verify this.
  • Said at least one processor may be configured to determine said one or more words from said one or more settings using one or more mappings from light setting-derived parameter to word. For example, an (average) yellow tone may be mapped to the word “happy”, an (average) red/orange tone may be mapped to the word “romantic”, a small difference in brightness setting may be mapped to the word “tranquil” or “calm”, and a large difference in brightness settings may be mapped to the word “chaotic”.
  • Said one or more mappings may be selected from a plurality of mappings based on at least one of a user location and a user language. Which words are liked by a person may depend on the language he uses and his cultural values, which may be derived from this location and/or his language (setting).
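As a sketch, such locale-dependent mappings could simply be lookup tables keyed by language; the table contents and locale keys below are illustrative assumptions, since the disclosure only requires that some mapping exists per location/language:

```python
# Assumed word tables, keyed by user language.
MAPPINGS = {
    "en": {
        "yellow_tone": "Happy",
        "red_orange_tone": "Romantic",
        "small_brightness_difference": "Tranquil",
        "large_brightness_difference": "Chaotic",
    },
    "nl": {
        "yellow_tone": "Vrolijk",
        "red_orange_tone": "Romantisch",
        "small_brightness_difference": "Rustig",
        "large_brightness_difference": "Chaotisch",
    },
}

def select_mapping(user_language, user_location=None):
    # Fall back to English when no table exists for the user's locale.
    return MAPPINGS.get(user_language, MAPPINGS["en"])

print(select_mapping("nl")["large_brightness_difference"])  # 'Chaotisch'
```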
  • the method comprises determining one or more settings for one or more lights, said one or more settings comprising one or more chromaticity settings and/or one or more brightness settings, determining one or more words from said one or more settings, determining a name based on a plurality of words, said plurality of words including at least one of said one or more words, outputting said name, associating said name with said one or more settings, and allowing a user to activate said one or more settings on said one or more lights by selecting said name.
  • Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
  • a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage-medium storing the computer program are provided.
  • a computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
  • a non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations comprising: determining one or more settings for one or more lights, said one or more settings comprising one or more chromaticity settings and/or one or more brightness settings, determining one or more words from said one or more settings, determining a name based on a plurality of words, said plurality of words including at least one of said one or more words, outputting said name, associating said name with said one or more settings, and allowing a user to activate said one or more settings on said one or more lights by selecting said name.
  • aspects of the present invention may be embodied as a device, a method or a computer program product.
  • aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system."
  • Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer.
  • aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon. Any combination of one or more computer readable medium(s) may be utilized.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Fig. 1 shows an example of an environment in which the invention may be used.
  • Fig. 2 is a block diagram of an embodiment of the system of the invention.
  • Fig. 3 is a flow diagram of a first embodiment of the method of the invention.
  • Fig. 4 shows an example of a user interface for identifying lamps present in an environment.
  • Fig. 5 shows a first part of an example of a user interface for defining a light scene.
  • Fig. 6 shows a second part of the example of the user interface of Fig. 5.
  • Fig. 7 is a flow diagram of a second embodiment of the method of the invention.
  • Fig. 8 is a flow diagram of a third embodiment of the method of the invention.
  • Fig. 9 is a block diagram of an exemplary data processing system for performing the method of the invention.
  • Fig. 1 shows a first example of an environment in which the invention may be used: a home 11 with a hall 13, a kitchen 14 and a living room 15.
  • the kitchen 14 comprises a light 29.
  • the living room 15 comprises two lights: a light 27 to the left of a television and a light 28 to the right of the television.
  • a person 19 is standing in the living room 15 holding mobile device 1.
  • a bridge 25, e.g. a Philips Hue bridge, is connected to a wireless LAN access point 17, e.g. via Ethernet.
  • the bridge 25 communicates with the lights 27-29 wirelessly, e.g. using Zigbee technology.
  • the lights 27-29 may be Philips Hue lights, for example.
  • a smart speaker 21 is present as well.
  • the smart speaker 21 and the mobile device 1 are wirelessly connected to the wireless LAN access point 17 as well, e.g., via Wi-Fi (IEEE 802.11).
  • the mobile device 1 is able to control lights 27-29 via the wireless LAN access point 17 and the bridge 25.
  • the smart speaker 21 is able to control the lights 27-29 by transmitting signals to the bridge 25.
  • in an alternative embodiment, the mobile device 1 and the smart speaker 21 are able to control lights 27-29 without the use of a bridge.
  • the mobile device 1 comprises a receiver 3, a processor 5, memory 7 and a display 9, see Fig. 2.
  • the processor 5 is configured to determine settings for lights 27 and 28, i.e. to determine a light scene involving lights 27 and 28.
  • the one or more settings comprise one or more chromaticity settings and/or one or more brightness settings.
  • the processor 5 is further configured to determine one or more words from the settings and determine a name based on a plurality of words.
  • the plurality of words includes at least one of the one or more words.
  • the processor 5 is further configured to output the name, associate the name with the settings (i.e. the light scene), and allow a user to activate the settings on the lights 27 and 28 by selecting the name, e.g. by selecting the name from a list of names on the display 9.
  • the mobile device 1 stores the name associated with the settings on a server 31 on the Internet along with identifiers of the lights associated with the settings, i.e. identifiers of the lights 27 and 28, and an identifier of the light scene.
  • this name associated with the settings may be stored on the mobile device 1 itself, e.g. by an app running on the mobile device 1.
  • the settings relevant for light 27 are stored on light 27 along with the identifier of the light scene.
  • the settings relevant for light 28 are stored on light 28 along with the identifier of the light scene.
  • the mobile device 1 and the smart speaker 21 are able to activate the light scene by transmitting a command comprising the identifier of the light scene to the lights 27 and 28 via the bridge 25.
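For illustration, recalling a stored scene through a Hue-style bridge can be done with a single HTTP request. This is a minimal sketch assuming the classic Hue bridge REST API and the third-party requests library; the bridge address, username and scene identifier are placeholders:

```python
import requests  # third-party HTTP client

BRIDGE_IP = "192.168.1.2"        # placeholder: bridge address on the LAN
USERNAME = "authorized-app-key"  # placeholder: bridge API username

def activate_scene(scene_id, group_id="0"):
    """Ask the bridge to recall a scene on the lights stored with it.

    Group "0" addresses all lights known to the bridge; the lights then
    apply the settings they stored for this scene identifier.
    """
    url = f"http://{BRIDGE_IP}/api/{USERNAME}/groups/{group_id}/action"
    return requests.put(url, json={"scene": scene_id}, timeout=5)
```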
  • the system of the invention comprises the mobile device 1, the Internet server 31, the smart speaker 21, the bridge 25 and the lights 27-29.
  • the system may comprise a subset of these devices, for example.
  • in the embodiment shown in Fig. 2, the mobile device 1 comprises one processor 5; in an alternative embodiment, the mobile device 1 comprises multiple processors.
  • the processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from ARM or Qualcomm, or an application-specific processor.
  • the processor 5 of the mobile device 1 may run an iOS, Windows or Android operating system for example.
  • the transceiver 3 may use one or more wireless communication technologies to communicate with the wireless LAN access point 17, for example.
  • in an alternative embodiment, multiple transceivers are used instead of a single transceiver.
  • in the embodiment shown in Fig. 2, a receiver and a transmitter have been combined into a transceiver 3; in an alternative embodiment, one or more separate receiver components and one or more separate transmitter components are used.
  • the memory 7 may comprise one or more memory units.
  • the memory 7 may comprise solid state memory, for example.
  • the memory 7 may be used to store apps and data, for example.
  • the display 9 may comprise an LCD or OLED display panel, for example.
  • the display 9 may be a touch screen, for example.
  • the processor 5 may use this touch screen to provide a user interface, for example.
  • the mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector.
  • the invention may be implemented using a computer program running on one or more processors.
  • a first embodiment of the method of the invention is shown in Fig. 3.
  • a step 101 comprises determining one or more settings for one or more lights.
  • the one or more settings comprise one or more chromaticity settings and/or one or more brightness settings.
  • a step 103 comprises determining one or more words from the one or more settings.
  • a step 105 comprises determining a name based on a plurality of words.
  • the plurality of words includes at least one of the one or more words.
  • each suitable word that can be determined from the one or more settings is determined from the one or more settings. Since the name preferably comprises two or three words, typically a selection of a subset of the determined words needs to be made in step 105. In an alternative embodiment, only a subset of the suitable words that can be determined from the one or more settings is determined in step 103.
  • a step 107 comprises outputting the name.
  • a step 109 comprises associating the name with the one or more settings.
  • a step 111 comprises allowing a user to activate the one or more settings on the one or more lights by selecting the name.
  • the name comprises two words or three words. Since the name is determined automatically in step 105, outputting the name in step 107 allows the user to know which name corresponds to the light scene and to select the desired name in step 111 when he wants to activate the light scene.
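Taken together, steps 101-111 form a small pipeline. The toy implementation below strings them together with deliberately simplistic word rules (a brightness difference of more than 30 percentage points yields "Dynamic"); all rules, values and names are illustrative assumptions:

```python
scene_store = {}  # name -> (light ids, per-light settings)

def determine_words(settings):
    """Step 103 (toy rule): one word from the brightness difference."""
    bris = [s["brightness"] for s in settings]
    return ["Dynamic"] if max(bris) - min(bris) > 30 else ["Tranquil"]

def create_scene(light_ids, settings, room_word):
    name = " ".join(determine_words(settings) + [room_word])  # step 105
    print("Proposed name:", name)                             # step 107
    scene_store[name] = (light_ids, settings)                 # step 109
    return name

def activate(name):                                           # step 111
    light_ids, settings = scene_store[name]
    for lid, s in zip(light_ids, settings):
        print(f"light {lid}: chromaticity {s['xy']}, brightness {s['brightness']}%")

name = create_scene(
    [27, 28],
    [{"xy": (0.67, 0.32), "brightness": 70},   # light 27: red, 70%
     {"xy": (0.56, 0.41), "brightness": 30}],  # light 28: orange, 30%
    "Reading")
activate(name)  # proposed name: 'Dynamic Reading'
```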
  • Figs. 5 and 6 show an example of a user interface for defining a light scene comprising one or more light settings for one or more lights. In this example, these one or more lights have been identified previously, e.g. with the user interface of Fig. 4.
  • Fig. 4 depicts a “Setup” screen 51 being displayed on display 9 of the mobile device 1.
  • Lights can be added to the user’s configuration, in this example for his home, using the button 56.
  • This button has been used to add three lights to the configuration.
  • the light 27 of Fig. 1 has been placed in the group 53 titled “Living Room”, has been named “Hue Bloom-1”, as shown in sub-area 62, and has been assigned icon 61.
  • the light 28 of Fig. 1 has also been placed in the group 53 titled “Living Room”, has been named “Hue Bloom-2”, as shown in sub-area 65, and has been assigned icon 64.
  • the light 29 of Fig. 1 has been placed in the group 54 titled “Kitchen”, has been named “Hue Go”, as shown in sub-area 68, and has been assigned icon 67.
  • a scan for lights that are not listed in the user interface yet may be performed. This scan may involve contacting the bridge 25. New lights may automatically be added to a group titled “Unassigned” (not shown).
  • A first part of the example of the user interface for defining a light scene is shown in Fig. 5.
  • Screen 71 allows the user to place icons of lights that he wants to use in the light scene in a color circle 73.
  • icons 61 and 64 of lights 27 and 28, respectively, have been placed in the color circle 73, while icon 67 of light 29 has not been placed in the color circle 73 and is therefore not controlled as part of the light scene.
  • the position where an icon is placed in the color circle 73 determines which chromaticity setting is selected for the light to which the icon corresponds.
  • icon 61 has been placed at a position representing the color red
  • icon 64 has been placed at a position representing the color orange.
  • A second part of the example of the user interface for defining a light scene is shown in Fig. 6.
  • Screen 81 allows the user to place icons of lights that he wants to use in the light scene on a bar 83.
  • icons 61 and 64 of lights 27 and 28, respectively, have been placed on bar 83, while icon 67 of light 29 has not been placed on bar 83 and is therefore not controlled as part of the light scene.
  • the position where an icon is placed on the bar 83 determines which brightness setting from 0% to 100% is selected for the light to which the icon corresponds.
  • icon 61 has been placed at a position representing a brightness of 70% and icon 64 has been placed at a position representing a brightness of 30%.
  • settings are defined per light. In an alternative embodiment, one or more of the settings may be defined for a group of lights, e.g. for all the lights involved in the scene.
  • step 101 of Fig. 3 comprises a step 121 and step 103 of Fig. 3 comprises a step 123.
  • Step 121 comprises determining at least one of the one or more settings by analyzing an image.
  • Step 123 comprises determining the one or more words from the one or more settings using one or more mappings from light setting-derived parameter to word.
  • the one or more mappings are selected from a plurality of mappings based on at least one of a user location and a user language.
  • Step 124 comprises determining at least one of the further words by analyzing the image that was analyzed in step 121 (e.g. by extracting key features) or based on the analysis already performed in step 121.
  • Microsoft uses such technology in certain versions of Office applications to automatically generate descriptions (alternative text) for images, for example.
  • Step 125 comprises determining at least one of the further words based on a location of at least one of the one or more lights.
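A simple way to derive a chromaticity setting from an image (step 121) is to average its pixels; the sketch below uses the Pillow imaging library for this. Deriving a further word from the image content (step 124) would additionally require an image-tagging model or service, which is outside this sketch:

```python
from PIL import Image  # Pillow imaging library

def chromaticity_from_image(path):
    """Average RGB color of an image, usable as a chromaticity setting."""
    img = Image.open(path).convert("RGB")
    pixels = list(img.getdata())
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

# e.g. a beach photo might average to a yellowish tone such as (215, 190, 120)
```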
  • in step 105, the name is determined based on the plurality of words.
  • this step comprises a sub step 127 of linguistically analyzing the name and improving the name based on the linguistic analysis.
  • the proposed scene name may be modified to ensure that the recommendation does not contain any spelling or syntactical errors and/or a natural language engine may be used to enhance the combination of words.
  • This natural language engine may use artificial intelligence or may simply map a phrase (e.g. two words) to a better phrase (e.g. one or two better words), for example. For example, “dynamic happy reading” may not sound good and may therefore be adjusted to “dynamic joyful reading”, or “dynamic happy” may even be combined into “cheerful”. Result: “Cheerful reading”.
  • a grammar check may also be performed.
  • pronounceability of the name may also be checked and improved if necessary and possible.
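In the simplest form mentioned above, the phrase-to-phrase mapping of sub step 127 is a lookup table; the entries below are illustrative assumptions, and a real engine might use NLP instead:

```python
# Assumed phrase improvements: awkward word pair -> better phrase.
PHRASE_MAP = {
    ("dynamic", "happy"): ("cheerful",),
}

def improve_name(words):
    """Replace known awkward word pairs with better-sounding phrases."""
    words = [w.lower() for w in words]
    out, i = [], 0
    while i < len(words):
        pair = tuple(words[i:i + 2])
        if pair in PHRASE_MAP:
            out.extend(PHRASE_MAP[pair])
            i += 2
        else:
            out.append(words[i])
            i += 1
    return " ".join(out).title()

print(improve_name(["Dynamic", "Happy", "Reading"]))  # 'Cheerful Reading'
```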
  • Step 105 further comprises a sub step 129.
  • Step 129 comprises obtaining other names associated with light settings and determining the name from the plurality of words such that the name is distinguishing over the other names.
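One way to realize step 129 is to try word combinations in preference order until one does not collide with an existing scene name; this is a sketch of one possible strategy, not the only one:

```python
from itertools import combinations

def distinct_name(candidate_words, existing_names, preferred_length=3):
    """Pick a combination of candidate words not used by another scene."""
    for k in (preferred_length, preferred_length - 1, preferred_length + 1):
        for combo in combinations(candidate_words, max(k, 1)):
            name = " ".join(combo)
            if name not in existing_names:
                return name
    return None  # caller could then ask the user for a manual name

print(distinct_name(["Dynamic", "Romantic", "Reading"],
                    {"Dynamic Romantic Reading"}))  # 'Dynamic Romantic'
```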
  • step 107 of outputting the name is performed, followed by an additional step 131 of allowing the user to modify the name before associating the name with the one or more settings.
  • steps 109 and 111 are performed.
  • several enhancements to the embodiment of Fig. 3 are being used in combination. In an alternative embodiment, only a subset of these enhancements is used.
  • step 101 of Fig. 3 comprises four sub steps 151-154.
  • steps 123 and 125 are also present in this third embodiment.
  • Step 123 comprises four sub steps 161-164.
  • a setting of a first type is determined for a first light, e.g. light 27 of Fig. 1.
  • this setting may be the chromaticity setting of the first light, e.g. a color with a red tone.
  • a setting of a first type is determined for a second light, e.g. light 28 of Fig. 1.
  • this setting may be the chromaticity setting of the second light, e.g. a color with an orange tone.
  • a setting of a second type is determined for the first light.
  • this setting may be the brightness setting of the first light, e.g. 70%.
  • a setting of a second type is determined for the second light.
  • this setting may be the brightness setting of the second light, e.g. 30%.
  • Step 161 comprises determining a word by determining an average of the settings determined in steps 151 and 152 by using a first mapping from light setting-derived parameter to word.
  • the chromaticity setting may be mapped, for example, to a mood: e.g. happy (yellow tones), romantic (red/orange tones), angry (bright red tones), energetic (yellow/green tones) or to a description of the color palette used: e.g. vibrant, pastel, dark, light, vivid, warm, cool.
  • the first mapping comprises mappings from range of chromaticity values to word. For example, an (average) color with an orange-red tone may fall in a range that has been mapped to the word “Romantic”.
  • HTML color names are used to map an (average) chromaticity value to a word.
  • Step 162 comprises determining differences between the settings determined in steps 151 and 152 by using a second mapping from light setting-derived parameter to word. Since there are only two lights and therefore two chromaticity settings, a measure of the difference is relatively easy to determine. For example, a distance between two vectors in color space (e.g. CIE Lab or RGB color space) can be determined and compared to a maximum distance in this color space to determine a measure of the difference. A difference of 0-10% may be mapped to the word “Harmonic”, a difference higher than 50% may be mapped to the word “Chaotic” and a difference between 10% and 50% may not be mapped to any word, for example.
  • the differences between settings of the same type may be determined by determining a standard deviation of these settings.
  • a standard deviation between brightness settings (min. 0% and max. 100%) in a light scene may be determined to be an X% difference in brightness and based on the value of X, the difference may be determined to be large or small (or optionally, neither small nor large).
  • the average of the settings of the same type is determined. This average may be the mean, mode or median of the settings, for example.
  • Step 164 comprises determining differences between the settings determined in steps 153 and 154 by using a fourth mapping from light setting-derived parameter to word. Since there are only two lights and therefore two brightness settings, a measure of the difference is relatively easy to determine. For example, the absolute value of a difference between the two brightness values may be determined, e.g. the absolute value of 70% minus 30%. A difference of 0-10% may be mapped to the word “Tranquil”, “Easy” or “Mellow”, a difference higher than 30% may be mapped to the word “Chaotic” or “Dynamic” and a difference between 10% and 30% may not be mapped to any word, for example. These mappings may be determined by a vendor or manufacturer of a light system (e.g. including lights and an app), for example, and may even differ per language or culture. In an alternative embodiment, the differences between settings of the same type may be determined by determining a standard deviation of these settings.
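A sketch of the difference measure of step 162, using plain RGB for simplicity (the disclosure also mentions CIE Lab); the 10%/50% thresholds follow the example above:

```python
import math

MAX_RGB_DISTANCE = math.sqrt(3 * 255 ** 2)  # largest distance in the RGB cube

def chromaticity_difference_word(rgb_a, rgb_b):
    """Map the normalized distance between two colors to a word, or None."""
    ratio = math.dist(rgb_a, rgb_b) / MAX_RGB_DISTANCE
    if ratio <= 0.10:
        return "Harmonic"
    if ratio > 0.50:
        return "Chaotic"
    return None  # between 10% and 50%: no word

print(chromaticity_difference_word((255, 0, 0), (255, 165, 0)))
# red vs orange: ratio ~0.37 -> None (moderate difference)
```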
  • the difference measure determined in step 162 may be used to decide in step 161 whether or not a word should be determined: if the measure exceeds a certain threshold (e.g. 50%), a decision not to determine a word based on the average chromaticity may be made.
  • similarly, the difference measure determined in step 164 may be used to decide in step 163 whether or not a word should be determined: if the measure exceeds the threshold, a decision not to determine a word based on the average brightness may be made.
  • Step 163 comprises determining a word by determining an average of the settings determined in steps 151 and 152 by using a third mapping from light setting-derived parameter to word.
  • the third mapping maps ranges of brightness values to words. For example, a brightness of 0-30% may be mapped to the word “dimmed”, a brightness of 80-100% may be mapped to the word “bright” and a brightness between 30% and 80% may not be mapped to any word. If the above-mentioned threshold is 30% and the brightness settings for the first and second lights are 70% and 30% respectively, no word is output in step 163.
  • a further word is determined based on a location of at least one of the first and second lights.
  • the location may be a room or zone and may be mapped to an activity that takes place there.
  • the location of the first light and the second light may have been defined as“Living Room” with the user interface of Fig. 4.
  • the room “Living Room” may be mapped to the word “Reading” (or alternatively “Watching TV” or “Relaxing”).
  • the height of the lights may be defined or determined and a word may be determined based on this height. This height may be used to distinguish between functional lighting and ambient lighting, for example. If only ceiling lights are used, the settings are probably mostly functional. If a lot of standing lamps and wall washers are used, the settings are most likely ambient. This distinction between functional lighting and ambient lighting may also be made in a different manner.
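As a sketch, the functional-versus-ambient distinction could be made from mounting heights; the 180 cm cut-off and the resulting words are assumptions for illustration:

```python
CEILING_HEIGHT_CM = 180  # assumed cut-off between ceiling and standing lamps

def lighting_character_word(heights_cm):
    """Guess a word from light heights: all ceiling lights -> functional."""
    if all(h >= CEILING_HEIGHT_CM for h in heights_cm):
        return "Functional"
    return "Ambient"

print(lighting_character_word([240, 240]))      # 'Functional'
print(lighting_character_word([240, 45, 110]))  # 'Ambient'
```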
  • In step 105, a subset of the words determined in step 125 and steps 161-165 is selected and a (scene) name is determined by combining the selected words.
  • If a word has been determined based on the location of the lights (step 125), e.g. “Reading”, then this word is included in the name.
  • Furthermore, some of the words that have been determined from the settings (in steps 161-164) are included in the name. This selection may be performed randomly or in a specified order, e.g. first the difference(s) between the brightness settings, e.g. “Dynamic”, then the average/overall brightness setting (not used if there are relatively large differences between the brightness settings), then the average chromaticity setting, e.g. “Romantic”.
  • a further word is determined by analyzing an image (e.g. step 124 of Fig. 7) and this word is then preferred over words that have been determined from settings that have not been determined by analyzing an image.
  • the name determined in step 105 may be “Dynamic Romantic Reading”, for example.
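The selection and combination of step 105 might then look as follows; the ordering mirrors the preference described above (location word always included, image-derived word preferred, at most three words), while the helper names are hypothetical:

```python
def assemble_name(location_word, setting_words, image_word=None, max_words=3):
    """Combine selected words into a scene name (step 105)."""
    # An image-derived word, when available, is preferred over words
    # that were determined from settings not derived from an image.
    words = [image_word] if image_word else list(setting_words)
    words = [w for w in words if w][:max_words - 1]  # leave room for location
    if location_word:
        words.append(location_word)
    return " ".join(words)

print(assemble_name("Reading", ["Dynamic", "Romantic"]))
# -> 'Dynamic Romantic Reading'
```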
  • Fig. 9 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 3, 7 and 8.
  • the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via a system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.
  • the memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310.
  • the local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code.
  • a bulk storage device may be implemented as a hard drive or other persistent data storage device.
  • the processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution.
  • the processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
  • I/O devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system.
  • input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like.
  • output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
  • the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 9 with a dashed line surrounding the input device 312 and the output device 314).
  • a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”.
  • input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
  • a network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks.
  • the network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks.
  • Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
  • the memory elements 304 may store an application 318.
  • the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices.
  • the data processing system 300 may further execute an operating system (not shown in Fig. 9) that can facilitate execution of the application 318.
  • the application 318 being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.
  • Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein).
  • the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal.
  • the program(s) can be contained on a variety of transitory computer-readable storage media.
  • Illustrative computer-readable storage media include, but are not limited to: (i) non- writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
  • the computer program may be run on the processor 302 described herein.

Landscapes

  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

A method comprises determining (101) one or more settings for one or more lights. The one or more settings comprise one or more chromaticity settings and/or one or more brightness settings. The method further comprises determining (123) one or more words from the one or more settings. The method further comprises determining (105) a name based on a plurality of words that includes at least one of the one or more words. The method further comprises outputting the name, associating the name with the one or more settings, and allowing a user to activate the one or more settings on the one or more lights by selecting the name.

Description

Activating one or more light settings associated with an automatically determined name
FIELD OF THE INVENTION
The invention relates to a system for activating a setting on a light.
The invention further relates to a method of activating a setting on a light.
The invention also relates to a computer program product enabling a computer system to perform such a method.
BACKGROUND OF THE INVENTION
Philips Hue is a consumer connected lighting solution. The system consists of a central controller, named a bridge, wirelessly controllable light endpoints and user interfaces in various forms (switches, sensors and mobile apps). The bridge is connected to the router of the user and communicates to the light points. It is capable of running schedules and home automation rules. In this way it acts as the intelligence of the system. All user interfaces connect to the bridge in order to actuate the lights.
Within the Philips Hue system users can create their own ‘scenes’. A scene directly activates the preferred light settings. Scenes are typically created by selecting colors for the lights from a color wheel, or through an image. Each scene is typically associated with a name and this name can be used to activate the scene, i.e. to activate the light settings in the scene on one or more lights. For example, WO 2018/037009 A1 discloses a controller with a display for displaying a plurality of names, wherein each name is rendered with a text property representative of the respective relevance value of the associated light scene and in a text color that matches a light color of the associated light scene and wherein a name can be selected in order to activate the associated light scene.
Currently, when the user creates a new scene, it is given a default name such as “New scene”, “Scene X” or “Custom scene”. This creates various problems and annoyances: users are forced to rename the scene; if users do not rename the scene, they may end up with multiple scenes with the same name; scenes may not be recognizable by their name; and when using voice control (currently highly popular), users may have too many scenes that are alike, which may make voice recognition difficult. US 9839089 B1 discloses a method in which a user can manually input a name for a lighting scene in an app, which allows the lighting scene to be recalled.
WO 2011/013035 A1 discloses an atmosphere program management system comprising a server, which stores atmosphere programs, and a remote management client for accessing the server and providing a user interface for managing the atmosphere programs stored by the server.
SUMMARY OF THE INVENTION
It is a first object of the invention to provide a system for activating a setting on a light, which can be used to generate a suitable name for a light scene and to activate said light scene upon selection of said name.
It is a second object of the invention to provide a method of activating a setting on a light, which can be used to generate a suitable name for a light scene and to activate said light scene upon selection of said name.
In a first aspect of the invention, the system comprises at least one processor configured to determine one or more settings for one or more lights, said one or more settings comprising one or more chromaticity settings and/or one or more brightness settings, determine one or more words from said one or more settings, determine a name based on a plurality of words, said plurality of words including at least one of said one or more words, output said name, associate said name with said one or more settings, and allow a user to activate said one or more settings on said one or more lights by selecting said name. Said name preferably comprises two words or three words.
The inventor has recognized that by automatically determining a name based on a plurality of words of which at least one word is based on the content of the light scene, i.e. of the light effect to be rendered, a name may be determined that is both unique and easy to remember or recognize and therefore suitable for activating the associated light scene.
Preferably, said one or more settings for said one or more lights are determined based on user input. For example, a user selects the one or more settings directly (e.g. by selecting brightness and chromaticity values) or indirectly (e.g. by selecting an image). Typically, different functions are used to determine different words from the settings. Determining one or more words from said one or more settings may comprise selecting a subset of said one or more settings from which to determine said one or more words. Determining a name based on a plurality of words may comprise selecting a subset of said plurality of words. The name is output in order to inform the user which name has been determined automatically and to allow the user to select the desired name when he wants to activate the one or more settings on the one or more lights.
Said one or more lights may comprise a plurality of lights and said at least one processor may be configured to determine at least one of said one or more words by determining an average of settings of a first type for said plurality of lights and/or by determining one or more differences between settings of said first type or of a second type for said plurality of lights. If a light scene is defined for a plurality of lights with different light settings, determining a word for a setting of each light (e.g. “Bright” + “Blue” + “Dimmed” + “Red”) would result in a name that may not be that easy to remember or recognize. In this case, it is preferable to determine an average of (the same type of) settings for these lights and/or one or more differences between (the same type of) settings for these lights, which might result in the name “Chaotic Magenta”, for example.
Said at least one processor may be configured to determine said one or more differences between said settings of said first type or of said second type by determining a standard deviation of said settings of said first type or of said second type. For example, a standard deviation between brightness settings (min. 0% and max. 100%) in a light scene may be determined to be an X% difference in brightness and based on the value of X, the difference may be determined to be large or small (or optionally, neither small nor large).
Said one or more lights may comprise a plurality of lights and said at least one processor may be configured to determine a measure of one or more differences between settings of a first type for said plurality of lights and decide not to determine any of said one or more words based on an average of settings of said first type upon determining that said measure exceeds a certain threshold. If there are significant differences in settings of a certain type, then it is still possible to determine an average of these settings, but a word determined based on this average will not be a good representation of the light scene as experienced by the user and may therefore be more difficult to remember or recognize.
Said plurality of words may include a further word and said at least one processor may be configured to determine at least one of said one or more settings and said further word by analyzing an image. This is an easy way of creating one or more settings, e.g. a yellowish chromaticity setting may be determined from an image of a beach and a blueish chromaticity setting may be determined from an image of a sea.
Said plurality of words may include a further word and said at least one processor may be configured to determine said further word based on a location of at least one of said one or more lights. By including a word determined based on a location (e.g. name of a room or zone) of at least one of the one or more lights, the name of the light scene becomes easier to remember or recognize, and a larger variation in light scene names can be generated. Alternatively or additionally, said plurality of words may include one or more further words which are based on one or more characteristics of at least one of said lights and/or at least one lighting device comprising said one or more lights, e.g. a luminaire comprising a light bulb. For example, luminaires may each have their own identity. Certain luminaires may be more modern, whereas others may be retro/vintage, and yet others may have a contemporary design. These designs may come with associated adjectives that can be chosen if the luminaire is used in a scene. These one or more characteristics may be obtained from said one or more lights and/or said one or more light devices or may be determined based on information obtained from said one or more lights and/or said one or more light devices. As an example of the latter, a model number may be obtained from a luminaire and a corresponding characteristic may be obtained from a database.
Said at least one processor may be configured to allow said user to modify said name before associating said name with said one or more settings. This allows the user to modify the generated light scene name if it is not to his liking.
Said at least one processor may be configured to linguistically analyze said name and improve said name based on said linguistic analysis. For example, the name “dynamic happy reading” may be replaced with “cheerful reading”, because it sounds better.
Said at least one processor may be configured to obtain other names associated with light settings and determine said name from said plurality of words such that said name is distinguishing over said other names. Although determining the name of the light scene (one or more light settings) as explained above should normally automatically result in a unique name, a comparison with the names of other light scenes may be performed to verify this.
Said at least one processor may be configured to determine said one or more words from said one or more settings using one or more mappings from light setting-derived parameter to word. For example, an (average) yellow tone may be mapped to the word “happy”, an (average) red/orange tone may be mapped to the word “romantic”, a small difference in brightness setting may be mapped to the word “tranquil” or “calm”, and a large difference in brightness settings may be mapped to the word “chaotic”.
Said one or more mappings may be selected from a plurality of mappings based on at least one of a user location and a user language. Which words are liked by a person may depend on the language he uses and his cultural values, which may be derived from this location and/or his language (setting).
In a second aspect of the invention, the method comprises determining one or more settings for one or more lights, said one or more settings comprising one or more chromaticity settings and/or one or more brightness settings, determining one or more words from said one or more settings, determining a name based on a plurality of words, said plurality of words including at least one of said one or more words, outputting said name, associating said name with said one or more settings, and allowing a user to activate said one or more settings on said one or more lights by selecting said name. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage-medium storing the computer program are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
A non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations comprising: determining one or more settings for one or more lights, said one or more settings comprising one or more chromaticity settings and/or one or more brightness settings, determining one or more words from said one or more settings, determining a name based on a plurality of words, said plurality of words including at least one of said one or more words, outputting said name, associating said name with said one or more settings, and allowing a user to activate said one or more settings on said one or more lights by selecting said name.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product.
Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system." Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:
Fig. 1 shows an example of an environment in which the invention may be used;
Fig. 2 is a block diagram of an embodiment of the system of the invention;
Fig. 3 is a flow diagram of a first embodiment of the method of the invention;
Fig. 4 shows an example of a user interface for identifying lamps present in an environment;
Fig. 5 shows a first part of an example of a user interface for defining a light scene;
Fig. 6 shows a second part of the example of the user interface of Fig. 5;
Fig. 7 is a flow diagram of a second embodiment of the method of the invention;
Fig. 8 is a flow diagram of a third embodiment of the method of the invention; and
Fig. 9 is a block diagram of an exemplary data processing system for performing the method of the invention.
Corresponding elements in the drawings are denoted by the same reference numeral.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Fig. 1 shows a first example of an environment in which the invention may be used: a home 11 with a hall 13, a kitchen 14 and a living room 15. The kitchen 14 comprises a light 29. The living room 15 comprises two lights: a light 27 to the left of a television and a light 28 to the right of the television. A person 19 is standing in the living room 15 holding mobile device 1. A bridge 25, e.g. a Philips Hue bridge, is connected to a wireless LAN access point 17, e.g. via Ethernet. The bridge 25 communicates with the lights 27-29 wirelessly, e.g. using Zigbee technology. The lights 27-29 may be Philips Hue lights, for example.
A smart speaker 21 is present as well. The smart speaker 21 and the mobile device 1 are wirelessly connected to the wireless LAN access point 17 as well, e.g., via Wi-Fi (IEEE 802.11). The mobile device 1 is able to control lights 27-29 via the wireless LAN access point 17 and the bridge 25. The smart speaker 21 is able to control the lights 27-29 by transmitting signals to the bridge 25. In an alternative embodiment, the mobile device 1 and the smart speaker 21 are able to control lights 27-29 without the use of a bridge.
The mobile device 1 comprises a receiver 3, a processor 5, memory 7 and a display 9, see Fig. 2. The processor 5 is configured to determine settings for lights 27 and 28, i.e. to determine a light scene involving lights 27 and 28. The one or more settings comprise one or more chromaticity settings and/or one or more brightness settings. The processor 5 is further configured to determine one or more words from the settings and determine a name based on a plurality of words. The plurality of words includes at least one of the one or more words. The processor 5 is further configured to output the name, associate the name with the settings (i.e. the light scene), and allow a user to activate the settings on the lights 27 and 28 by selecting the name, e.g. by selecting the name from a list of names on the display 9.
In the embodiment of Fig. 2, the mobile device 1 stores the name associated with the settings on a server 31 on the Internet 19 along with identifiers of the lights associated with the settings, i.e. identifiers of the lights 27 and 28, and an identifier of the light scene. Alternatively or additionally, this name associated with the settings may be stored on the mobile device 1 itself, e.g. by an app running on the mobile device 1. The settings relevant for light 27 are stored on light 27 along with the identifier of the light scene. The settings relevant for light 28 are stored on light 28 along with the identifier of the light scene. The mobile device 1 and the smart speaker 21 are able to activate the light scene by transmitting a command comprising the identifier of the light scene to the lights 27 and 28 via the bridge 25. In the embodiment of Fig. 2, the system of the invention comprises the mobile device 1, the Internet server 31, the smart speaker 21, the bridge 25 and the lights 27-29. In an alternative embodiment, the system may comprise a subset of these devices, for example.
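By way of illustration only, the stored association and the activation command might resemble the following sketch. The field and function names are hypothetical and do not come from any actual bridge API:

```python
# A minimal sketch of the data that could be stored for a named light scene.
# All field and variable names are illustrative, not taken from a product API.
scene = {
    "scene_id": "scene-0001",            # identifier transmitted to the lights on activation
    "name": "Dynamic Romantic Reading",  # automatically determined name (see below)
    "light_ids": ["light-27", "light-28"],
}

def activate_scene(scene_id: str, light_ids: list[str]) -> None:
    """Send an activation command for the given scene to each light (stub)."""
    for light_id in light_ids:
        print(f"bridge -> {light_id}: activate {scene_id}")

activate_scene(scene["scene_id"], scene["light_ids"])
```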
In the embodiment shown in Fig. 2, the mobile device 1 comprises one processor 5. In an alternative embodiment, the mobile device 1 comprises multiple processors. The processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from ARM or Qualcomm, or an application-specific processor. The processor 5 of the mobile device 1 may run an iOS, Windows or Android operating system for example.
The transceiver 3 may use one or more wireless communication technologies to communicate with the wireless LAN access point 17, for example. In an alternative embodiment, multiple transceivers are used instead of a single transceiver. In the embodiment shown in Fig. 2, a receiver and a transmitter have been combined into a transceiver 3. In an alternative embodiment, one or more separate receiver components and one or more separate transmitter components are used.
The memory 7 may comprise one or more memory units. The memory 7 may comprise solid state memory, for example. The memory 7 may be used to store apps and data, for example. The display 9 may comprise an LCD or OLED display panel, for example. The display 9 may be a touch screen, for example. The processor 5 may use this touch screen to provide a user interface, for example. The mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector. The invention may be implemented using a computer program running on one or more processors.
A first embodiment of the method of the invention is shown in Fig. 3. A step 101 comprises determining one or more settings for one or more lights. The one or more settings comprise one or more chromaticity settings and/or one or more brightness settings. A step 103 comprises determining one or more words from the one or more settings. A step 105 comprises determining a name based on a plurality of words. The plurality of words includes at least one of the one or more words. In the embodiment of Fig. 3, each suitable word that can be determined from the one or more settings is determined from the one or more settings. Since the name preferably comprises two or three words, typically a selection of a subset of the determined words needs to be made in step 105. In an alternative embodiment, only a subset of the suitable words that can be determined from the one or more settings is determined in step 103.
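The overall flow of steps 101, 103 and 105 can be sketched as follows. This is a minimal sketch with stand-in implementations; none of the helper names below are taken from the patent itself:

```python
# Sketch of steps 101-105, assuming simple stand-in implementations.
def determine_settings():
    # Step 101: one chromaticity (hue in degrees) and one brightness (0-100%) per light.
    return {"light-27": {"hue": 0, "brightness": 70},
            "light-28": {"hue": 30, "brightness": 30}}

def determine_words(settings):
    # Step 103 (stub): in a full implementation the words would be derived
    # via the mappings discussed further below.
    return ["Dynamic", "Romantic", "Reading"]

def determine_name(words, max_words=3):
    # Step 105: the name preferably comprises two or three words,
    # so a subset of the candidate words is selected.
    return " ".join(words[:max_words])

print(determine_name(determine_words(determine_settings())))
# -> "Dynamic Romantic Reading"
```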
A step 107 comprises outputting the name. A step 109 comprises associating the name with the one or more settings. A step 111 comprises allowing a user to activate the one or more settings on the one or more lights by selecting the name. Preferably, the name comprises two words or three words. Since the name is determined automatically in step 105, outputting the name in step 107 allows the user to know which name corresponds to the light scene and to select the desired name in step 111 when he wants to activate the light scene. Figs. 5 and 6 show an example of a user interface for defining a light scene comprising one or more light settings for one or more lights. In this example, these one or more lights have been identified previously, e.g. with the user interface of Fig. 4. Fig. 4 depicts a "Setup" screen 51 being displayed on display 9 of the mobile device 1. Lights can be added to the user's configuration, in this example for his home, using the button 56. This button has been used to add three lights to the configuration. The light 27 of Fig. 1 has been placed in the group 53 titled "Living Room", has been named "Hue Bloom-1", as shown in sub-area 62, and has been assigned icon 61.
The light 28 of Fig. 1 has also been placed in the group 53 titled "Living Room", has been named "Hue Bloom-2", as shown in sub-area 65, and has been assigned icon 64. The light 29 of Fig. 1 has been placed in the group 54 titled "Kitchen", has been named "Hue Go", as shown in sub-area 68, and has been assigned icon 67. When the user presses button 56, a scan for lights that are not listed in the user interface yet may be performed. This scan may involve contacting the bridge 25. New lights may automatically be added to a group titled "Unassigned" (not shown).
A first part of the example of the user interface for defining a light scene is shown in Fig. 5. Screen 71 allows the user to place icons of lights that he wants to use in the light scene in a color circle 73. In this example, icons 61 and 64 of lights 27 and 28, respectively, have been placed in the color circle 73, while icon 67 of light 29 has not been placed in the color circle 73 and is therefore not controlled as part of the light scene. The position where an icon is placed in the color circle 73 determines which chromaticity setting is selected for the light to which the icon corresponds. In this example, icon 61 has been placed at a position representing the color red and icon 64 has been placed at a position representing the color orange.
A second part of the example of the user interface for defining a light scene is shown in Fig. 6. Screen 81 allows the user to place icons of lights that he wants to use in the light scene on a bar 83. In this example, icons 61 and 64 of lights 27 and 28, respectively, have been placed on bar 83, while icon 67 of light 29 has not been placed on bar 83 and is therefore not controlled as part of the light scene. The position where an icon is placed on the bar 83 determines which brightness setting from 0% to 100% is selected for the light to which the icon corresponds. In this example, icon 61 has been placed at a position representing a brightness of 70% and icon 64 has been placed at a position representing a brightness of 30%. In the examples of Figs. 5 and 6, settings are defined per light. In an alternative embodiment, one or more of the settings may be defined for a group of lights, e.g. for all the lights involved in the scene.
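The translation from icon position to setting could look like the following sketch. The geometry (angle in the circle mapped to hue, horizontal position on the bar mapped to brightness) and the function names are assumptions for illustration, not taken from the actual app:

```python
import math

def position_to_hue(x: float, y: float, cx: float, cy: float) -> float:
    """Map an icon position inside the color circle to a hue angle (0-360 degrees).
    Assumes hue is encoded as the angle around the circle's center (cx, cy)."""
    return math.degrees(math.atan2(y - cy, x - cx)) % 360

def position_to_brightness(x: float, bar_left: float, bar_right: float) -> float:
    """Map an icon position on the brightness bar to a 0-100% brightness setting."""
    fraction = (x - bar_left) / (bar_right - bar_left)
    return round(100 * max(0.0, min(1.0, fraction)))

print(position_to_brightness(70, 0, 100))  # an icon at 70% of the bar -> brightness 70
```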
A second embodiment of the method of the invention is shown in Fig. 7. In Fig. 7, step 101 of Fig. 3 comprises a step 121 and step 103 of Fig. 3 comprises a step 123. Step 121 comprises determining at least one of the one or more settings by analyzing an image. Step 123 comprises determining the one or more words from the one or more settings using one or more mappings from light setting-derived parameter to word. In an extended version of this embodiment, the one or more mappings are selected from a plurality of mappings based on at least one of a user location and a user language.
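Selecting a mapping based on the user language might be as simple as keying a table of mapping variants on a locale code. This is only a sketch; the English entries follow examples used in this description, and the Dutch entries are purely illustrative:

```python
# Hypothetical per-language mapping tables.
MAPPINGS = {
    "en": {"low_brightness": "Dimmed", "high_brightness": "Bright"},
    "nl": {"low_brightness": "Gedimd", "high_brightness": "Helder"},
}

def select_mapping(user_language: str) -> dict:
    # Fall back to English if no mapping exists for the user's language.
    return MAPPINGS.get(user_language, MAPPINGS["en"])

print(select_mapping("nl")["low_brightness"])  # -> "Gedimd"
```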
Further words are determined in steps 124 and 125, which are performed before the name is determined from the plurality of words in step 105. Step 124 comprises determining at least one of the further words by analyzing the image that was analyzed in step 121 (e.g. by extracting key features) or based on the analysis already performed in step 121. Microsoft uses such technology in certain versions of Office applications to automatically suggest an image caption. For example: if the user selects an image with a sunset, the term "Sunset" could be used. Other examples are "Forest", "Beach", "Traffic", and "Trees". Step 125 comprises determining at least one of the further words based on a location of at least one of the one or more lights.
In step 105, the name is determined based on the plurality of words. In the embodiment of Fig. 7, this step comprises a sub step 127 of linguistically analyzing the name and improving the name based on the linguistic analysis. For example, the proposed scene name may be modified to ensure that the recommendation does not contain any spelling or syntactical errors, and/or a natural language engine may be used to enhance the combination of words. This natural language engine may use artificial intelligence or may simply map a phrase (e.g. two words) to a better phrase (e.g. one or two better words), for example. For example, "dynamic happy reading" may not read well and may therefore be adjusted to "dynamic joyful reading", or "dynamic happy" may even be combined into "cheerful", resulting in "Cheerful reading". A grammar check may also be performed. In step 127, the pronounceability of the name may also be checked and improved if necessary and possible.
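The simple phrase-to-phrase variant of step 127 could be sketched as follows. The replacement table only contains the example given above; a real natural language engine would be far richer:

```python
# Sketch of the phrase-mapping variant of step 127: replace word combinations
# that do not read well with a better phrase.
PHRASE_MAP = {"dynamic happy": "cheerful"}  # only the example from this description

def improve_name(name: str) -> str:
    improved = name.lower()
    for phrase, better in PHRASE_MAP.items():
        improved = improved.replace(phrase, better)
    return improved.title()

print(improve_name("Dynamic Happy Reading"))  # -> "Cheerful Reading"
```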
Step 105 further comprises a sub step 129. Step 129 comprises obtaining other names associated with light settings and determining the name from the plurality of words such that the name is distinguishing over the other names. Next, step 107 of outputting the name is performed, followed by an additional step 131 of allowing the user to modify the name before associating the name with the one or more settings. Finally, steps 109 and 111, as described in relation to Fig. 3, are performed. In the embodiment of Fig. 7, several enhancements to the embodiment of Fig. 3 are being used in combination. In an alternative embodiment, only a subset of these enhancements is used.
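One way to realize step 129 is to iterate over word combinations until one is found that is not already in use. This is a sketch under that assumption; the function name is hypothetical:

```python
from itertools import combinations

def distinguishing_name(words: list[str], existing: set[str], size: int = 2) -> str:
    """Step 129 (sketch): pick the first word combination not already in use."""
    for combo in combinations(words, size):
        candidate = " ".join(combo)
        if candidate not in existing:
            return candidate
    raise ValueError("no distinguishing name found")  # caller could then add a word

print(distinguishing_name(["Dynamic", "Romantic", "Reading"],
                          existing={"Dynamic Romantic"}))  # -> "Dynamic Reading"
```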
A third embodiment of the method of the invention is shown in Fig. 8. In this third embodiment, step 101 of Fig. 3 comprises four sub steps 151-154. Of the additional steps shown in Fig. 7, steps 123 and 125 are also present in this third embodiment. Step 123 comprises four sub steps 161-164.
In step 151, a setting of a first type is determined for a first light, e.g. light 27 of Fig. 1. For example, this setting may be the chromaticity setting of the first light, e.g. a color with a red tone. In step 152, a setting of a first type is determined for a second light, e.g. light 28 of Fig. 1. For example, this setting may be the chromaticity setting of the second light, e.g. a color with an orange tone. In step 153, a setting of a second type is determined for the first light. For example, this setting may be the brightness setting of the first light, e.g. 70%. In step 154, a setting of a second type is determined for the second light. For example, this setting may be the brightness setting of the second light, e.g. 30%.
Step 161 comprises determining a word by determining an average of the settings determined in steps 151 and 152 by using a first mapping from light setting-derived parameter to word. In general, the chromaticity setting may be mapped, for example, to a mood: e.g. happy (yellow tones), romantic (red/orange tones), angry (bright red tones), energetic (yellow/green tones) or to a description of the color palette used: e.g. vibrant, pastel, dark, light, vivid, warm, cool. In this embodiment, the first mapping comprises mappings from range of chromaticity values to word. For example, an (average) color with an orange-red tone may fall in a range that has been mapped to the word "Romantic". In an alternative embodiment, HTML color names are used to map an (average) chromaticity value to a word.
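A sketch of such a range-based first mapping follows. The exact hue boundaries are assumptions; the description only gives coarse examples such as yellow tones mapping to "happy" and red/orange tones to "romantic":

```python
# Sketch of the first mapping: ranges of (average) hue to a mood word.
HUE_RANGES = [
    (0, 40, "Romantic"),     # red/orange tones (boundaries are assumptions)
    (40, 70, "Happy"),       # yellow tones
    (70, 160, "Energetic"),  # yellow/green tones
]

def mood_word(avg_hue: float) -> str | None:
    for low, high, word in HUE_RANGES:
        if low <= avg_hue % 360 < high:
            return word
    return None  # hues outside the listed ranges map to no word

print(mood_word((0 + 30) / 2))  # average of a red and an orange light -> "Romantic"
```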
Step 162 comprises determining differences between the settings determined in steps 151 and 152 by using a second mapping from light setting-derived parameter to word. Since there are only two lights and therefore two chromaticity settings, a measure of the difference is relatively easy to determine. For example, a distance between two vectors in color space (e.g. CIE Lab or RGB color space) can be determined and compared to a maximum distance in this color space to determine a measure of the difference. A difference of 0-10% may be mapped to the word "Harmonic", a difference higher than 50% may be mapped to the word "Chaotic" and a difference between 10% and 50% may not be mapped to any word, for example.
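In RGB space, for instance, the measure could be the Euclidean distance between the two color vectors divided by the diagonal of the RGB cube. A minimal sketch, assuming RGB and the 10%/50% thresholds given above:

```python
import math

def chromaticity_difference(rgb1, rgb2) -> float:
    """Distance between two RGB vectors as a fraction of the maximum distance."""
    dist = math.dist(rgb1, rgb2)
    max_dist = math.dist((0, 0, 0), (255, 255, 255))  # diagonal of the RGB cube
    return dist / max_dist

def difference_word(diff: float) -> str | None:
    if diff <= 0.10:
        return "Harmonic"
    if diff > 0.50:
        return "Chaotic"
    return None  # differences between 10% and 50% map to no word

# Two nearly identical reds differ by about 9% -> "Harmonic".
print(difference_word(chromaticity_difference((255, 0, 0), (255, 40, 0))))
```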
In an alternative embodiment, the differences between settings of the same type may be determined by determining a standard deviation of these settings. For example, a standard deviation between brightness settings (min. 0% and max. 100%) in a light scene may be determined to be an X% difference in brightness and based on the value of X, the difference may be determined to be large or small (or optionally, neither small nor large). Before the standard deviation is determined, first the average of the settings of the same type is determined. This average may be the mean, mode or median of the settings, for example.
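The standard-deviation variant can be sketched in a few lines. The 15% classification threshold is an assumption; the description leaves the exact value open:

```python
import statistics

def brightness_spread(brightnesses: list[float]) -> str:
    """Classify the spread of brightness settings (0-100%) via their
    population standard deviation. The 15% threshold is an assumption."""
    spread = statistics.pstdev(brightnesses)  # deviation around the mean
    return "large" if spread > 15 else "small"

print(brightness_spread([70, 30]))  # pstdev([70, 30]) = 20 -> "large"
```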
Step 164 comprises determining differences between the settings determined in steps 153 and 154 by using a fourth mapping from light setting-derived parameter to word. Since there are only two lights and therefore two brightness settings, a measure of the difference is relatively easy to determine. For example, the absolute value of a difference between the two brightness values may be determined, e.g. the absolute value of 70% minus 30%. A difference of 0-10% may be mapped to the word "Tranquil", "Easy" or "Mellow", a difference higher than 30% may be mapped to the word "Chaotic" or "Dynamic" and a difference between 10% and 30% may not be mapped to any word, for example. These mappings may be determined by a vendor or manufacturer of a light system (e.g. including lights and an app), for example, and may even differ per language or culture. In an alternative embodiment, the differences between settings of the same type may be determined by determining a standard deviation of these settings.
The difference measure determined in step 162 may be used to decide in step 161 whether or not a word should be determined. When the chromaticity difference exceeds a certain threshold, e.g. 50%, a decision not to determine a word based on the average chromaticity may be made. The difference measure determined in step 164 may be used to decide in step 163 whether or not a word should be determined. When the brightness difference exceeds a certain threshold, a decision not to determine a word based on the average brightness may be made.
Step 163 comprises determining a word by determining an average of the settings determined in steps 151 and 152 by using a third mapping from light setting-derived parameter to word. In this embodiment, the third mapping maps ranges of brightness values to words. For example, a brightness of 0-30% may be mapped to the word "dimmed", a brightness of 80-100% may be mapped to the word "bright" and a brightness between 30% and 80% may not be mapped to any word. If the above-mentioned threshold is 30%, the brightness setting for the first light is 70%, and the brightness setting for the second light is 30%, then no word is output in step 163.
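Steps 163 and 164, together with the threshold decision described above, can be sketched as one function. The thresholds and words follow the examples given in the text:

```python
def brightness_words(b1: float, b2: float, threshold: float = 30) -> list[str]:
    """Steps 163/164 (sketch): derive words from two brightness settings (0-100%)."""
    words = []
    diff = abs(b1 - b2)        # step 164: difference measure
    if diff <= 10:
        words.append("Tranquil")
    elif diff > 30:
        words.append("Dynamic")
    if diff <= threshold:      # decision: only use the average if the lights are similar
        avg = (b1 + b2) / 2    # step 163: average brightness
        if avg <= 30:
            words.append("Dimmed")
        elif avg >= 80:
            words.append("Bright")
    return words

print(brightness_words(70, 30))  # diff 40 > 30 -> ["Dynamic"], average skipped
```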
In step 125, a further word is determined based on a location of at least one of the first and second lights. The location may be a room or zone and may be mapped to an activity that takes place there. The location of the first light and the second light may have been defined as "Living Room" with the user interface of Fig. 4. For example, the room "Living Room" may be mapped to the word "Reading" (or alternatively "Watching TV" or "Relaxing"). In an alternative embodiment, the height of the lights may be defined or determined and a word may be determined based on this height. This height may be used to distinguish between functional lighting and ambient lighting, for example. If only ceiling lights are used, the settings are probably mostly functional. If a lot of standing lamps and wall washers are used, the settings are most likely ambient. This distinction between functional lighting and ambient lighting may also be made in a different manner.
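The room-to-activity mapping of step 125 reduces to a small lookup table. The "Living Room" entry follows the example above; the kitchen entry is an assumption added for illustration:

```python
# Sketch of step 125: map a room or zone name to a word describing the
# activity that typically takes place there.
ROOM_ACTIVITY = {
    "Living Room": "Reading",
    "Kitchen": "Cooking",  # assumption: the text does not give a kitchen word
}

def location_word(room: str) -> str | None:
    return ROOM_ACTIVITY.get(room)

print(location_word("Living Room"))  # -> "Reading"
```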
In step 105, a subset of the words determined in step 125 and steps 161-164 is selected and a (scene) name is determined by combining the selected words. In the embodiment of Fig. 8, the following rules are used for selecting two or three of these words (see the sketch after this list):
1. If a word has been determined based on the location of the lights (step 125), e.g. "Reading", then this word is included in the name.
2. As remaining words, some of the words that have been determined from the settings (in steps 161-164) are included in the name. This selection may be performed randomly or in a specified order, e.g. first the difference(s) between brightness settings, e.g. "Dynamic", then the average/overall brightness setting (not used if there are relatively large differences between the brightness settings), then the average chromaticity setting, e.g. "Romantic", and then the difference(s) between the chromaticity settings, e.g. "Harmonic".
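A sketch of these two selection rules follows; the argument ordering of the setting-derived words is an assumption based on the specified order above:

```python
def select_name(location_word, setting_words, max_words=3):
    """Sketch of the selection rules above. 'setting_words' is assumed to be
    ordered: brightness difference, average brightness, average chromaticity,
    chromaticity difference; None marks a parameter that yielded no word."""
    selected = []
    for word in setting_words:  # rule 2: fill up with setting-derived words in order
        if word and len(selected) < max_words - (1 if location_word else 0):
            selected.append(word)
    if location_word:           # rule 1: a location word, if any, is always included
        selected.append(location_word)
    return " ".join(selected)

print(select_name("Reading", ["Dynamic", None, "Romantic", "Harmonic"]))
# -> "Dynamic Romantic Reading", the example name mentioned below
```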
In an alternative embodiment, a further word is determined by analyzing an image (e.g. step 124 of Fig. 7) and this word is then preferred over words that have been determined from settings that have not been determined by analyzing an image.
The name determined in step 105 may be "Dynamic Romantic Reading", for example.
Fig. 9 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 3, 7 and 8.
As shown in Fig. 9, the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via a system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.
The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the quantity of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 9 with a dashed line surrounding the input device 312 and the output device 314). An example of such a combined device is a touch sensitive display, also sometimes referred to as a "touch screen display" or simply "touch screen". In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
As pictured in Fig. 9, the memory elements 304 may store an application 318. In various embodiments, the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 300 may further execute an operating system (not shown in Fig. 9) that can facilitate execution of the application 318. The application 318, being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.
Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression "non-transitory computer readable storage media" comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.

CLAIMS:
1. A method of activating a setting on a light via a system comprising at least one processor, the method comprising:
determining (101), by the processor, one or more settings for one or more lights, said one or more settings comprising one or more chromaticity settings and/or one or more brightness settings;
deriving (103), by the processor, one or more words from said one or more settings;
determining (105), by the processor, a name based on a plurality of words, said plurality of words including at least one of said one or more words;
- outputting (107), by the processor, said name;
associating (109), by the processor, said name with said one or more settings; and
allowing (111), by the processor, a user to activate said one or more settings on said one or more lights by selecting said name.
2. A method as claimed in claim 1, wherein said name comprises two words or three words.
3. A method as claimed in claim 1, wherein said one or more lights comprise a plurality of lights and at least one of said one or more words is determined by determining an average of settings of a first type for said plurality of lights and/or by determining one or more differences between settings of said first type or of a second type for said plurality of lights.
4. A method as claimed in claim 3, wherein said one or more differences between said settings of said first type or of said second type are determined by determining a standard deviation of said settings of said first type or of said second type.
5. A method as claimed in claim 1, wherein said one or more lights comprise a plurality of lights and said method comprises determining a measure of one or more differences between settings of a first type for said plurality of lights and deciding not to determine any of said one or more words based on an average of settings of said first type upon determining that said measure exceeds a certain threshold.
6. A method as claimed in claim 1, wherein said plurality of words includes a further word, and said one or more settings and/or said further word are determined by analyzing an image.
7. A method as claimed in claim 1, wherein said plurality of words includes a further word and said further word is determined based on a location of at least one of said one or more lights.
8. A method as claimed in claim 1, further comprising allowing said user to modify said name before associating said name with said one or more settings.
9. A method as claimed in claim 1, further comprising linguistically analyzing said name and improving said name based on said linguistic analysis.
10. A method as claimed in claim 1, further comprising obtaining other names associated with light settings, wherein said name is determined from said plurality of words such that said name is distinguishing over said other names.
11. A method as claimed in claim 1, wherein said one or more words are determined from said one or more settings using one or more mappings from light setting-derived parameter to word.
12. A method as claimed in claim 11, wherein said one or more mappings are selected from a plurality of mappings based on at least one of a user location and a user language.
13. A computer program or suite of computer programs comprising at least one software code portion or a computer program product storing at least one software code portion, the software code portion, when run on a computer system, being configured for enabling the method of any one of claims 1 to 12 to be performed.
14. A system comprising at least one processor configured to:
- determine one or more settings for one or more lights, said one or more settings comprising one or more chromaticity settings and/or one or more brightness settings,
- derive one or more words from said one or more settings,
determine a name based on a plurality of words, said plurality of words including at least one of said one or more words,
- output said name,
associate said name with said one or more settings, and
allow a user to activate said one or more settings on said one or more lights by selecting said name.