WO2020144196A1 - Determining a light effect based on a light effect parameter specified by a user for other content taking place at a similar location

Determining a light effect based on a light effect parameter specified by a user for other content taking place at a similar location

Info

Publication number
WO2020144196A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
segment
light effects
user
effects
Application number
PCT/EP2020/050245
Other languages
French (fr)
Inventor
Dzmitry Viktorovich Aliakseyeu
Tobias BORRA
Dragan Sekulovski
Original Assignee
Signify Holding B.V.
Application filed by Signify Holding B.V.
Publication of WO2020144196A1

Classifications

    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 Controlling the light source
    • H05B47/165 Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
    • H05B47/175 Controlling the light source by remote control
    • H05B47/19 Controlling the light source by remote control via wireless transmission
    • H05B47/196 Controlling the light source by remote control characterised by user interface arrangements
    • H05B47/1965 Controlling the light source by remote control characterised by user interface arrangements using handheld communication devices

Definitions

  • the invention relates to a system for determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content.
  • the invention further relates to a method of determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content.
  • the invention also relates to a computer program product enabling a computer system to perform such a method.
  • Hue entertainment enhances the experience of watching a movie, listening to music and/or playing a game by using light scripts or by creating light effects based on audio and/or video analysis.
  • The latter is realized with the Hue entertainment application HueSync, which automatically creates light effects using color extraction algorithms. The user can only control how dynamic or intense these effects are.
  • WO 2017/182365A1 discloses creating a light script by processing frame image data of video content and determining at least one color palette of the frame image data.
  • a sequence of light effects that is to be rendered during the outputting of the video content is determined based on the at least one color palette and displayed to the user.
  • the user can then modify the determined sequence and generate a light script to render the modified sequence of light effects.
  • color palettes and sequences of light effects are determined per segment of the video content.
  • a drawback of the method of WO 2017/182365A1 is that the light effects that best match the video content are determined per segment of the video content and that the light script creator in this case needs to check, and possibly modify, the automatically determined sequence for each segment.
  • US 2013/147395 A1 discloses using automated video analysis software to detect a certain event within a video program and applying a predefined light effect to the segment of the program where the event occurs.
  • A system for determining one or more light effects to be rendered while media content is being rendered comprises at least one input interface, at least one output interface, and at least one processor configured to use said at least one input interface to obtain media content, use said at least one input interface to allow a user to specify a light effect parameter for a segment of said media content, determine a location and/or a type of location at which said segment occurs, and determine one or more further segments of said media content, said one or more further segments occurring at said location and/or said type of location.
  • Said at least one processor is further configured to determine one or more light effects to be rendered on one or more light sources while said segment is being rendered, said one or more light effects being determined based on an analysis of said segment and said light effect parameter specified by said user for said segment, determine one or more further light effects to be rendered on said one or more light sources while said one or more further segments are being rendered, said one or more further light effects being determined based on an analysis of said one or more further segments and said light effect parameter specified by said user for said segment, and use said at least one output interface to control said one or more light sources to render said one or more light effects and said one or more further light effects and/or output a light script specifying said one or more light effects and said one or more further light effects.
  • In this way, the user at most needs to specify a light effect parameter per set of one or more content segments taking place at a similar location, i.e. occurring at the same location or occurring at locations of the same type.
  • Thus, the user does not need to specify a light effect parameter for each segment.
  • Although the preferred light effect parameter may differ per person, use of the same light effect parameter for content segments taking place at a similar location appears to result in light effects that suit all of these content segments.
  • the location may be a spatial location or a temporal location, for example.
  • Use of the same light effect parameters for segments taking place at the same spatial location and/or type of spatial location is beneficial for many types of content.
  • Use of the same light effect parameters for segments taking place at the same temporal location, e.g. “1980s” or “1984”, is beneficial for certain types of content, e.g. TV series and movies with multiple time-lines (e.g. “Back to the Future”) and especially TV series and movies with complex time-lines.
  • The user may then be able to determine from the light effects in which temporal location the current segment takes place.
  • the temporal location may be determined from subtitles and/or from a used color palette (as certain movies and TV shows use different color palettes for different temporal locations), for example.
  • Said system may be part of a lighting system which comprises one or more light sources or may be used in a lighting system which comprises one or more light sources, for example.
  • the light effects and further light effects may be determined based on an analysis of an audio portion and/or a video portion of the media content.
  • Said at least one processor may be configured to determine said location and/or said type of location based on one or more user-specified locations and/or location types. Alternatively or additionally, said at least one processor may be configured to determine said location and/or said type of location based on features extracted from said media content and/or based on metadata associated with said media content.
  • The location may be “Martin’s house”, “Alexandria city mall” or “New York City”, for example.
  • The location may be determined from the screenplay (e.g. “Martin’s house”) or by using object recognition (e.g. “New York City”), for example.
  • The location type may be “Inside”, “Outside”, “House”, “Mall”, “City”, “Forest” or “Space”, for example.
  • The location type may be determined using object recognition, by using extracted color palettes (e.g. a color distribution in general or a set of all colors used in a scene in particular) or by using audio analysis (e.g. using the sound of a car horn as an indicator of a city scene and the sound of birds as an indicator of a forest scene), for example. A rough illustration of the color-palette approach is sketched below.
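  • The sketch below guesses a location type from the color distribution of a single frame. The thresholds and the mapping to “Forest”/“City” are purely hypothetical assumptions; in practice a trained classifier, object recognition or metadata would typically be used.

```python
import numpy as np

def classify_location_type(frame_rgb: np.ndarray) -> str:
    """Very rough location-type guess from the color distribution of one frame.

    frame_rgb: H x W x 3 array with values in 0..255. The thresholds below are
    illustrative assumptions only, not values used by any actual product.
    """
    pixels = frame_rgb.reshape(-1, 3).astype(float) / 255.0
    r, g, b = pixels[:, 0], pixels[:, 1], pixels[:, 2]
    saturation = pixels.max(axis=1) - pixels.min(axis=1)

    green_fraction = np.mean((g > r) & (g > b) & (saturation > 0.15))
    gray_fraction = np.mean(saturation < 0.1)

    if green_fraction > 0.4:   # lots of saturated green pixels -> vegetation
        return "Forest"
    if gray_fraction > 0.6:    # mostly neutral/gray tones -> built-up area
        return "City"
    return "Unknown"
```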
  • the light script may specify start times, durations and colors for each of the light effects, for example.
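  • Such a light script could, for example, be serialized as a list of timed color effects. The structure below is only an illustrative sketch; the actual script format of a given lighting system may differ.

```python
# Hypothetical light script format: start time and duration in milliseconds,
# color as RGB, a target group of light sources and a transition descriptor.
light_script = {
    "media_id": "movie_A",
    "effects": [
        {"start_ms": 0, "duration_ms": 4000, "rgb": [255, 180, 90],
         "lights": "left_of_tv", "transition": {"type": "smooth", "ms": 800}},
        {"start_ms": 4000, "duration_ms": 2500, "rgb": [40, 90, 200],
         "lights": "all", "transition": {"type": "fast", "ms": 150}},
    ],
}
```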
  • Said at least one processor may be configured to store an association between said light effect parameter specified by said user and said location and/or said type of location in a memory.
  • the light effect parameters may be applied across different media items, e.g. video files. Unless the user’s preferences change, this may reduce the amount of input that the user needs to provide the next time the user creates a light script. For example, if a script is created for (an episode of) a TV series, where the same locations could appear in different episodes, the script creator may specify the light effect parameters for the first episode and then these parameters may be used for all follow up episodes.
  • The same principle may also be applied to episodes of TV shows other than TV series, e.g. “The Voice”, and to different TV shows of the same genre, e.g. National Geographic shows.
  • Said light effect parameter specified by said user for said segment may influence how one or more colors are extracted from said segment and how one or more further (different) colors are extracted from said one or more further segments, said one or more light effects being determined based on said one or more colors and said one or more further light effects being based on said one or more further (different) colors.
  • When the segment and the further segment are different (i.e. comprise different media content, such as different video content and/or different audio content), the colors extracted may thus also be different.
  • However, the algorithm used, or aspects of the algorithm used (e.g. variables for controlling the algorithm), will then be the same for the segment and the further segment, as illustrated by the sketch below.
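  • A minimal sketch of this behavior, under assumed function and parameter names, is given below: the user-specified parameter selects the color-extraction routine once, and that same routine is then applied to the segment and to every further segment occurring at the same (type of) location, so only the extracted colors differ.

```python
import numpy as np

def average_color(frames):
    """Plain average color over all frames of a segment (each H x W x 3, 0..255)."""
    return np.mean([f.reshape(-1, 3).mean(axis=0) for f in frames], axis=0)

def dominant_color(frames):
    """Crude 'dominant color': the most frequent coarsely quantized color."""
    pixels = np.concatenate([f.reshape(-1, 3) for f in frames]) // 32 * 32
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    return colors[np.argmax(counts)].astype(float)

# Hypothetical values of the user-specified color-extraction parameter.
COLOR_EXTRACTORS = {"average": average_color, "dominant": dominant_color}

def colors_for_segments(segments, extraction_parameter):
    """Apply the SAME user-chosen extractor to the segment and all further
    segments at the same (type of) location; the resulting colors may still
    differ because the underlying frames differ."""
    extract = COLOR_EXTRACTORS[extraction_parameter]
    return [extract(frames) for frames in segments]
```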
  • Said one or more light effects may comprise a plurality of light effects, said one or more further light effects may comprise a plurality of further light effects and said light effect parameter specified by said user for said segment may influence a type and/or a speed of one or more transitions between said plurality of light effects and one or more further transitions between said plurality of further light effects.
  • a script creator may define faster transitions for light effects when a scene is taking place in a city and slower transitions for light effects when a scene is taking place in space.
  • said light effect parameter specified by said user for said segment may influence whether said speed of said one or more transitions depends on a presence of fast movement in said segment and said speed of said one or more further transitions depends on a presence of fast movement in said one or more further segments.
  • having the speed of the transitions depend on the presence of fast movement works well, but individual users may have a different preference for certain locations or location types.
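  • The sketch below illustrates one way such a transition parameter could work: per location type, the user chooses a base transition duration and whether the transition speed should additionally follow detected motion. All names and numbers are illustrative assumptions.

```python
def transition_duration_ms(location_type, motion_level, prefs):
    """Pick a transition duration for a light effect change.

    prefs maps a location type to a user-specified base duration and a flag
    saying whether scene motion should shorten the transition, e.g.
    {"City": {"base_ms": 200, "follow_motion": True},
     "Space": {"base_ms": 1500, "follow_motion": False}}.
    motion_level is assumed to be 0.0 (static) .. 1.0 (very fast movement).
    """
    p = prefs.get(location_type, {"base_ms": 700, "follow_motion": True})
    duration = p["base_ms"]
    if p["follow_motion"]:
        # Fast movement -> up to ~4x faster transitions (illustrative formula).
        duration = int(duration / (1.0 + 3.0 * motion_level))
    return max(duration, 50)
```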
  • Said light effect parameter specified by said user for said segment may influence whether said one or more light effects are determined from an audio portion of said segment and whether said one or more further light effects are determined from one or more further audio portions of said one or more further segments.
  • said light effect parameter specified by said user for said segment may influence whether a brightness of said one or more light effects depends on a loudness level of said audio portion and a brightness of said one or more further light effects depends on a loudness level of said one or more further audio portions.
  • Varying the brightness depending on the loudness level of the audio portion provides suitable light effects, although this suitability typically also depends on user preference.
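  • A corresponding sketch for the audio-driven brightness option is shown below; whether loudness actually drives the brightness is switched per location (type) by the user-specified parameter. The dB range and mapping are assumptions.

```python
import numpy as np

def brightness_from_audio(audio_samples, use_audio, default_brightness=0.6):
    """Map the loudness of a segment's audio portion to a 0..1 brightness.

    audio_samples: mono PCM samples in -1.0..1.0. If the user disabled audio
    for this location (type), a fixed default brightness is returned instead.
    """
    if not use_audio or audio_samples.size == 0:
        return default_brightness
    rms = float(np.sqrt(np.mean(np.square(audio_samples)))) + 1e-9
    loudness_db = 20.0 * np.log10(rms)
    # Map an assumed -40 dB .. 0 dB range onto 0.1 .. 1.0 brightness.
    return float(np.clip((loudness_db + 40.0) / 40.0 * 0.9 + 0.1, 0.1, 1.0))
```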
  • a method of determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content comprises obtaining media content, allowing a user to specify a light effect parameter for a segment of said media content, determining a location and/or a type of location at which said segment occurs, determining one or more further segments of said media content, said one or more further segments occurring at said location and/or said type of location, and determining one or more light effects to be rendered on one or more light sources while said segment is being rendered, said one or more light effects being determined based on an analysis of said segment and said light effect parameter specified by said user for said segment.
  • Said method further comprises determining one or more further light effects to be rendered on said one or more light sources while said one or more further segments are being rendered, said one or more further light effects being determined based on an analysis of said one or more further segments and said light effect parameter specified by said user for said segment, and controlling said one or more light sources to render said one or more light effects and said one or more further light effects and/or outputting a light script specifying said one or more light effects and said one or more further light effects.
  • Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
  • a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage-medium storing the computer program are provided.
  • a computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
  • a non-transitory computer-readable storage medium stores a software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content.
  • the executable operations comprise obtaining media content, allowing a user to specify a light effect parameter for a segment of said media content, determining a location and/or a type of location at which said segment occurs, determining one or more further segments of said media content, said one or more further segments occurring at said location and/or said type of location, and determining one or more light effects to be rendered on one or more light sources while said segment is being rendered, said one or more light effects being determined based on an analysis of said segment and said light effect parameter specified by said user for said segment.
  • the executable operations further comprise determining one or more further light effects to be rendered on said one or more light sources while said one or more further segments are being rendered, said one or more further light effects being determined based on an analysis of said one or more further segments and said light effect parameter specified by said user for said segment, and controlling said one or more light sources to render said one or more light effects and said one or more further light effects and/or outputting a light script specifying said one or more light effects and said one or more further light effects.
  • aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", “module” or “system.”
  • Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer.
  • aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • Examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Fig. 1 is a block diagram of an embodiment of the system
  • Fig. 2 is a flow diagram of a first embodiment of the method
  • Fig. 3 is a flow diagram of a second embodiment of the method
  • Fig. 4 shows an example of a user interface for creating a light script that depicts a light script being created for a first content item at a first moment
  • Fig. 5 shows an example of the user interface of Fig. 4 that depicts a light script being created for the first content item at a second moment
  • Fig. 6 shows an example of the user interface of Fig. 4 that depicts a light script being created for the first content item at a third moment
  • Fig. 7 shows an example of the user interface of Fig. 4 that depicts a light script being created for the first content item at an alternative third moment
  • Fig. 8 shows an example of the user interface of Fig. 4 that depicts a light script being created for the first content item at a fourth moment
  • Fig. 9 shows an example of the user interface of Fig. 4 that depicts a light script being created for a second content item
  • Fig. 10 is a block diagram of an exemplary data processing system for performing the method of the invention.
  • Fig. 1 shows an embodiment of the afore-mentioned system: mobile device 1.
  • Mobile device 1 is connected to a wireless LAN access point 23.
  • a bridge 11 is also connected to the wireless LAN access point 23, e.g. via Ethernet.
  • Light sources 13-17 communicate wirelessly with the bridge 11, e.g. using the Zigbee protocol, and can be controlled via the bridge 11, e.g. by the mobile device 1.
  • the bridge 11 may be a Philips Hue bridge and the light sources 13-17 may be Philips Hue lights, for example. In an alternative embodiment, light devices are controlled without a bridge.
  • a TV 27 is also connected to the wireless LAN access point 23.
  • Media content may be rendered by the mobile device 1 or by the TV 27, for example.
  • the wireless LAN access point 23 is connected to the Internet 24.
  • An Internet server 25 is also connected to the Internet 24.
  • the mobile device 1 may be a mobile phone or a tablet, for example.
  • the mobile device 1 may run the Philips Hue Sync app, for example.
  • the mobile device 1 comprises a processor 5, a receiver 3, a transmitter 4, a memory 7, and a display 9.
  • the display 9 comprises a touchscreen.
  • the mobile device 1, the bridge 11 and the light sources 13-17 are part of lighting system 21.
  • the processor 5 is configured to use the receiver 3 to obtain media content, use the touchscreen display 9 to allow a user to specify a light effect parameter for a segment of the media content, determine a spatial location and/or a type of spatial location at which the segment occurs, and determine one or more further segments of the media content which occur at the same spatial location and/or type of spatial location.
  • the processor 5 is further configured to determine one or more light effects to be rendered on one or more light sources while the segment is being rendered and determine one or more further light effects to be rendered on the one or more light sources while the one or more further segments are being rendered.
  • the one or more light effects are determined based on an analysis of the segment and the light effect parameter specified by the user for the segment.
  • the one or more further light effects are determined based on an analysis of the one or more further segments and the light effect parameter specified by the user for the segment. Multiple light effect parameters may be specified by the user and used to determine the light effects and further light effects.
  • the processor 5 is further configured to use the transmitter 4 to control (via the bridge 11) one or more of the light sources 13-17 to render the one or more light effects and the one or more further light effects and/or output a light script specifying the one or more light effects and the one or more further light effects.
  • the light script may be output to the memory 7 via an internal interface.
  • The invention can be used to improve the real-time creation and rendering of light effects, but it is especially beneficial for light script creation.
  • Light script creation can be made faster and easier by allowing a script creator to define a set of variables that define how the light effect is auto-generated (e.g. the algorithm for color extraction and transition) and then automatically applying these to all parts of the movie that occur at the same location and/or location type (e.g. outdoor vs. indoor, city vs. forest). Two parts that occur at the same location also have the same location type, but two parts that occur at the same type of location do not necessarily occur at the same location.
  • the location and/or location type may come in the form of metadata (e.g. segmented content with tags for location), may be derived from the content or may be based on user input. After it is determined which properties are available, an auto-indexing of the content may take place, which segments the media content based on (combinations of) these properties.
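  • The auto-indexing could, for instance, merge consecutive shots that share a location tag into segments and build an index from each location (type) to its segments, as sketched below. The data shapes are assumptions; the tags may come from metadata, content analysis or user input.

```python
from collections import defaultdict

def index_segments(shots):
    """Merge consecutive shots sharing a location tag into segments and index
    the segments per location tag.

    shots: time-ordered list of dicts such as
        {"start_ms": 0, "end_ms": 4200, "location": "City"}
    (an assumed shape). Returns (segments, index) where index maps each
    location tag to the segments occurring there.
    """
    segments = []
    for shot in shots:
        if segments and segments[-1]["location"] == shot["location"]:
            segments[-1]["end_ms"] = shot["end_ms"]  # extend the running segment
        else:
            segments.append(dict(shot))              # start a new segment

    index = defaultdict(list)
    for segment in segments:
        index[segment["location"]].append(segment)
    return segments, dict(index)
```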
  • a script creator can define faster transitions for light effects when events shown happen in the city and slower when events are happening in space.
  • the colors will be extracted depending on the scene colors while the transition will be defined by the rule set by the script creator for this type of environment (e.g. space and city) and would not depend on the presence or absence of fast movements/events in the scene.
  • a script creator might define different color extraction algorithms depending on the environment e.g. outdoor scenes might use algorithms that are able to assess the brightness and color temperature of the light source, while for indoor scenes, a more straightforward color extraction (e.g. determination of an average color from colors in a certain analysis area) could be used.
  • these rules can be automatically applied and potentially used in real-time cases (e.g. HueSync) as well.
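  • For the straightforward indoor case mentioned above (an average color over a certain analysis area), a possible implementation is sketched below; the default analysis area is an illustrative assumption.

```python
import numpy as np

def average_color_in_area(frame_rgb, area=(0.0, 0.6, 1.0, 1.0)):
    """Average color over an analysis area of one frame.

    frame_rgb: H x W x 3 array (0..255). area: (left, top, right, bottom) as
    fractions of the frame; the default (bottom 40% of the image) is just an
    illustrative choice for a light source placed below or behind the screen.
    """
    h, w = frame_rgb.shape[:2]
    left, top, right, bottom = area
    region = frame_rgb[int(top * h):int(bottom * h), int(left * w):int(right * w)]
    return region.reshape(-1, 3).mean(axis=0)
```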
  • a (personal) database may be created, assisting the script creator for future content. Based on scene properties set in the past, the system may make an educated guess for new content, speeding up the process even more.
  • An example hereof would be a forest scene that occurs in movie A.
  • the system learns specific scene properties and concomitant lighting properties (e.g. algorithmic and transitional), and then automatically applies these to future forest scenes in movie B.
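  • Such a (personal) database can be as simple as a persisted mapping from location (type) to the previously chosen light effect parameters, used as an educated first guess for new content. The sketch below uses assumed key and file names.

```python
import json
from pathlib import Path

DB_PATH = Path("light_effect_preferences.json")  # assumed file name

def load_preferences():
    return json.loads(DB_PATH.read_text()) if DB_PATH.exists() else {}

def store_preference(location_type, parameters):
    """Remember the parameters the creator chose for a location type, e.g. 'Forest'."""
    preferences = load_preferences()
    preferences[location_type] = parameters
    DB_PATH.write_text(json.dumps(preferences, indent=2))

def suggest_parameters(location_type, default):
    """Educated guess for new content: reuse what the creator chose before."""
    return load_preferences().get(location_type, default)
```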
  • In the embodiment of Fig. 1, the mobile device 1 comprises one processor 5.
  • In an alternative embodiment, the mobile device 1 comprises multiple processors.
  • the processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from Qualcomm or ARM-based, or an application-specific processor.
  • the processor 5 of the mobile device 1 may run an Android or iOS operating system for example.
  • the memory 7 may comprise one or more memory units.
  • the memory 7 may comprise solid-state memory, for example.
  • the memory 7 may be used to store an operating system, applications and application data, for example.
  • the receiver 3 and the transmitter 4 may use one or more wireless communication technologies such as Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 23, for example.
  • In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter.
  • In the embodiment shown in Fig. 1, a separate receiver and a separate transmitter are used.
  • In an alternative embodiment, the receiver 3 and the transmitter 4 are combined into a transceiver.
  • the display 9 may comprise an LCD or OLED panel, for example.
  • the mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector.
  • the invention may be implemented using a computer program running on one or more processors.
  • In the embodiment of Fig. 1, the system of the invention is a mobile device.
  • In an alternative embodiment, the system of the invention is a different device, e.g. a PC or a video module, or comprises multiple devices.
  • the video module may be a dedicated HDMI module that can be put between the TV and the device providing the HDMI input so that it can analyze the HDMI input, for example.
  • the system of the invention is used in a lighting system to illustrate that the system can be used both for creating light scripts and for real-time rendering of light effects.
  • the system is not necessarily part of a lighting system.
  • the system may be a PC that is only used for creating light scripts.
  • the light effects are typically not created for specific light sources.
  • a light effect may be created for one or more light sources in a certain part of a room (e.g. left of the TV) or for any light source.
  • the light sources in the lighting system may be used for real-time rendering of light effects during normal use of the lighting system or may be used for testing a light script.
  • a light script may also be tested if the system of the invention is not used in a lighting system.
  • the one or more light sources may be virtual/simulated.
  • the bridge and communication between devices may be simulated as well.
  • the rendering of the media content does not require a TV.
  • the media content may be rendered on the PC that is used for creating the light script, e.g. for testing purposes.
  • The PC may, for example, run software like Adobe Premiere and the user might get an extra window displaying a virtual environment with lights, or an even simpler representation, to show how effects would look if parameters are adjusted in a certain way.
  • a first embodiment of the method is shown in Fig. 2.
  • the method is used for determining one or more light effects to be rendered while media content is being rendered.
  • the one or more light effects are determined based on an analysis of the media content.
  • a step 101 comprises obtaining media content.
  • the method comprises a step 102 of analyzing the media content.
  • Step 102 may comprise extracting color information, motion information and/or loudness information from the media content, for example. This information may be used in steps 109 and 111. In an alternative embodiment, this information is received from another device.
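  • Step 102 could, for example, compute simple per-segment features along the lines of the sketch below (an average color, a frame-difference motion estimate and an RMS loudness estimate). The exact features and data shapes are assumptions.

```python
import numpy as np

def analyze_segment(frames, audio):
    """Extract basic color, motion and loudness information for one segment.

    frames: non-empty list of H x W x 3 arrays (0..255); audio: mono samples
    in -1..1. Returned values could feed steps 109 and 111.
    """
    stacked = np.stack([f.astype(float) for f in frames])
    mean_color = stacked.reshape(len(frames), -1, 3).mean(axis=(0, 1))

    # Motion: mean absolute difference between consecutive frames, scaled to 0..1.
    diffs = np.abs(np.diff(stacked, axis=0)) / 255.0
    motion_level = float(diffs.mean()) if len(frames) > 1 else 0.0

    loudness_rms = float(np.sqrt(np.mean(np.square(audio)))) if audio.size else 0.0
    return {"mean_color": mean_color, "motion": motion_level, "loudness": loudness_rms}
```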
  • a step 103 comprises allowing a user to specify a light effect parameter for a segment of the media content.
  • a step 109 comprises determining one or more light effects to be rendered on one or more light sources while the segment is being rendered.
  • the one or more light effects are determined based on an analysis of the segment and the light effect parameter specified by the user for the segment.
  • a step 105 comprises determining a spatial location and/or a type of spatial location at which the segment occurs.
  • step 105 comprises sub steps 131, 133 and 135.
  • Step 131 comprises determining the spatial location and/or the type of spatial location based on one or more user-specified spatial locations and/or spatial location types.
  • Step 133 comprises associating the one or more user-specified spatial locations and/or spatial location types with the media content. This avoids the user having to specify the same information the next time he creates a light script for the same media content, and it may make it possible for the user to share this information with other users.
  • Step 135 comprises determining the spatial location and/or the type of spatial location based on features extracted from the media content and/or based on metadata associated with the media content.
  • the metadata may comprise a spatial location and/or type of spatial location that was/were associated with the media content when step 133 was performed in a previous performance of the method.
  • a step 107 comprises determining one or more further segments of the media content which occur at the determined spatial location and/or type of spatial location.
  • a step 111 comprises determining one or more further light effects to be rendered on the one or more light sources while the one or more further segments are being rendered. The one or more further light effects are determined based on an analysis of the one or more further segments and the light effect parameter specified by the user for the segment.
  • the method may be performed by a script creation tool or may be used to create and render light effects in real-time.
  • the script creation tool would perform a step 115.
  • Step 115 comprises outputting a light script specifying the one or more light effects and the one or more further light effects.
  • A step 113 is performed for real-time light effect generation, e.g. by the HueSync app.
  • Step 113 comprises controlling the one or more light sources to render the one or more light effects and the one or more further light effects.
  • the user should preferably not be required to give more than a minimal amount of input. For example, the user may only be asked and/or allowed to indicate his light effect preference(s), e.g. deviations from the default settings, for indoor scenes and for outdoor scenes. Asking a user to specify light effect preferences for more location types requires him to give more input. Segmentation may be performed if metadata is available (e.g. segments or chapters in the movie). Automatic segmentation is more challenging if it needs to be performed in real-time. The method need not include both step 113 and step 115.
  • step 152 of Fig. 3 comprises analyzing the media content.
  • Step 152 comprises steps 105 and 107 of Fig.2 as sub steps.
  • Step 152 comprises partitioning the content (video) into segments with the same location and/or location type. This can be done manually, automatically based on metadata or automatically based on video analysis, for example.
  • Steps 153 and 157 are performed after step 152.
  • In step 153, segments are selected whose location and/or location type have already been associated with a light effect parameter, e.g. in step 163 during a previous performance of the method.
  • The light effects are then determined for these segments based on an analysis of these segments, e.g. the analysis performed in step 152, and based on the light effect parameter(s) associated with the location(s) and/or location type(s) determined in step 152.
  • In step 157, segments are selected whose location and/or location type have not already been associated with a light effect parameter.
  • Next, step 103 is performed for a first group of segments with the same location and/or location type.
  • Step 103 comprises allowing the user to specify one or more light effect parameters for this selected group of segments, e.g. a color extraction algorithm and a type of light transitions.
  • Steps 161 and 163 are performed after step 103.
  • Step 161 comprises steps 109 and 111 of Fig. 2 as sub steps.
  • Step 161 comprises determining light effects to be rendered on one or more light sources while the segments of the selected group are being rendered. Each of these light effects is determined based on the analysis of one or more frames of these segments and based on the one or more light effect parameters specified in step 103.
  • Step 163 comprises storing an association between the one or more light effect parameters specified by the user in step 103 and the spatial location and/or the type of spatial location of the selected group.
  • Step 165 comprises checking whether there is a further group of segments with the same location and/or location type that has not already been associated with a light effect parameter. If so, then step 103 is repeated for this further group of segments. Steps 103, 161 and 163 may be performed for each group of segments with the same location and/or location type that has not already been associated with a light effect parameter. If light effects have been determined for all segments, step 115 is performed next. Step 115 comprises outputting a light script specifying the one or more light effects and the one or more further light effects.
  • steps 103, 161 and 163 are only performed for some of the groups of segments with the same location and/or location type that have not already been associated with a light effect parameter.
  • Default light effect parameters may be used for the other groups of segments with the same location and/or location type that have not already been associated with a light effect parameter.
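  • Put together, the flow of Fig. 3 can be summarized by the sketch below: stored or user-specified parameters are resolved per group of segments (falling back to defaults), light effects are determined per segment, and each new association is stored for later reuse. All helper names are illustrative assumptions, not an actual tool API.

```python
def create_light_script(segments_by_location, stored_prefs, ask_user, defaults,
                        determine_effects):
    """Sketch of the Fig. 3 flow with injected helpers.

    segments_by_location: {location_or_type: [segment, ...]} from step 152.
    stored_prefs:         {location_or_type: parameters} from earlier runs.
    ask_user(location):   parameters the user specifies (step 103), or None.
    determine_effects(segment, parameters): analysis-based effects (steps 109/111).
    """
    script = []
    for location, segments in segments_by_location.items():
        parameters = stored_prefs.get(location)          # reuse if already known
        if parameters is None:
            parameters = ask_user(location) or defaults  # step 103, or defaults
            stored_prefs[location] = parameters          # step 163: store association
        for segment in segments:                         # steps 161/109/111
            script.extend(determine_effects(segment, parameters))
    return script                                        # step 115: the light script
```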
  • Fig. 4 shows an example of a user interface for creating a light script that depicts a light script being created for a first content item at a first moment.
  • a timeline interface 40 represents eight media segments 41-48.
  • The media segments 41-48 are identified with references “S1” to “S8” in row 51.
  • Video thumbnails 61-68 are displayed in row 54, one per media segment.
  • These video thumbnails may help the user select light effect parameters. At this moment, no light effect parameters have been specified yet and row 52 therefore does not represent any light effect parameter.
  • Fig. 4 shows the time line interface 40 at a moment after a first media content item has been partitioned into multiple segments and a location type has been determined per segment.
  • A location type “City” has been determined for segments 41 and 43 and a location type “Inside” has been determined for segments 42, 44 and 48. These location types are indicated in row 53. It was not possible to determine a location type for segments 45-47, but it was possible to determine that segments 45-47 occur at the same location or location type, so the location type of segments 45-47 is represented as “U1” (unknown location type 1) in row 53. Since the user only needs to specify one or more light effect parameter(s) for one segment per group of segments, a question mark is indicated in row 52 for each first segment of each group. Each group comprises segments occurring at the same location and/or location type. In an alternative embodiment, default light effect parameters may be chosen and represented in row 52.
  • Fig. 5 shows the time line interface 40 a moment later than depicted in Fig. 4, after the user has specified light effect parameters for segment 41.
  • The light effect parameters may, for example, include:
  • which color extraction algorithm is used;
  • the transition type and speed between different light effects, e.g. how smooth transitions should be;
  • whether audio or any other content features are considered and applied toward light effect generation (e.g. whether the audio loudness level impacts the light effect brightness).
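  • One way to represent such parameters in code is sketched below; the field names and the example values (roughly matching the “City” choices described next) are illustrative assumptions rather than the tool’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class LightEffectParameters:
    color_extraction: str     # e.g. "scene_illumination", "trimean", "dominant_object"
    transition: str           # e.g. "smooth" or "dynamic"
    transition_speed_ms: int  # base transition duration
    use_audio: bool           # whether audio loudness drives brightness

# Roughly the "City" choices described below (CE1, smooth transitions, no audio).
city_parameters = LightEffectParameters(
    color_extraction="scene_illumination",
    transition="smooth",
    transition_speed_ms=800,
    use_audio=False,
)
```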
  • A script creator has specified (1) a more sophisticated color extraction algorithm (indicated as “CE1” in row 52). This color extraction algorithm identifies properties of the scene illumination (instead of simply grabbing a color from the screen and using it), which it uses for light effects on light sources that are not in the direct view of the user.
  • The script creator has further specified (2) the use of smooth transitions even for a fast-paced scene (indicated as “SMO” in row 52) and (3) that audio input should not be used for light effect generation (indicated as “A:N” in row 52).
  • the script creator has further specified the missing location type of segments 45-47 in row 53 (“Forest”).
  • Fig. 6 shows the time line interface 40 a moment later than depicted in Fig. 5.
  • the light script creation tool automatically copied these light effect parameters to the segment with the same location type: segment 43.
  • Two question marks are still indicated in row 52 for the two remaining groups for which no light effect parameters have been specified.
  • Alternatively, the user might immediately specify the light effect parameters for all three segments with a question mark, i.e. segments 41, 42 and 45. This is depicted in Fig. 7.
  • The user has specified for the “Forest” segment 45 that (1) trimean color extraction (indicated as “CE2” in row 52) is used in order to have more saturated colors, (2) smooth transitions (indicated as “SMO” in row 52) are used (similar to the city), and (3) audio input is used to create the light effects (indicated as “A:Y” in row 52).
  • The user has further specified for the “Inside” segment 42 that (1) colors are extracted from the dominant object(s) in the scene (indicated as “CE3” in row 52), (2) dynamic/fast transitions (indicated as “DYN” in row 52) are used, and (3) that audio input should not be used for light effect generation (indicated as “A:N” in row 52).
  • Fig. 8 shows the time line interface 40 a moment later than depicted in Fig. 7.
  • the light script creation tool automatically copied these light effect parameters to the segment(s) with the same location type.
  • the light script creation tool determines the light effects and stores them in a light script.
  • the light script creation tool also stores associations between the location types and the corresponding specified light effect parameters for future use.
  • Fig. 9 shows the time line interface 80 at a moment after a second media content item has been partitioned into multiple segments and a location type has been determined per segment.
  • the timeline interface 80 represents five media segments 81-85.
  • The media segments 81-85 are identified with references “S1” to “S5” in row 51.
  • Video thumbnails 91-95 are displayed in row 54.
  • A location type “City” has been determined for segments 81 and 82, a location type “Inside” has been determined for segment 83, and a location type “Space” has been determined for segments 84 and 85.
  • Fig. 10 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 2 and 3.
  • the data processing system 500 may include at least one processor 502 coupled to memory elements 504 through a system bus 506. As such, the data processing system may store program code within memory elements 504. Further, the processor 502 may execute the program code accessed from the memory elements 504 via a system bus 506. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 500 may be implemented in the form of any system including a processor and a memory that can perform the functions described within this specification.
  • the memory elements 504 may include one or more physical memory devices such as, for example, local memory 508 and one or more bulk storage devices 510.
  • the local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code.
  • a bulk storage device may be implemented as a hard drive or other persistent data storage device.
  • The processing system 500 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 510 during execution.
  • the processing system 500 may also be able to use memory elements of another processing system, e.g. if the processing system 500 is part of a cloud-computing platform.
  • I/O devices depicted as an input device 512 and an output device 514 optionally can be coupled to the data processing system.
  • input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like.
  • output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
  • the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 10 with a dashed line surrounding the input device 512 and the output device 514).
  • A combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”.
  • input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
  • a network adapter 516 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks.
  • the network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 500, and a data transmitter for transmitting data from the data processing system 500 to said systems, devices and/or networks.
  • Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 500.
  • the memory elements 504 may store an application 518.
  • the application 518 may be stored in the local memory 508, the one or more bulk storage devices 510, or separate from the local memory and the bulk storage devices.
  • the data processing system 500 may further execute an operating system (not shown in Fig.10) that can facilitate execution of the application 518.
  • the application 518 being implemented in the form of executable program code, can be executed by the data processing system 500, e.g., by the processor 502.
  • the data processing system 500 may be configured to perform one or more operations or method steps described herein.
  • Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein).
  • The program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal.
  • the program(s) can be contained on a variety of transitory computer-readable storage media.
  • Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
  • the computer program may be run on the processor 502 described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system is configured to allow a user to specify a light effect parameter (52) for a segment (41) of media content, determine a location and/or a type of location (53) at which the segment occurs, and determine one or more further segments (43) that occur at this location and/or this type of location. The system is further configured to determine one or more light effects and one or more further light effects. The one or more light effects are determined based on an analysis of the segment and the light effect parameter and the one or more further light effects are determined based on an analysis of the one or more further segments and this same light effect parameter. The system is also configured to control one or more light sources to render these light effects and/or output a light script specifying these light effects.

Description

Determining a light effect based on a light effect parameter specified by a user for other content taking place at a similar location
FIELD OF THE INVENTION
The invention relates to a system for determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content.
The invention further relates to a method of determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content.
The invention also relates to a computer program product enabling a computer system to perform such a method.
BACKGROUND OF THE INVENTION
The versatility of connected light systems such as Philips Hue keeps growing, offering more and more features to the users. These new features include context awareness, smart automated behavior, new forms of light usage such as entertainment, and so on. For example, Hue entertainment enhances the experience of watching a movie, listening to music and/or playing a game by using light scripts or by creating light effects based on audio and/or video analysis. The latter is realized with the Hue entertainment application HueSync, which automatically creates light effects using color extraction algorithms. The user can only control how dynamic or intense these effects are.
The use of hand-crafted light scripts to accompany on-screen content can significantly improve the viewers' experience compared to fully automatic light effects, but such scripts take a long time to create. One possibility to speed up the creation is to combine automatic and manual creation.
WO 2017/182365A1 discloses creating a light script by processing frame image data of video content and determining at least one color palette of the frame image data. A sequence of light effects that is to be rendered during the outputting of the video content is determined based on the at least one color palette and displayed to the user. The user can then modify the determined sequence and generate a light script to render the modified sequence of light effects. In an embodiment, color palettes and sequences of light effects are determined per segment of the video content.
A drawback of the method of WO 2017/182365A1 is that the light effects that best match the video content are determined per segment of the video content and that the light script creator in this case needs to check, and possibly modify, the automatically determined sequence for each segment.
US 2013/147395 A1 discloses using automated video analysis software to detect a certain event within a video program and applying a predefined light effect to the segment of the program where the event occurs.
SUMMARY OF THE INVENTION
It is a first object of the invention to provide a system, which is able to determine one or more light effects that suit the content which they are intended to accompany while requiring limited effort from a person.
It is a second object of the invention to provide a method, which is able to determine one or more light effects that suit the content which they are intended to accompany while requiring limited effort from a person.
In a first aspect of the invention, a system for determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content, comprises at least one input interface, at least one output interface, and at least one processor configured to use said at least one input interface to obtain media content, use said at least one input interface to allow a user to specify a light effect parameter for a segment of said media content, determine a location and/or a type of location at which said segment occurs, and determine one or more further segments of said media content, said one or more further segments occurring at said location and/or said type of location.
Said at least one processor is further configured to determine one or more light effects to be rendered on one or more light sources while said segment is being rendered, said one or more light effects being determined based on an analysis of said segment and said light effect parameter specified by said user for said segment, determine one or more further light effects to be rendered on said one or more light sources while said one or more further segments are being rendered, said one or more further light effects being determined based on an analysis of said one or more further segments and said light effect parameter specified by said user for said segment, and use said at least one output interface to control said one or more light sources to render said one or more light effects and said one or more further light effects and/or output a light script specifying said one or more light effects and said one or more further light effects.
In this way, the user at most needs to specify a light effect parameter per set of one or more content segments taking place at a similar location, i.e. occurring at the same location or occurring at locations of the same type. Thus, the user does not need to specify a light effect parameter for each segment. Although the preferred light effect parameter may differ per person, use of the same light effect parameter for content segments taking place at a similar location appears to result in light effects that suit all these content segments.
The location may be a spatial location or a temporal location, for example. Use of the same light effect parameters for segments taking place at the same spatial location and/or type of spatial location is beneficial for many types of content. Alternatively or additionally, use of the same light effect parameters for segments taking place at the same temporal location, e.g. “1980s” or “1984”, is beneficial for certain types of content, e.g. TV series and movies with multiple time-lines (e.g. “Back to the Future”) and especially TV series and movies with complex time-lines. The user may then be able to determine from the light effects in which temporal location the current segment takes place. The temporal location may be determined from subtitles and/or from a used color palette (as certain movies and TV shows use different color palettes for different temporal locations), for example.
Said system may be part of a lighting system which comprises one or more light sources or may be used in a lighting system which comprises one or more light sources, for example. The light effects and further light effects may be determined based on an analysis of an audio portion and/or a video portion of the media content. Said at least one processor may be configured to determine said location and/or said type of location based on one or more user-specified locations and/or location types. Alternatively or additionally, said at least one processor may be configured to determine said location and/or said type of location based on features extracted from said media content and/or based on metadata associated with said media content.
The location may be "Martin's house", "Alexandria city mall" or "New York City", for example. The location may be determined from the screenplay (e.g. "Martin's house") or by using object recognition (e.g. "New York City"), for example. The location type may be "Inside", "Outside", "House", "Mall", "City", "Forest" or "Space", for example. The location type may be determined using object recognition, by using extracted color palettes (e.g. a color distribution in general or a set of all colors used in a scene in particular) or by using audio analysis (e.g. use the sound of a car horn as an indicator of a city scene and the sound of birds as an indicator of a forest scene), for example. The light script may specify start times, durations and colors for each of the light effects, for example.
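Purely as an illustration of such a light script, the sketch below stores each light effect with a start time, a duration and a color and writes the script as JSON; the field names and the JSON format are assumptions for this example, not a prescribed format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class LightEffect:
    start_ms: int           # start time of the effect within the media content
    duration_ms: int        # how long the effect is rendered
    rgb: tuple              # color of the effect, e.g. (255, 120, 0)
    light_ids: list = None  # optional: which light sources render the effect

def write_light_script(effects, path):
    """Write the determined light effects to a light script file (JSON here)."""
    with open(path, "w") as f:
        json.dump([asdict(e) for e in effects], f, indent=2)
```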
Said at least one processor may be configured to store an association between said light effect parameter specified by said user and said location and/or said type of location in a memory. Thus, the light effect parameters may be applied across different media items, e.g. video files. Unless the user's preferences change, this may reduce the amount of input that the user needs to provide the next time the user creates a light script. For example, if a script is created for (an episode of) a TV series, where the same locations could appear in different episodes, the script creator may specify the light effect parameters for the first episode and then these parameters may be used for all follow-up episodes. The same principle may also be applied to episodes of TV shows other than TV series, e.g. "The Voice", and to different TV shows of the same genre, e.g. National Geographic shows.
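A minimal sketch of how such an association could be stored and recalled is given below; the file name, the JSON format and the function names are assumptions for illustration only.

```python
import json
from pathlib import Path

PREFS_PATH = Path("location_light_prefs.json")   # hypothetical storage location

def store_association(location: str, parameters: dict) -> None:
    """Store the light effect parameters the user specified for a location (type)."""
    prefs = json.loads(PREFS_PATH.read_text()) if PREFS_PATH.exists() else {}
    prefs[location] = parameters
    PREFS_PATH.write_text(json.dumps(prefs, indent=2))

def recall_association(location: str):
    """Return previously stored parameters for a location (type), if any."""
    if not PREFS_PATH.exists():
        return None
    return json.loads(PREFS_PATH.read_text()).get(location)

# Example, matching the "City" parameters used later in the description:
# store_association("City", {"color_extraction": "CE1", "transition": "SMO", "use_audio": False})
```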
Said light effect parameter specified by said user for said segment may influence how one or more colors are extracted from said segment and how one or more further (different) colors are extracted from said one or more further segments, said one or more light effects being determined based on said one or more colors and said one or more further light effects being based on said one or more further (different) colors. When the segment and the further segment are different (i.e. comprise different media content, such as different video content and/or different audio content), the colors extracted may thus also be different. However, the algorithm used or aspects of the algorithm used (e.g. variables for controlling the algorithm) will then be the same for the segment and the further segment. Thus, when the segment and the further segment occur at the same (type of) spatial location, light effects will be rendered which are more similar than light effects rendered for another segment and another further segment which occur at another same (type of) spatial location. This enhances the experience of the user, who is experiencing the light effects while watching the media content, as light effects for segments at the same (type of) spatial location are more similar and the sense of the user being at that (type of) spatial location is therefore increased. Thereby the level of immersion that the user experiences is increased. For example, for outdoor scenes, algorithms that assess the brightness and color temperature of a light source might be used, while for indoor scenes, a more straightforward color extraction, e.g. a trimean of the colors in a certain analysis area, might be used.
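As an illustration of the trimean option mentioned above, the following sketch extracts a single color from an analysis area of a video frame; the use of NumPy and the choice of analysis area are assumptions made for this example.

```python
import numpy as np

def trimean_color(frame, analysis_area):
    """Extract one color from an analysis area of a frame (H x W x 3 array)
    using Tukey's trimean per channel: (Q1 + 2 * median + Q3) / 4."""
    region = frame[analysis_area]                 # e.g. the area left of the screen center
    pixels = region.reshape(-1, 3).astype(float)
    q1, median, q3 = np.percentile(pixels, [25, 50, 75], axis=0)
    return tuple(int(round(c)) for c in (q1 + 2 * median + q3) / 4.0)

# Example: analyze the left half of the frame for a light source placed left of the TV
# color = trimean_color(frame, (slice(None), slice(0, frame.shape[1] // 2)))
```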
Said one or more light effects may comprise a plurality of light effects, said one or more further light effects may comprise a plurality of further light effects and said light effect parameter specified by said user for said segment may influence a type and/or a speed of one or more transitions between said plurality of light effects and one or more further transitions between said plurality of further light effects. As a first example, a script creator may define faster transitions for light effects when a scene is taking place in a city and slower transitions for light effects when a scene is taking place in space.
As a second example, said light effect parameter specified by said user for said segment may influence whether said speed of said one or more transitions depends on a presence of fast movement in said segment and said speed of said one or more further transitions depends on a presence of fast movement in said one or more further segments. In general, having the speed of the transitions depend on the presence of fast movement works well, but individual users may have a different preference for certain locations or location types.
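A minimal sketch of such a motion-dependent transition speed is given below; the frame-difference measure and the thresholds are illustrative assumptions, not values prescribed by the invention.

```python
import numpy as np

def transition_duration_ms(prev_frame, frame, motion_dependent: bool, base_ms: int = 400) -> int:
    """Choose a transition duration towards the next light effect.

    If motion_dependent is False (e.g. the user chose smooth transitions for this
    location type), the base duration is always used; otherwise fast movement,
    estimated from the mean absolute frame difference, shortens the transition."""
    if not motion_dependent:
        return base_ms
    motion = float(np.mean(np.abs(frame.astype(int) - prev_frame.astype(int))))
    if motion > 40:            # a lot of movement between frames: quick transition
        return base_ms // 4
    if motion > 15:
        return base_ms // 2
    return base_ms
```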
Said light effect parameter specified by said user for said segment may influence whether said one or more light effects are determined from an audio portion of said segment and whether said one or more further light effects are determined from one or more further audio portions of said one or more further segments. For example, said light effect parameter specified by said user for said segment may influence whether a brightness of said one or more light effects depends on a loudness level of said audio portion and a brightness of said one or more further light effects depends on a loudness level of said one or more further audio portions. For certain locations or types of locations, varying brightness depending on the loudness level of the audio portion provides suitable light effects, although this suitability typically also depends on user preference.
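As an illustration, the sketch below maps a loudness level to a brightness only when the user has enabled audio-driven effects for the location (type); the dB range and the default brightness are assumptions chosen for this example.

```python
def brightness_from_loudness(loudness_db: float, use_audio: bool, default: float = 0.8) -> float:
    """Map the loudness of a segment's audio portion to a brightness in [0, 1].

    If the user disabled audio-driven effects for this location (type), a constant
    brightness is returned. The -40 dBFS .. 0 dBFS range is an illustrative assumption."""
    if not use_audio:
        return default
    normalized = (loudness_db + 40.0) / 40.0
    return min(1.0, max(0.1, normalized))   # keep the light at least dimly on
```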
In a second aspect of the invention, a method of determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content, comprises obtaining media content, allowing a user to specify a light effect parameter for a segment of said media content, determining a location and/or a type of location at which said segment occurs, determining one or more further segments of said media content, said one or more further segments occurring at said location and/or said type of location, and determining one or more light effects to be rendered on one or more light sources while said segment is being rendered, said one or more light effects being determined based on an analysis of said segment and said light effect parameter specified by said user for said segment.
Said method further comprises determining one or more further light effects to be rendered on said one or more light sources while said one or more further segments are being rendered, said one or more further light effects being determined based on an analysis of said one or more further segments and said light effect parameter specified by said user for said segment, and controlling said one or more light sources to render said one or more light effects and said one or more further light effects and/or outputting a light script specifying said one or more light effects and said one or more further light effects. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer-readable storage medium storing the computer program, are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
A non-transitory computer-readable storage medium stores a software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content.
The executable operations comprise obtaining media content, allowing a user to specify a light effect parameter for a segment of said media content, determining a location and/or a type of location at which said segment occurs, determining one or more further segments of said media content, said one or more further segments occurring at said location and/or said type of location, and determining one or more light effects to be rendered on one or more light sources while said segment is being rendered, said one or more light effects being determined based on an analysis of said segment and said light effect parameter specified by said user for said segment.
The executable operations further comprise determining one or more further light effects to be rendered on said one or more light sources while said one or more further segments are being rendered, said one or more further light effects being determined based on an analysis of said one or more further segments and said light effect parameter specified by said user for said segment, and controlling said one or more light sources to render said one or more light effects and said one or more further light effects and/or outputting a light script specifying said one or more light effects and said one or more further light effects. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product. As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product.
Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system." Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any
combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like, conventional procedural programming languages, such as the "C" programming language or similar programming languages, and functional programming languages such as Scala, Haskell or the like. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:
Fig. 1 is a block diagram of an embodiment of the system;
Fig. 2 is a flow diagram of a first embodiment of the method;
Fig. 3 is a flow diagram of a second embodiment of the method;
Fig. 4 shows an example of a user interface for creating a light script that depicts a light script being created for a first content item at a first moment;
Fig. 5 shows an example of the user interface of Fig. 4 that depicts a light script being created for the first content item at a second moment;
Fig. 6 shows an example of the user interface of Fig. 4 that depicts a light script being created for the first content item at a third moment;
Fig. 7 shows an example of the user interface of Fig. 4 that depicts a light script being created for the first content item at an alternative third moment;
Fig. 8 shows an example of the user interface of Fig. 4 that depicts a light script being created for the first content item at a fourth moment;
Fig. 9 shows an example of the user interface of Fig. 4 that depicts a light script being created for a second content item; and
Fig. 10 is a block diagram of an exemplary data processing system for performing the method of the invention.
Corresponding elements in the drawings are denoted by the same reference numeral.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Fig. 1 shows an embodiment of the afore-mentioned system: mobile device 1. Mobile device 1 is connected to a wireless LAN access point 23. A bridge 11 is also connected to the wireless LAN access point 23, e.g. via Ethernet. Light sources 13-17 communicate wirelessly with the bridge 11, e.g. using the Zigbee protocol, and can be controlled via the bridge 11, e.g. by the mobile device 1. The bridge 11 may be a Philips Hue bridge and the light sources 13-17 may be Philips Hue lights, for example. In an alternative embodiment, light devices are controlled without a bridge.
A TV 27 is also connected to the wireless LAN access point 23. Media content may be rendered by the mobile device 1 or by the TV 27, for example. The wireless LAN access point 23 is connected to the Internet 24. An Internet server 25 is also connected to the Internet 24. The mobile device 1 may be a mobile phone or a tablet, for example. The mobile device 1 may run the Philips Hue Sync app, for example. The mobile device 1 comprises a processor 5, a receiver 3, a transmitter 4, a memory 7, and a display 9. In the embodiment of Fig. 1, the display 9 comprises a touchscreen. The mobile device 1, the bridge 11 and the light sources 13-17 are part of lighting system 21.
In the embodiment of Fig. 1, the processor 5 is configured to use the receiver 3 to obtain media content, use the touchscreen display 9 to allow a user to specify a light effect parameter for a segment of the media content, determine a spatial location and/or a type of spatial location at which the segment occurs, and determine one or more further segments of the media content which occur at the same spatial location and/or type of spatial location.
The processor 5 is further configured to determine one or more light effects to be rendered on one or more light sources while the segment is being rendered and determine one or more further light effects to be rendered on the one or more light sources while the one or more further segments are being rendered. The one or more light effects are determined based on an analysis of the segment and the light effect parameter specified by the user for the segment. The one or more further light effects are determined based on an analysis of the one or more further segments and the light effect parameter specified by the user for the segment. Multiple light effect parameters may be specified by the user and used to determine the light effects and further light effects.
The processor 5 is further configured to use the transmitter 4 to control (via the bridge 11) one or more of the light sources 13-17 to render the one or more light effects and the one or more further light effects and/or output a light script specifying the one or more light effects and the one or more further light effects. The light script may be output to the memory 7 via an internal interface.
The invention can be used to improve the real-time creation and rendering of light effects, but it is especially beneficial for light script creation. The light script creation can be made faster and easier by allowing a script creator to define a set of variables that define how the light effect is auto-generated (e.g. the algorithm for color extraction and transition), which are then automatically applied to all parts of the movie that occur at the same location and/or location type (e.g. outdoor vs. indoor, city vs. forest). Two parts that occur at the same location also have the same location type, but two parts that occur at the same type of location do not necessarily occur at the same location.
One of the location and the location type or both the location and the location type may be determined. The location and/or location type may come in the form of metadata (e.g. segmented content with tags for location), may be derived from the content or may be based on user input. After it is determined which properties are available, an auto-indexing of the content may take place, which segments the media content based on (combinations of) these properties.
For example, a script creator can define faster transitions for light effects when events shown happen in the city and slower when events are happening in space. In this case, the colors will be extracted depending on the scene colors while the transition will be defined by the rule set by the script creator for this type of environment (e.g. space and city) and would not depend on the presence or absence of fast movements/events in the scene.
Moreover, a script creator might define different color extraction algorithms depending on the environment, e.g. outdoor scenes might use algorithms that are able to assess the brightness and color temperature of the light source, while for indoor scenes, a more straightforward color extraction (e.g. determination of an average color from colors in a certain analysis area) could be used.
Once specified, these rules can be automatically applied and potentially used in real-time cases (e.g. HueSync) as well. A (personal) database may be created, assisting the script creator for future content. Based on scene properties set in the past, the system may make an educated guess for new content, speeding up the process even more. An example hereof would be a forest scene that occurs in movie A. Here, the system learns specific scene properties and concomitant lighting properties (e.g. algorithmic and transitional), and then automatically applies these to future forest scenes in movie B.
In the embodiment of the mobile device 1 shown in Fig. 1, the mobile device 1 comprises one processor 5. In an alternative embodiment, the mobile device 1 comprises multiple processors. The processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from Qualcomm or ARM-based, or an application-specific processor. The processor 5 of the mobile device 1 may run an Android or iOS operating system, for example. The memory 7 may comprise one or more memory units. The memory 7 may comprise solid-state memory, for example. The memory 7 may be used to store an operating system, applications and application data, for example.
The receiver 3 and the transmitter 4 may use one or more wireless communication technologies such as Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 23, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in Fig. 1, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 3 and the transmitter 4 are combined into a transceiver. The display 9 may comprise an LCD or OLED panel, for example. The mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector. The invention may be implemented using a computer program running on one or more processors.
In the embodiment of Fig. 1, the system of the invention is a mobile device. In an alternative embodiment, the system of the invention is a different device, e.g. a PC or a video module, or comprises multiple devices. The video module may be a dedicated HDMI module that can be put between the TV and the device providing the HDMI input so that it can analyze the HDMI input, for example.
In the embodiment of Fig. 1, the system of the invention is used in a lighting system to illustrate that the system can be used both for creating light scripts and for real-time rendering of light effects. However, the system is not necessarily part of a lighting system. For example, the system may be a PC that is only used for creating light scripts. In this case, the light effects are typically not created for specific light sources. A light effect may be created for one or more light sources in a certain part of a room (e.g. left of the TV) or for any light source. In the embodiment of Fig. 1, the light sources in the lighting system may be used for real-time rendering of light effects during normal use of the lighting system or may be used for testing a light script. A light script may also be tested if the system of the invention is not used in a lighting system. In this case, the one or more light sources may be virtual/simulated. The bridge and communication between devices may be simulated as well. Furthermore, the rendering of the media content does not require a TV. For example, the media content may be rendered on the PC that is used for creating the light script, e.g. for testing purposes. The PC may, for example, run software like Adobe Premiere and the user might get an extra window displaying a virtual environment with lights, or an even simpler representation, to show how the effects would look if parameters are adjusted in a certain way.
A first embodiment of the method is shown in Fig. 2. The method is used for determining one or more light effects to be rendered while media content is being rendered. The one or more light effects are determined based on an analysis of the media content. A step 101 comprises obtaining media content. In the embodiment of Fig. 2, the method comprises a step 102 of analyzing the media content. Step 102 may comprise extracting color information, motion information and/or loudness information from the media content, for example. This information may be used in steps 109 and 111. In an alternative embodiment, this information is received from another device. A step 103 comprises allowing a user to specify a light effect parameter for a segment of the media content.
A step 109 comprises determining one or more light effects to be rendered on one or more light sources while the segment is being rendered. The one or more light effects are determined based on an analysis of the segment and the light effect parameter specified by the user for the segment.
A step 105 comprises determining a spatial location and/or a type of spatial location at which the segment occurs. In the embodiment of Fig. 2, step 105 comprises sub-steps 131, 133 and 135. Step 131 comprises determining the spatial location and/or the type of spatial location based on one or more user-specified spatial locations and/or spatial location types. Step 133 comprises associating the one or more user-specified spatial locations and/or spatial location types with the media content. This means that the user does not need to specify the same information the next time he creates a light script for the same media content, and it may be possible for the user to share this information with other users.
Step 135 comprises determining the spatial location and/or the type of spatial location based on features extracted from the media content and/or based on metadata associated with the media content. The metadata may comprise a spatial location and/or type of spatial location that was/were associated with the media content when step 133 was performed in a previous performance of the method.
A step 107 comprises determining one or more further segments of the media content which occur at the determined spatial location and/or type of spatial location. A step 111 comprises determining one or more further light effects to be rendered on the one or more light sources while the one or more further segments are being rendered. The one or more further light effects are determined based on an analysis of the one or more further segments and the light effect parameter specified by the user for the segment.
The method may be performed by a script creation tool or may be used to create and render light effects in real-time. The script creation tool would perform a step 115. Step 115 comprises outputting a light script specifying the one or more light effects and the one or more further light effects. On the other hand, a step 113 is performed for real-time light effect generation, e.g. by the Hue Sync app.
Step 113 comprises controlling the one or more light sources to render the one or more light effects and the one or more further light effects. In case of real-time light effects generation, the user should preferably not be required to give more than a minimal amount of input. For example, the user may only be asked and/or allowed to indicate his light effect preference(s), e.g. deviations from the default settings, for indoor scenes and for outdoor scenes. Asking a user to specify light effect preferences for more location types requires him to give more input. Segmentation may be performed if metadata is available (e.g. segments or chapters in the movie). Automatic segmentation is more challenging if it needs to be performed in real-time. The method need not include both step 113 and step 115.
A second embodiment of the method is shown in Fig. 3. In this second embodiment, the method is performed by a script creation tool and the method starts with the same step 101 as in the first embodiment of Fig. 2. Like step 102 of Fig. 2, step 152 of Fig. 3 comprises analyzing the media content. Step 152 comprises steps 105 and 107 of Fig. 2 as sub-steps. Step 152 comprises partitioning the content (video) into segments with the same location and/or location type. This can be done manually, automatically based on metadata or automatically based on video analysis, for example.
Steps 153 and 157 are performed after step 152. In step 153, segments are selected whose location and/or location type have already been associated with a light effect parameter, e.g. in step 163 during a previous performance of the method. In step 155, the light effects are determined for these segments based on an analysis of these segments, e.g. the analysis performed in step 152, and based on the light effect parameter(s) associated with the location(s) and/or location type(s) determined in step 152.
In step 157, segments are selected whose location and/or location type have not already been associated with a light effect parameter. Next, step 103 is performed for a first group of segments with the same location and/or location type. Step 103 comprises allowing the user to specify one or more light effect parameters for this selected group of segments, e.g. a color extraction algorithm and a type of light transitions. Steps 161 and 163 are performed after step 103.
Step 161 comprises steps 109 and 111 of Fig. 2 as sub-steps. Step 161 comprises determining light effects to be rendered on one or more light sources while the segments of the selected group are being rendered. Each of these light effects is determined based on the analysis of one or more frames of these segments and based on the one or more light effect parameters specified in step 103. Step 163 comprises storing an association between the one or more light effect parameters specified by the user in step 103 and the spatial location and/or the type of spatial location of the selected group.
After steps 161 and 163, a step 165 is performed. Step 165 comprises checking whether there is a further group of segments with the same location and/or location type that has not already been associated with a light effect parameter. If so, then step 103 is repeated for that group of segments. Steps 103, 161 and 163 may be performed for each group of segments with the same location and/or location type that has not already been associated with a light effect parameter. If light effects have been determined for all segments, step 115 is performed next. Step 115 comprises outputting a light script specifying the one or more light effects and the one or more further light effects.
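A compact sketch of this flow is given below. The helper functions ask_user and determine_effects stand in for steps 103 and 155/161 and are hypothetical, as is the representation of segments as dictionaries with a "location" key; the sketch only illustrates how the steps of Fig. 3 fit together.

```python
def create_light_script(segments, stored_prefs, ask_user, determine_effects):
    """Group segments by location (type), reuse stored parameters where available,
    ask the user once per remaining group, and collect all light effects."""
    groups = {}
    for segment in segments:                                   # step 152: partition by location (type)
        groups.setdefault(segment["location"], []).append(segment)

    script = []
    for location, group in groups.items():
        params = stored_prefs.get(location)                    # step 153: parameters already known?
        if params is None:
            params = ask_user(location, group[0])              # step 103: ask once per group
            stored_prefs[location] = params                    # step 163: store the association
        for segment in group:
            script.extend(determine_effects(segment, params))  # steps 155/161: analysis + parameters
    return script                                              # step 115: ready to be output as a light script
```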
In an alternative embodiment, steps 103, 161 and 163 are only performed for some of the groups of segments with the same location and/or location type that have not already been associated with a light effect parameter. Default light effect parameters may be used for the other groups of segments with the same location and/or location type that have not already been associated with a light effect parameter.
The method of Fig. 3 is illustrated with the help of examples, which are shown in Figs. 4 to 9. Fig. 4 shows an example of a user interface for creating a light script that depicts a light script being created for a first content item at a first moment. A timeline interface 40 represents eight media segments 41-48. The media segments 41-48 are identified with references "S1" to "S8" in row 51. For each media segment, a video thumbnail is displayed in row 54. These video thumbnails 61-68 may help the user select light effect parameters. At this moment, no light effect parameters have been specified yet and row 52 therefore does not represent any light effect parameter.
Fig. 4 shows the timeline interface 40 at a moment after a first media content item has been partitioned into multiple segments and a location type has been determined per segment. A location type "City" has been determined for segments 41 and 43 and a location type "Inside" has been determined for segments 42, 44 and 48. These location types are indicated in row 53. It was not possible to determine a location type for segments 45-47, but it was possible to determine that segments 45-47 occur at the same location or location type, so the location type of segments 45-47 is represented as "U1" (unknown location type 1) in row 53. Since the user only needs to specify one or more light effect parameter(s) for one segment per group of segments, a question mark is indicated in row 52 for each first segment of each group. Each group comprises segments occurring at the same location and/or location type. In an alternative embodiment, default light effect parameters may be chosen and represented in row 52.
Fig. 5 shows the timeline interface 40 a moment later than depicted in Fig. 4, after the user has specified light effect parameters for segment 41. The light effect parameters may, for example, include the following (a minimal data structure capturing such a set of parameters is sketched after this list):
how color is extracted from the scene and how it is translated to the light effect;
transition type and speed between different light effects (e.g. how smooth transitions should be); and
whether audio or any other content features are considered and applied toward light effect generation (e.g. whether the audio loudness level impacts the light effect brightness).
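A minimal sketch of a data structure holding such a set of parameters is shown below; the field names are assumptions, while the example values correspond to the indications used in row 52 of the figures.

```python
from dataclasses import dataclass

@dataclass
class LightEffectParameters:
    """One set of user-specified parameters, copied to every segment that
    occurs at the same location (type)."""
    color_extraction: str   # e.g. "CE1", "CE2" (trimean) or "CE3", as in row 52
    transition: str         # e.g. "SMO" (smooth) or "DYN" (dynamic/fast)
    use_audio: bool         # whether audio loudness influences the brightness

# Parameters as specified for the "City" segments in the example of Fig. 5:
city = LightEffectParameters(color_extraction="CE1", transition="SMO", use_audio=False)
```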
In the example of Fig. 5, for the "City" type of segments, a script creator has specified (1) a more sophisticated color extraction algorithm (indicated as "CE1" in row 52). This color extraction algorithm identifies properties of scene illumination (instead of simply grabbing color from the screen and using it), which it uses for light effects on light sources that are not in direct view of the user. The script creator has further specified (2) the use of smooth transitions even for a fast-paced scene (indicated as "SMO" in row 52) and (3) that audio input should not be used for light effect generation (indicated as "A:N" in row 52). In the example of Fig. 5, the script creator has further specified the missing location type of segments 45-47 in row 53 ("Forest").
Fig. 6 shows the timeline interface 40 a moment later than depicted in Fig. 5. After the user specified the light effect parameters for segment 41, the light script creation tool automatically copied these light effect parameters to the segment with the same location type: segment 43. Two question marks are still indicated in row 52 for the two remaining groups for which no light effect parameters have been specified.
Alternatively, the user might immediately specify the light effect parameters for all three segments with a question mark, i.e. segments 41, 42 and 45. This is depicted in Fig. 7. In the example of Fig. 7, the user has specified for the "Forest" segment 45 that (1) trimean color extraction (indicated as "CE2" in row 52) is used in order to have more saturated colors, (2) smooth transitions (indicated as "SMO" in row 52) are used (similar to the city) and (3) audio input is used to create the light effects (indicated as "A:Y" in row 52). The user has further specified for the "Inside" segment 42 that (1) colors are extracted from the dominant object(s) in the scene (indicated as "CE3" in row 52), (2) dynamic/fast transitions (indicated as "DYN" in row 52) are used, and (3) audio input should not be used for light effect generation (indicated as "A:N" in row 52).
Fig. 8 shows the timeline interface 40 a moment later than depicted in Fig. 7. After the user specified the light effect parameters for segments 41, 42 and 45, the light script creation tool automatically copied these light effect parameters to the segment(s) with the same location type. After the user has approved the light effect parameters, the light script creation tool determines the light effects and stores them in a light script. The light script creation tool also stores associations between the location types and the corresponding specified light effect parameters for future use.
Fig. 9 shows the timeline interface 80 at a moment after a second media content item has been partitioned into multiple segments and a location type has been determined per segment. The timeline interface 80 represents five media segments 81-85.
The media segments 81-85 are identified with references "S1" to "S5" in row 51. Video thumbnails 91-95 are displayed in row 54. A location type "City" has been determined for segments 81 and 82, a location type "Inside" has been determined for segment 83, and a location type "Space" has been determined for segments 84 and 85.
Since associations between the location type "City" and light effect parameters "SMO", "CE1" and "A:N" and between the location type "Inside" and light effect parameters "DYN", "CE3" and "A:N" were stored when the light script was created for the first media content item, as described in relation to Fig. 8, the user does not need to specify these light effect parameters again, although he is still able to adjust them. The user only needs to specify light effect parameters for segment 84 of location type "Space". These light effect parameters are then copied to the segment 85 occurring at the same location type. In the embodiments of Figs. 1 to 9, only the spatial location and/or type of spatial location was determined. In variants on these embodiments, the temporal location is determined instead or as well.
Fig. 10 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 2 and 3.
As shown in Fig. 10, the data processing system 500 may include at least one processor 502 coupled to memory elements 504 through a system bus 506. As such, the data processing system may store program code within memory elements 504. Further, the processor 502 may execute the program code accessed from the memory elements 504 via the system bus 506. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 500 may be implemented in the form of any system including a processor and a memory that can perform the functions described within this specification.
The memory elements 504 may include one or more physical memory devices such as, for example, local memory 508 and one or more bulk storage devices 510. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 500 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 510 during execution. The processing system 500 may also be able to use memory elements of another processing system, e.g. if the processing system 500 is part of a cloud-computing platform.
Input/output (I/O) devices depicted as an input device 512 and an output device 514 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 10 with a dashed line surrounding the input device 512 and the output device 514). An example of such a combined device is a touch sensitive display, also sometimes referred to as a "touch screen display" or simply "touch screen". In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
A network adapter 516 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 500, and a data transmitter for transmitting data from the data processing system 500 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 500.
As pictured in Fig. 10, the memory elements 504 may store an application 518. In various embodiments, the application 518 may be stored in the local memory 508, the one or more bulk storage devices 510, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 500 may further execute an operating system (not shown in Fig. 10) that can facilitate execution of the application 518. The application 518, being implemented in the form of executable program code, can be executed by the data processing system 500, e.g., by the processor 502.
Responsive to executing the application, the data processing system 500 may be configured to perform one or more operations or method steps described herein.
Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression "non-transitory computer readable storage media" comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 502 described herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.


CLAIMS:
1. A system (1) for determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content, said system comprising:
at least one input interface (3,9);
at least one output interface (4); and
at least one processor (5) configured to:
- use said at least one input interface (3) to obtain media content,
- use said at least one input interface (9) to allow a user to specify a light effect parameter (52) for a segment (41) of said media content,
- determine a spatial location and/or a type of spatial location (53) at which said segment (41) occurs,
- determine one or more further segments (43) of said media content, said one or more further segments (43) occurring at said spatial location and/or said type of spatial location (53),
- determine one or more light effects to be rendered on one or more light sources (13-17) while said segment (41) is being rendered, said one or more light effects being determined based on an analysis of said segment (41) and said light effect parameter (52) specified by said user for said segment (41),
- determine one or more further light effects to be rendered on said one or more light sources (13-17) while said one or more further segments (43) are being rendered, said one or more further light effects being determined based on an analysis of said one or more further segments (43) and said light effect parameter (52) specified by said user for said segment (41), and
- use said at least one output interface (4) to control said one or more light sources (13-17) to render said one or more light effects and said one or more further light effects and/or output a light script specifying said one or more light effects and said one or more further light effects,
wherein said light effect parameter (52) specified by said user for said segment influences how one or more colors are extracted from said segment and how one or more further colors are extracted from said one or more further segments, said one or more light effects being determined based on said one or more colors and said one or more further light effects being based on said one or more further colors.
2. A system (1) as claimed in claim 1, wherein said at least one processor (5) is configured to determine said spatial location and/or said type of spatial location (53) based on one or more user-specified spatial locations and/or spatial location types.
3. A system (1) as claimed in claim 1 or 2, wherein said at least one processor (5) is configured to determine said spatial location and/or said type of spatial location (53) based on features extracted from said media content and/or based on metadata associated with said media content.
4. A system (1) as claimed in claim 1 or 2, wherein said at least one processor (5) is configured to store an association between said light effect parameter (52) specified by said user and said spatial location and/or said type of spatial location (53) in a memory.
5. A system (1) as claimed in claim 1 or 2, wherein said one or more light effects comprise a plurality of light effects, said one or more further light effects comprise a plurality of further light effects and said light effect parameter (52) specified by said user for said segment influences a type and/or a speed of one or more transitions between said plurality of light effects and one or more further transitions between said plurality of further light effects.
6. A system (1) as claimed in claim 5, wherein said light effect parameter (52) specified by said user for said segment influences whether said speed of said one or more transitions depends on a presence of fast movement in said segment and said speed of said one or more further transitions depends on a presence of fast movement in said one or more further segments.
7. A system (1) as claimed in claim 1 or 2, wherein said light effect parameter (52) specified by said user for said segment influences whether said one or more light effects are determined from an audio portion of said segment and whether said one or more further light effects are determined from one or more further audio portions of said one or more further segments.
8. A system (1) as claimed in claim 7, wherein said light effect parameter (52) specified by said user for said segment influences whether a brightness of said one or more light effects depends on a loudness level of said audio portion and a brightness of said one or more further light effects depends on a loudness level of said one or more further audio portions.
9. A lighting system (21) comprising the system (1) of any one of claims 1 to 8 and one or more light sources (13-17).
10. A method of determining one or more light effects to be rendered while media content is being rendered, said one or more light effects being determined based on an analysis of said media content, said method comprising:
obtaining (101) media content;
allowing (103) a user to specify a light effect parameter (52) for a segment (41) of said media content;
determining (105) a spatial location and/or a type of spatial location at which said segment occurs;
determining (107) one or more further segments of said media content, said one or more further segments occurring at said spatial location and/or said type of spatial location;
determining (109) one or more light effects to be rendered on one or more light sources while said segment is being rendered, said one or more light effects being determined based on an analysis of said segment and said light effect parameter specified by said user for said segment;
determining (111) one or more further light effects to be rendered on said one or more light sources while said one or more further segments are being rendered, said one or more further light effects being determined based on an analysis of said one or more further segments and said light effect parameter specified by said user for said segment; and
controlling (113) said one or more light sources to render said one or more light effects and said one or more further light effects and/or outputting (115) a light script specifying said one or more light effects and said one or more further light effects,
wherein said light effect parameter (52) specified by said user for said segment influences how one or more colors are extracted from said segment and how one or more further colors are extracted from said one or more further segments, said one or more light effects being determined based on said one or more colors and said one or more further light effects being based on said one or more further colors.
11. A computer program or suite of computer programs comprising at least one software code portion or a computer program product storing at least one software code portion, the software code portion, when run on a computer system, being configured to perform the method of claim 10.
PCT/EP2020/050245 2019-01-10 2020-01-08 Determining a light effect based on a light effect parameter specified by a user for other content taking place at a similar location WO2020144196A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP19151146.8 2019-01-10
EP19151146 2019-01-10

Publications (1)

Publication Number Publication Date
WO2020144196A1 true WO2020144196A1 (en) 2020-07-16

Family

ID=65033363

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/050245 WO2020144196A1 (en) 2019-01-10 2020-01-08 Determining a light effect based on a light effect parameter specified by a user for other content taking place at a similar location

Country Status (1)

Country Link
WO (1) WO2020144196A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022144171A1 (en) * 2021-01-04 2022-07-07 Signify Holding B.V. Adjusting light effects based on adjustments made by users of other systems

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130147395A1 (en) 2011-12-07 2013-06-13 Comcast Cable Communications, Llc Dynamic Ambient Lighting
WO2017182365A1 (en) 2016-04-22 2017-10-26 Philips Lighting Holding B.V. Controlling a lighting system
US20170347427A1 (en) * 2014-11-20 2017-11-30 Ambx Uk Limited Light control

Similar Documents

Publication Publication Date Title
US9786326B2 (en) Method and device of playing multimedia and medium
CN107801096B (en) Video playing control method and device, terminal equipment and storage medium
US9524587B2 (en) Adapting content to augmented reality virtual objects
CN112913330B (en) Method for selecting color extraction from video content to produce light effects
US11543729B2 (en) Systems and methods to transform events and/or mood associated with playing media into lighting effects
EP3804471B1 (en) Selecting one or more light effects in dependence on a variation in delay
WO2018050021A1 (en) Virtual reality scene adjustment method and apparatus, and storage medium
WO2017185584A1 (en) Method and device for playback optimization
US20210243870A1 (en) Rendering a dynamic light scene based on one or more light settings
CN112913331B (en) Determining light effects based on video and audio information according to video and audio weights
CN114339076A (en) Video shooting method and device, electronic equipment and storage medium
US20140104497A1 (en) Video files including ambient light effects
WO2020144196A1 (en) Determining a light effect based on a light effect parameter specified by a user for other content taking place at a similar location
US20220319015A1 (en) Selecting an image analysis area based on a comparison of dynamicity levels
CN111096078A (en) Method and system for creating light script of video
EP3909046B1 (en) Determining a light effect based on a degree of speech in media content
WO2023125393A1 (en) Method and device for controlling smart home appliance, and mobile terminal
WO2019228969A1 (en) Displaying a virtual dynamic light effect
US20230269853A1 (en) Allocating control of a lighting device in an entertainment mode
KR102585777B1 (en) Electronic apparatus and controlling method thereof
CN110945970A (en) Attention dependent distraction storing preferences for light states of light sources
WO2023131498A1 (en) Extracting a color palette from music video for generating light effects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20700046

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20700046

Country of ref document: EP

Kind code of ref document: A1