WO2023110483A1 - Selecting a luminaire model by analyzing a drawing of a luminaire and an image of a room - Google Patents


Info

Publication number
WO2023110483A1
Authority
WO
WIPO (PCT)
Prior art keywords: luminaire, representation, real, drawn, sizes
Application number
PCT/EP2022/084359
Other languages
French (fr)
Inventor
Albertus Adrianus SMITS
Jordy Antonius Johannes LANGEN
Original Assignee
Signify Holding B.V.
Application filed by Signify Holding B.V. filed Critical Signify Holding B.V.
Publication of WO2023110483A1 publication Critical patent/WO2023110483A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36Indoor scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5854Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0621Item configuration or customization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Item investigation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant

Definitions

  • the invention relates to a system for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the invention further relates to a method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the invention also relates to a computer program product enabling a computer system to perform such a method.
  • filters are normally used to simplify the selection of the right product, as keyword searches would normally only be useful if a user searches for a specific product with a known identifier.
  • Image analysis may be used to help the user find these products faster.
  • US 2019/0012716 A1 discloses an information processing device which acquires an image specified by a user, extracts a feature value of the specified image, acquires category information corresponding to an item for sale represented by the specified image, e.g. a piece of clothing, searches for images similar to the specified image based on the extracted feature value, and causes at least one of the found images to be displayed as a search result.
  • a system for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user comprises at least one input interface, at least one output interface, and at least one processor configured to receive, via said at least one input interface, said drawing made by said user, said drawing representing a luminaire drawn over a representation of a room, analyze an image of said representation of said room to determine one or more representation sizes of one or more elements in said image, determine one or more real-world sizes of said one or more elements, and analyze said drawing to determine a representation size of said drawn luminaire and a shape of said drawn luminaire.
  • the at least one processor is further configured to determine a real-world size of said drawn luminaire based on said representation size of said drawn luminaire and a ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements, compare said shape and said real-world size of said drawn luminaire with shapes and sizes of said plurality of luminaire models, select said one or more luminaire models from said plurality of luminaire models based on results of said comparison, and output, via said at least one output interface, one or more identifiers of said one or more selected luminaire models.
  • This system uses image analysis to not only determine the shape of the luminaire model the user is looking for but also the size of the luminaire model the user is looking for. Since the luminaire has been drawn over a representation of a room, e.g. the user’s own room, the size of the luminaire model the user is looking for may be determined by also analyzing the image of this representation and combining the results of the image analysis of the drawing and the image analysis of the room representation. This allows the user to search for luminaire models more conveniently.
  • the identifiers of the one or more selected luminaire models may be displayed to the user. If multiple luminaire models have been selected/found, they may be ordered based on a measure of relevance. Alternatively or additionally, the identifiers of the one or more selected luminaire models may be used to commission a new luminaire, i.e. to add the new luminaire to the user’s lighting system and configure it as soon as it has been physically installed, if the new luminaire is one of the one or more selected models. This makes the commissioning easier.
  • the user may be able to order the selected luminaire and cause the identifier to be provided to the lighting system such that the lighting system will already create a ‘virtual lamp’ which can be added to rooms, scenes, and then once the luminaire arrives, the lighting system replaces the virtual lamp and the configuration steps do not need to be performed again.
  • the identifier and the associated real-world position may be transmitted to the lighting system and the lighting system may then associate this position or a spatial area encompassing this position, e.g. a room, with the luminaire.
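The ‘virtual lamp’ workflow described in the preceding bullets can be illustrated with a small sketch. This is a hypothetical illustration only, not the actual lighting-system implementation; the class name, field names, and identifiers are all invented.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VirtualLamp:
    """Placeholder for a selected luminaire model that has not yet arrived."""
    model_id: str
    position: tuple          # (x, y, z) real-world position in metres
    room: str
    scenes: list = field(default_factory=list)
    physical_id: Optional[str] = None   # set once the luminaire is installed

    def bind(self, physical_id: str) -> None:
        # Replace the virtual lamp by the physical one; the room and
        # scene configuration is kept, so no steps need repeating.
        self.physical_id = physical_id

lamp = VirtualLamp("LM-1234", (2.0, 2.5, 3.1), room="Living room")
lamp.scenes.append("Relax")   # configure before the luminaire arrives
lamp.bind("hue-00af")         # physical luminaire installed later
```

Because the configuration lives on the placeholder, binding the physical device is a single step and the earlier room/scene setup survives unchanged.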
  • the identifiers may comprise numbers, text and/or images, for example.
  • the representation sizes of the one or more elements are the sizes of the one or more elements in the image and depend on the size of the image.
  • the representation size of the drawn luminaire is the size of the luminaire in the drawing (which may also be an image) and depends on the size of the drawing.
  • the representation sizes may be expressed in pixels, e.g. 100 pixels long and 50 pixels high.
  • the user will draw a 2D drawing. In this case, it is not necessary to determine the depth/width of the luminaire.
  • 3D drawings might be supported as well.
  • the real-world sizes may be expressed in centimeters or inches, for example.
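The pixel-to-centimetre conversion described above can be sketched as follows. The function and parameter names are hypothetical, and a single reference element of known real-world size is assumed for simplicity.

```python
def real_world_size_cm(rep_size_px: float, ref_rep_px: float,
                       ref_real_cm: float) -> float:
    """Scale a pixel size by the centimetres-per-pixel ratio derived
    from one reference element of known real-world size."""
    ratio = ref_real_cm / ref_rep_px   # centimetres per pixel
    return rep_size_px * ratio

# A door known to be 200 cm tall spans 400 pixels in the image
# (0.5 cm per pixel); a luminaire drawn 100 px long and 50 px high
# is then estimated at 50 cm by 25 cm.
length_cm = real_world_size_cm(100, 400, 200)
height_cm = real_world_size_cm(50, 400, 200)
```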
  • Said one or more elements may comprise a surface and said at least one processor may be configured to determine a location on said surface at which said drawn luminaire is connected to said surface and determine said ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements based on said location on said surface.
  • Said surface may comprise a wall, a ceiling, or a floor, for example.
  • Said one or more elements may further comprise a person and/or an object, for example.
  • Elements in the room that are closer to the camera normally appear larger in the representation of the room. It is therefore beneficial to take the z coordinate, i.e. depth, of the real-world position of the drawn luminaire and of the real-world positions of the one or more elements into account when determining the ratio.
  • the location on the surface at which the drawn luminaire is connected to the surface may be used to determine the z coordinate of the real-world position of the drawn luminaire with respect to the room. This z coordinate may also be used for determining the x (horizontal) coordinate and y (vertical) coordinate from the image.
  • Said at least one processor may be configured to receive a signal indicative of at least one user-specified real-world size of at least one of said one or more elements and determine said one or more real-world sizes of said one or more elements based on said at least one user-specified real-world size of said at least one element.
  • the user may specify that a certain closet has certain dimensions.
  • the real-world size(s) of at least one of the one or more elements may be determined automatically based on the kind of element. For example, the edges of the ceiling and the walls may be used as reference and assigned an average height of 2.5 m. Also, a person may be detected and assigned an average height of 1.8 m, for example. Dimensions per kind of element may be determined based on geo location and country standards/averages.
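The kind-based defaults described above might look like this in code. The table of average sizes and the format of the detection output are assumptions for illustration only; as noted, in practice the defaults could vary with geo location and country standards.

```python
# Assumed average real-world heights per detected element kind (metres).
DEFAULT_HEIGHTS_M = {
    "ceiling_edge": 2.5,
    "person": 1.8,
    "door": 2.0,
}

def reference_sizes(detected):
    """Map detected elements, given as (kind, height_in_pixels) pairs, to
    (representation size, assumed real-world size) reference pairs."""
    refs = []
    for kind, height_px in detected:
        if kind in DEFAULT_HEIGHTS_M:
            refs.append((height_px, DEFAULT_HEIGHTS_M[kind]))
    return refs

# Only kinds with a known default contribute a reference pair;
# the unrecognized "plant" element is skipped.
refs = reference_sizes([("person", 360), ("plant", 120), ("door", 400)])
```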
  • Said at least one processor may be configured to analyze said image to determine perspective features and determine said one or more real-world sizes of said one or more elements based on said one or more representation sizes of said one or more elements and said perspective features. For example, the real-world sizes of elements for which the user has not specified a real-world size and of which the real-world size could not be determined automatically based on the kind of element may be determined by using perspective rules.
  • Said at least one processor may be configured to analyze said image to determine perspective features and determine said ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements based on said perspective features. For example, the z coordinate (i.e. depth) of the one or more elements may be determined based on the perspective features. A ratio may be determined for each element between the representation size and the real-world size of the element and this ratio may then be adjusted based on the difference between the z coordinate of the real-world position of the drawn luminaire and the z coordinate of the real-world position of the element. An average ratio may then be calculated based on these ratios.
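The depth-adjusted averaging described above can be sketched under a simple pinhole-camera assumption, where apparent size scales inversely with depth, so each element's centimetres-per-pixel ratio is transferred to the luminaire's depth before averaging. All names and values are hypothetical.

```python
def ratio_at_luminaire(elements, z_luminaire: float) -> float:
    """Average centimetres-per-pixel ratio at the drawn luminaire's depth.

    elements: one (rep_size_px, real_size_cm, z_m) tuple per reference
    element. Under a pinhole model apparent size scales as 1/z, so each
    element's ratio is rescaled by the factor z_luminaire / z_m.
    """
    ratios = [(real_cm / rep_px) * (z_luminaire / z_m)
              for rep_px, real_cm, z_m in elements]
    return sum(ratios) / len(ratios)

# Two reference elements at depths 2 m and 4 m; the luminaire hangs at 3 m.
r = ratio_at_luminaire([(400, 200, 2.0), (150, 180, 4.0)], z_luminaire=3.0)
```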
  • Said at least one processor may be configured to determine a real-world position of said drawn luminaire based on a representation position of said drawn luminaire with respect to said representation of said room.
  • This real-world position may be used for determining the ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, for example.
  • the type of the luminaire model that the user is looking for may be determined based on this real-world position and the one or more luminaire models may be selected from the plurality of luminaire models based on this type.
  • the type may indicate whether the luminaire model is a ceiling lamp, a floor lamp, or a desk lamp, for example.
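A minimal sketch of deriving the luminaire type from the vertical coordinate of the determined real-world position; the thresholds and mounting-height assumptions are invented for illustration.

```python
def luminaire_type(y_m: float, ceiling_m: float = 2.5,
                   desk_m: float = 0.75, tol: float = 0.2) -> str:
    """Guess the luminaire type from the vertical (y) coordinate of its
    real-world position, in metres (hypothetical thresholds)."""
    if y_m >= ceiling_m - tol:
        return "ceiling lamp"       # drawn at or near the ceiling
    if abs(y_m - desk_m) <= tol:
        return "desk lamp"          # drawn at roughly desk height
    return "floor lamp"

kind = luminaire_type(2.4)          # drawn just below the ceiling
```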
  • Said image may be an image selected or provided by the user and said at least one processor may be configured to display said image.
  • the user may have previously captured an image of their room and stored it on their device or on a server.
  • Said at least one processor may be configured to display images captured by a camera in real-time, said captured images comprising said image. This enables the system to support augmented reality.
  • the user may be able to draw on top of an image displayed on a touch screen display, for example.
  • Said at least one processor may be configured to receive a signal indicative of a user request to resize said drawing of said luminaire and analyze said resized drawing to determine a new representation size of said drawn luminaire. For example, if the drawing is made smaller, the selected/found luminaire model(s) will be smaller.
  • said at least one processor may be configured to receive a signal indicative of a user request to resize said image and determine one or more new representation sizes of said one or more elements in said resized image. For example, if the image is made smaller, the selected/found luminaire model(s) will be larger.
  • a method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user comprises receiving said drawing made by said user, said drawing representing a luminaire drawn over a representation of a room, analyzing an image of said representation of said room to determine one or more representation sizes of one or more elements in said image, determining one or more real-world sizes of said one or more elements, and analyzing said drawing to determine a representation size of said drawn luminaire and a shape of said drawn luminaire.
  • Said method further comprises determining a real-world size of said drawn luminaire based on said representation size of said drawn luminaire and a ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements, comparing said shape and said real-world size of said drawn luminaire with shapes and sizes of said plurality of luminaire models, selecting said one or more luminaire models from said plurality of luminaire models based on results of said comparison, and outputting one or more identifiers of said one or more selected luminaire models.
  • Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
  • a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage medium storing the computer program are provided.
  • a computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
  • a non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the executable operations comprise receiving said drawing made by said user, said drawing representing a luminaire drawn over a representation of a room, obtaining an image of said representation of said room, analyzing said image of said representation of said room to determine one or more representation sizes of one or more elements in said image, determining one or more real-world sizes of said one or more elements, and analyzing said drawing to determine a representation size of said drawn luminaire and a shape of said drawn luminaire.
  • the image may be captured by a camera and/or uploaded by a user.
  • the executable operations further comprise determining a real-world size of said drawn luminaire based on said representation size of said drawn luminaire and a ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements, comparing said shape and said real-world size of said drawn luminaire with shapes and sizes of said plurality of luminaire models, selecting said one or more luminaire models from said plurality of luminaire models based on results of said comparison, and outputting one or more identifiers of said one or more selected luminaire models.
  • Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
  • aspects of the present invention may be embodied as a device, a method or a computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module” or “system.” Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • examples of a computer readable storage medium include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Fig. 1 is a block diagram of a first embodiment of the system
  • Fig. 2 is a block diagram of a second embodiment of the system
  • Fig. 3 is a flow diagram of a first embodiment of the method
  • Fig. 4 shows an example of a drawing of a luminaire superimposed over an image representing a room
  • Fig. 5 is a flow diagram of a second embodiment of the method
  • Fig. 6 is a flow diagram of a third embodiment of the method.
  • Fig. 7 is a flow diagram of a fourth embodiment of the method.
  • Fig. 8 shows the image of Fig. 4 being resized
  • Fig. 9 shows the drawing of Fig. 4 being resized
  • Fig. 10 is a flow diagram of a fifth embodiment of the method.
  • Fig. 11 is a block diagram of an exemplary data processing system for performing the method of the invention.
  • Fig. 1 shows a first embodiment of the system for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the system is a mobile device 1.
  • a lighting system comprises a bridge 16 and lighting devices 31-33.
  • the bridge 16 may be a Philips Hue bridge, for example. Lighting devices 31-33 can be controlled via bridge 16, e.g. using Zigbee technology.
  • the bridge 16 is connected to a wireless LAN access point 17, e.g. via Wi-Fi or Ethernet.
  • the wireless LAN access point 17 is connected to the Internet 11.
  • Mobile device 1 is able to control lighting devices 31-33 via the wireless LAN access point 17 and the bridge 16.
  • the mobile device 1 comprises a receiver 3, a transmitter 4, a processor 5, memory 7, a camera 8, and a touchscreen display 9.
  • the processor 5 is configured to receive, via the touchscreen display 9, a drawing made by the user.
  • the drawing represents a luminaire drawn over a representation of a room.
  • the processor 5 is further configured to analyze an image of the representation of the room to determine one or more representation sizes of one or more elements in the image, determine one or more real-world sizes of the one or more elements, and analyze the drawing to determine a representation size of the drawn luminaire and a shape of the drawn luminaire.
  • the representation size and the shape are detected by using computer vision. Small elements may be removed from the image using filters.
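The size-and-shape extraction with small-element filtering might be sketched as follows in pure Python. A real implementation would use a computer-vision library; the binary-mask input format, the pixel threshold, and the crude fill-ratio shape descriptor are assumptions for illustration.

```python
def drawn_size_and_shape(mask, min_pixels=20):
    """Representation size and a crude shape descriptor for a binary
    mask of the drawn strokes; masks with fewer than min_pixels drawn
    pixels are treated as noise and filtered out."""
    pixels = [(x, y) for y, row in enumerate(mask)
                     for x, v in enumerate(row) if v]
    if len(pixels) < min_pixels:
        return None                      # filter out small elements
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    fill = len(pixels) / (width * height)  # 1.0 for a solid rectangle
    return {"width_px": width, "height_px": height, "fill": fill}

# A solid 60 x 20 px rectangle drawn on a 100 x 100 canvas.
mask = [[20 <= x < 80 and 40 <= y < 60 for x in range(100)]
        for y in range(100)]
d = drawn_size_and_shape(mask)
```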
  • the user may be allowed to upload the image of the representation of the room or this image may be captured with camera 8.
  • the processor 5 is further configured to determine a real-world size of the drawn luminaire based on the representation size of the drawn luminaire and a ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, compare the shape and the real-world size of the drawn luminaire with shapes and sizes of the plurality of luminaire models, select the one or more luminaire models from the plurality of luminaire models based on results of the comparison, and output one or more identifiers of the one or more selected luminaire models.
  • the plurality of luminaire models may be included in a catalogue stored in the memory 7, for example.
  • the plurality of luminaire models may be obtained from an Internet server 13 before they are stored in the memory 7, for example.
  • the luminaire models are characterized by shape and size.
  • the luminaire models and their shape and size may be stored in a database for fast retrieval and comparison.
  • a graphical representation is also stored to show the luminaire model to the user, e.g. superimposed over the representation of their room.
  • the processor 5 is configured to display a graphical input area on the display 9 where the user can upload the image of their room. This image is then displayed in this input area. On top of this room, the user can draw the luminaire they are looking for. The size of the drawn luminaire is determined based on elements in the uploaded room, as described above.
  • the drawn results are compared with the shape and size of the luminaire models in the catalogue.
  • Computer vision tools are used for the comparison.
  • the identifiers of the one or more selected luminaire models may be displayed to the user via the touch screen display 9 and/or may be transmitted, via the transmitter 4, to the bridge 16 to commission a new luminaire as soon as the new luminaire has been added if the new luminaire is one of the one or more selected models. This makes the commissioning easier.
  • a determined real-world position of the drawn luminaire or a lighting group determined based on this real-world position is transmitted to the bridge 16 to commission the new luminaire.
  • the match level between the drawn luminaire and the luminaire models in the catalogue may be used to order the results.
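The match-level ordering could be sketched as follows. The scoring formula, weights, shape labels, and catalogue entries are invented for illustration; an actual comparison would use computer-vision shape similarity rather than label equality.

```python
def match_level(drawn, model, size_weight=0.5):
    """Combine a size similarity in [0, 1] with a (here binary) shape
    similarity into one score used to order the search results."""
    size_sim = (min(drawn["size_cm"], model["size_cm"])
                / max(drawn["size_cm"], model["size_cm"]))
    shape_sim = 1.0 if drawn["shape"] == model["shape"] else 0.0
    return size_weight * size_sim + (1 - size_weight) * shape_sim

catalogue = [
    {"id": "A", "shape": "sphere", "size_cm": 30},
    {"id": "B", "shape": "sphere", "size_cm": 58},
    {"id": "C", "shape": "cone",   "size_cm": 60},
]
drawn = {"shape": "sphere", "size_cm": 60}

# Best matches first: same shape and closest size wins.
ranked = sorted(catalogue, key=lambda m: match_level(drawn, m), reverse=True)
```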
  • the selected luminaire model(s) may be shown as overlay on the uploaded image of the room. In case the user is not yet satisfied, the user may be able to resize their drawing and the new results may then be shown using the same algorithm.
  • the mobile device 1 comprises one processor 5.
  • the mobile device 1 comprises multiple processors.
  • the processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from ARM or Qualcomm, or an application-specific processor.
  • the processor 5 of the mobile device 1 may run an Android or iOS operating system for example.
  • the display 9 may comprise an LCD or OLED display panel, for example.
  • the memory 7 may comprise one or more memory units.
  • the memory 7 may comprise solid state memory, for example.
  • the receiver 3 and the transmitter 4 may use one or more wireless communication technologies, e.g. Wi-Fi (IEEE 802.11) for communicating with the wireless LAN access point 17, for example.
  • multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter.
  • a separate receiver and a separate transmitter are used.
  • the receiver 3 and the transmitter 4 are combined into a transceiver.
  • the mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector.
  • the invention may be implemented using a computer program running on one or more processors.
  • the lighting devices 31-33 are controlled by the mobile device 1 via the bridge 16.
  • one or more of the lighting devices 31-33 are controlled by the mobile device 1 without a bridge, e.g. via the Internet server 13 and the wireless LAN access point 17 or directly via Bluetooth.
  • the lighting devices 31-33 may be capable of receiving and transmitting Wi-Fi signals, for example. Commissioning information may be stored on another device than the bridge 16.
  • Fig. 2 shows a second embodiment of the system for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the system is a computer 21.
  • the computer 21 is connected to the Internet 11 and acts as a server.
  • the computer 21 comprises a receiver 23, a transmitter 24, a processor 25, and storage means 27.
  • the processor 25 is configured to receive, via receiver 23, a drawing made by the user.
  • the drawing represents a luminaire drawn over a representation of a room.
  • the processor 25 is further configured to analyze an image of the representation of the room to determine one or more representation sizes of one or more elements in the image, determine one or more real-world sizes of the one or more elements, and analyze the drawing to determine a representation size of the drawn luminaire and a shape of the drawn luminaire.
  • the processor 25 is further configured to determine a real-world size of the drawn luminaire based on the representation size of the drawn luminaire and a ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, compare the shape and the real-world size of the drawn luminaire with shapes and sizes of the plurality of luminaire models, select the one or more luminaire models from the plurality of luminaire models based on results of the comparison, and output one or more identifiers of the one or more selected luminaire models.
  • the drawing and the image may be received from a mobile device 41, for example.
  • the plurality of luminaire models may be stored on storage means 27, for example.
  • the identifiers of the one or more selected luminaire models may be transmitted, via the transmitter 24, to the mobile device 41 to be displayed on a display of the mobile device 41 and/or may be transmitted, via the transmitter 24, to the bridge 16 to commission a new luminaire as soon as the new luminaire has been added if the new luminaire is one of the one or more selected models.
  • the computer 21 comprises one processor 25.
  • the computer 21 comprises multiple processors.
  • the processor 25 of the computer 21 may be a general-purpose processor, e.g. from Intel or AMD, or an application-specific processor.
  • the processor 25 of the computer 21 may run a Windows or Unix-based operating system for example.
  • the storage means 27 may comprise one or more memory units.
  • the storage means 27 may comprise one or more hard disks and/or solid-state memory, for example.
  • the storage means 27 may be used to store an operating system, applications and application data, for example.
  • the receiver 23 and the transmitter 24 may use one or more wired and/or wireless communication technologies such as Ethernet and/or Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 17, for example.
  • multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter.
  • a separate receiver and a separate transmitter are used.
  • the receiver 23 and the transmitter 24 are combined into a transceiver.
  • the computer 21 may comprise other components typical for a computer such as a power connector.
  • the invention may be implemented using a computer program running on one or more processors.
  • the computer 21 transmits data to the lighting devices 31-33 via the bridge 16. In an alternative embodiment, the computer 21 transmits data to the lighting devices 31-33 without a bridge. Commissioning information may be stored on a device other than the bridge 16.
  • the system comprises a single device.
  • the system comprises a plurality of devices.
  • the functionality described above may be partly implemented in the mobile device 1 of Fig. 1 and partly in the computer 21 of Fig. 2.
  • Fig. 3 shows a first embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example.
  • a step 101 comprises receiving a drawing made by a user.
  • the drawing represents a luminaire drawn over a representation of a room.
  • the drawing may, for example, be drawn by the user over an image displayed on a display, e.g. by using a touch screen.
  • the image may be captured by a camera or selected from a locally or remotely stored collection of images by the user, for example.
  • a step 107 comprises analyzing the drawing received in step 101 to determine a representation size of the drawn luminaire and a shape of the drawn luminaire.
  • a step 103 comprises analyzing an image of the representation of the room to determine one or more representation sizes of one or more elements in the image. If the user has drawn the luminaire over an image displayed on a display, this image may be analyzed. If the user uses augmented reality glasses, an image captured by a camera corresponding to the user’s view while the user was drawing the luminaire in the air may be analyzed. Image analysis techniques to determine objects (elements) in a scene and their respective representation size are known in the field of image analysis and computer vision. The one or more elements may comprise a surface and/or a person and/or an object, for example.
  • a step 105 comprises determining one or more real-world sizes of the one or more elements of which the representation sizes are determined in step 103. Steps 101 and 107 and steps 103 and 105 may be performed (partly) in parallel or in sequence.
  • a step 109 is performed after steps 105 and 107 have been performed.
  • Step 109 comprises determining a ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, as determined in steps 103 and 105, and determining a real-world size of the drawn luminaire based on the representation size of the drawn luminaire, as determined in step 107, and this ratio.
  • a ratio may be determined for each element between the representation size and the real-world size of the element and an average ratio may be calculated based on these ratios.
  • a step 111 comprises comparing the shape of the drawn luminaire, as determined in step 107, and the real-world size of the drawn luminaire, as determined in step 109, with shapes and sizes of the plurality of luminaire models.
  • a step 113 comprises selecting the one or more luminaire models from the plurality of luminaire models based on results of the comparison in step 111.
  • a step 115 comprises outputting one or more identifiers of the one or more luminaire models selected in step 113.
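The size-estimation and matching steps above (steps 109 to 115) can be sketched as follows. The data structures, function names, and the 10 cm tolerance are illustrative assumptions, not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass
class LuminaireModel:
    identifier: str    # e.g. "LM245"
    shape: str         # e.g. "sphere"
    size_cm: float     # largest real-world dimension in centimeters

def real_world_size(luminaire_px, element_px, element_cm):
    """Step 109: average the cm-per-pixel ratio over the reference
    elements, then scale the drawn luminaire's pixel size by it."""
    ratios = [cm / px for px, cm in zip(element_px, element_cm)]
    avg_ratio = sum(ratios) / len(ratios)          # cm per pixel
    return luminaire_px * avg_ratio

def select_models(shape, size_cm, catalog, tolerance_cm=10.0):
    """Steps 111-115: keep models with a matching shape whose size lies
    within a tolerance of the estimate, closest match first."""
    matches = [m for m in catalog
               if m.shape == shape and abs(m.size_cm - size_cm) <= tolerance_cm]
    return sorted(matches, key=lambda m: abs(m.size_cm - size_cm))
```

For example, a ceiling spanning 500 pixels and known to be 250 cm wide, together with a person of 360 pixels and 180 cm, gives an average ratio of 0.5 cm per pixel, so an 80-pixel drawing maps to a 40 cm luminaire.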
  • Fig. 4 shows an example of a drawing 61 of a luminaire superimposed over an image 51 representing a room.
  • the image 51 includes multiple elements, including a ceiling 55, a person 57, and a closet 59.
  • a luminaire model with identifier 67 named “LM245” is found.
  • the identifier 67 and a visual representation 69 of the luminaire are shown in the results field shown in Fig. 4.
  • the drawing 61 is replaced by the visual representation 69 when the results are shown.
  • a selected luminaire model may be shown as overlay on the representation of the room as if a luminaire of this model is already present.
  • Fig. 5 shows a second embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the embodiment of Fig. 5 is an extension of the first embodiment of Fig. 3.
  • a step 131 is performed after steps 105 and 107 have been performed and before step 109 is performed and step 109 of Fig. 3 is implemented by a step 133.
  • Step 131 comprises determining a location on a surface at which the drawn luminaire is connected to the surface.
  • This surface is one of the one or more elements of which the representation size(s) are determined in step 103 and of which the real-world size(s) have been determined in step 105.
  • the surface may comprise a wall, a ceiling (e.g. ceiling 55 of Fig. 4), or a floor, for example.
  • Step 133 comprises determining the ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, as determined in steps 103 and 105, based on the location on the surface, as determined in step 131.
  • the ratio determined in step 133 depends on the z coordinate (i.e. depth) of the real-world position of the drawn luminaire.
  • a ratio may be determined for each element between the representation size and the real-world size of the element and this ratio may then be adjusted based on the difference between the z coordinate of the real-world position of the drawn luminaire and the z coordinate of the real-world position of the element. An average ratio may then be calculated based on these ratios.
  • Step 133 further comprises determining a real-world size of the drawn luminaire based on the representation size of the drawn luminaire, as determined in step 107, and this ratio.
  • Step 133 optionally comprises determining the real-world position of the drawn luminaire based on the representation position of the drawn luminaire with respect to the representation of the room and the location on the surface at which the drawn luminaire is connected to the surface.
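The depth-dependent ratio of step 133 can be sketched under a pinhole-camera assumption (the cm-per-pixel ratio grows linearly with depth). The function name and tuple layout are illustrative:

```python
def depth_adjusted_ratio(elements, z_luminaire_cm):
    """Step 133 sketch: each element's own cm-per-pixel ratio is rescaled
    from the element's depth to the drawn luminaire's depth before
    averaging. `elements` holds (rep_size_px, real_size_cm, z_cm) tuples;
    all names are illustrative."""
    adjusted = [(cm / px) * (z_luminaire_cm / z_cm)
                for px, cm, z_cm in elements]
    # cm per pixel at the luminaire's depth
    return sum(adjusted) / len(adjusted)
```

For instance, an element of 100 pixels and 50 cm at 2 m depth gives 0.5 cm per pixel at that depth; for a luminaire drawn at 4 m depth the adjusted ratio doubles to 1.0 cm per pixel.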
  • Fig. 6 shows a third embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the embodiment of Fig. 6 is an extension of the second embodiment of Fig. 5.
  • steps 151 and 153 are performed before step 103 is performed and step 105 is implemented by a step 155.
  • a step 157 is performed after steps 105 and 107 have been performed and before step 131 is performed, and step 109 is implemented by a step 159.
  • Step 151 comprises receiving a signal indicative of at least one user-specified real-world size of at least one element in an image of a representation of a room.
  • the user may specify the dimensions of the closet 59 of Fig. 4.
  • Step 153 comprises analyzing the image to determine perspective features.
  • Step 103 comprises analyzing the image to determine one or more representation sizes of one or more elements in the image. These one or more elements include the element(s) of which the real-world size was specified by the user.
  • Step 155 comprises determining the one or more real-world sizes of the one or more elements based on at least the real-world size(s) received in step 151.
  • the real-world size(s) of one or more elements of which the real-world size was not specified by the user may be determined automatically based on the kind of element. For example, the edges of the ceiling and the walls may be used as reference and assigned an average height of 2.5 m.
  • the one or more elements may comprise one or more persons, e.g. person 57 of Fig. 4. If a person is detected in the image, the person may be assumed to have an average height. This average height may vary per region and/or per gender, e.g. 1.8 meters for males.
  • the real-world sizes of elements for which the user has not specified a real-world size and of which the real-world size could not be determined automatically based on the kind of element may be determined by using perspective rules in step 155.
  • Step 159 comprises determining the ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, as determined in steps 103 and 105, based on the location on the surface, as determined in step 131, and the perspective features, as determined in step 153. For example, the z coordinate of the one or more elements may be determined based on the perspective features. Step 159 further comprises determining a real-world size of the drawn luminaire based on the representation size of the drawn luminaire, as determined in step 107, and this ratio.
  • if step 159 comprises determining the real-world position of the drawn luminaire, this may be done based on the representation position of the drawn luminaire with respect to the representation of the room, the location on the surface at which the drawn luminaire is connected to the surface, and the perspective features.
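The size-resolution order of step 155 (user-specified size first, then a per-kind default, then perspective rules) can be sketched as follows. The element encoding and the default table are assumptions, with the 2.5 m and 1.8 m values taken from the examples above:

```python
# Illustrative per-kind defaults: the 2.5 m wall/ceiling-edge height and the
# 1.8 m person height follow the examples in the text; the table itself is
# an assumption.
DEFAULT_SIZES_CM = {"wall": 250.0, "person": 180.0}

def element_real_sizes(elements, user_sizes_cm):
    """Step 155 sketch: prefer a user-specified size (step 151 input),
    then a per-kind default; anything left over is reported back for
    perspective-based estimation. `elements` holds (name, kind) pairs."""
    resolved, unresolved = {}, []
    for name, kind in elements:
        if name in user_sizes_cm:
            resolved[name] = user_sizes_cm[name]       # user-specified
        elif kind in DEFAULT_SIZES_CM:
            resolved[name] = DEFAULT_SIZES_CM[kind]    # kind-based default
        else:
            unresolved.append(name)                    # perspective rules
    return resolved, unresolved
```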
  • Fig. 7 shows a fourth embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example.
  • a step 191 comprises receiving a drawing made by a user and an image of the representation of the room.
  • Sub-step 101 of step 191 comprises receiving the drawing.
  • Sub-step 193 of step 191 comprises receiving the image.
  • the drawing and the image may be received in the same way and at the same time or almost the same time.
  • the drawing and the representation of the room may be part of the same image, i.e. combined before being received in step 191.
  • Step 195 comprises displaying the drawing and the image received in step 191.
  • a step 197 comprises determining whether a signal indicative of a user request to resize the image and/or to resize the drawing has been received. If so, step 195 is repeated and the resized image and/or resized drawing is displayed. In an alternative embodiment, it is only possible to resize the image or only possible to resize the drawing. Steps 103 to 115 are performed as described in relation to Fig. 3 as soon as it is determined in step 197 that the user has provided certain user input to search for matching luminaire models, e.g. by pressing a virtual “search” button. If the user has resized the image, step 103 comprises determining one or more representation sizes of the one or more elements in the resized image.
  • step 197 is repeated after step 115.
  • the user is again provided the opportunity to resize the image and/or the drawing. If the user resizes the image and/or the drawing, step 195 is performed after step 197 and as soon as the user provides the certain user input again, steps 103 to 115 are performed again.
  • Fig. 8 shows the image of Fig. 4 being resized. While the resized image 53 is smaller than the original image 51 of Fig. 4, the drawing 61 still has the same size. As a result, larger luminaire models are searched for.
  • Fig. 9 shows the drawing of Fig. 4 being resized. While the resized drawing 63 is smaller than the original drawing 61 of Fig. 4, the image 51 still has the same size. As a result, smaller luminaire models are searched for.
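The effect of resizing shown in Figs. 8 and 9 can be sketched with a single reference element; the function is an illustrative simplification of the ratio computation:

```python
def size_after_resize(drawing_px, element_px, element_cm, image_scale=1.0):
    """Figs. 8 and 9 sketch: shrinking the image (image_scale < 1) shrinks
    the element's pixel size and raises the cm-per-pixel ratio, so an
    unchanged drawing maps to a larger real-world luminaire; shrinking the
    drawing instead yields a smaller one."""
    ratio = element_cm / (element_px * image_scale)    # cm per pixel
    return drawing_px * ratio
```

With a 500-pixel, 250 cm ceiling, an 80-pixel drawing maps to 40 cm; halving the image (scale 0.5) doubles the result to 80 cm, while halving the drawing instead gives 20 cm.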
  • Fig. 10 shows a fifth embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
  • the method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example.
  • Step 101 comprises receiving a drawing made by a user.
  • a step 211 comprises receiving an image captured by a camera in real-time.
  • Step 195 comprises displaying the image received in step 211 to create an augmented reality view.
  • a step 215 comprises checking whether the user has provided certain input to search for matching luminaire models, e.g. by pressing a virtual “search” button. If not, step 211 is repeated, i.e. a new camera image is received (and then displayed in step 195). If so, steps 103 to 115 are performed as described in relation to Fig. 3. Step 211 is repeated after step 115, and the method proceeds as shown in Fig. 10 with a new image captured by the camera.
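The real-time loop of this embodiment can be sketched as follows. The callables standing in for the camera, display, user interface, and matching pipeline are hypothetical, and the loop is bounded only to keep the sketch finite:

```python
def run_ar_loop(capture, show, search_pressed, search, max_frames=100):
    """Fig. 10 sketch: display live camera frames until the user presses
    the virtual search button, then run the matching pipeline on the
    current frame and continue with fresh frames."""
    for _ in range(max_frames):
        frame = capture()        # step 211: receive a new camera image
        show(frame)              # step 195: augmented reality view
        if search_pressed():     # step 215: check for search input
            search(frame)        # steps 103-115: select luminaire models
```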
  • Figs. 3, 5 to 7, and 10 differ from each other in multiple aspects, i.e. multiple steps have been added or replaced. In variations on these embodiments, only a subset of these steps is added or replaced and/or one or more steps is omitted.
  • One or more of the embodiments of Figs. 3, 5 to 7, and 10 may be combined.
  • the embodiment of Fig. 6 may be combined with the embodiment of Fig. 7 or Fig. 10.
  • Fig. 11 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 3, 5 to 7, and 10.
  • the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via a system bus 306.
  • the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.
  • the data processing system may be an Internet/cloud server, for example.
  • the memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310.
  • the local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code.
  • a bulk storage device may be implemented as a hard drive or other persistent data storage device.
  • the processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the quantity of times program code must be retrieved from the bulk storage device 310 during execution.
  • the processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
  • I/O devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system.
  • input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like.
  • output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
  • the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 11 with a dashed line surrounding the input device 312 and the output device 314).
  • a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”.
  • input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
  • a network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks.
  • the network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks.
  • Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
  • the memory elements 304 may store an application 318.
  • the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 300 may further execute an operating system (not shown in Fig. 11) that can facilitate execution of the application 318.
  • the application 318 being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.
  • Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein).
  • the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal.
  • the program(s) can be contained on a variety of transitory computer-readable storage media.
  • Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
  • the computer program may be run on the processor 302 described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

A system is configured to receive a drawing (61) made by a user which represents a luminaire drawn over a representation of a room, analyze an image (51) of the representation to determine one or more representation sizes of one or more elements (55,57,59) in the image, determine one or more real-world sizes of the element(s), analyze the drawing to determine a representation size and a shape of the drawn luminaire, determine a real-world size of the drawn luminaire based on the representation size and a ratio between the representation size(s) and the real-world size(s) of the element(s), compare the shape and the real-world size of the drawn luminaire with shapes and sizes of a plurality of luminaire models, select one or more luminaire models from the luminaire models based on results of the comparison, and output one or more identifiers (67) of the selected luminaire model(s).

Description

Selecting a luminaire model by analyzing a drawing of a luminaire and an image of a room
FIELD OF THE INVENTION
The invention relates to a system for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
The invention further relates to a method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
The invention also relates to a computer program product enabling a computer system to perform such a method.
BACKGROUND OF THE INVENTION
On websites of shops which offer a variety of products, filters are normally used to simplify the selection of the right product, as keyword searches would normally only be useful if a user searches for a specific product with a known identifier. However, even with filters, it might take the user a relatively long time to find the product that meets his requirements. Image analysis may be used to help the user find these products faster.
For example, US 2019/0012716 A1 discloses an information processing device which acquires an image specified by a user, extracts a feature value of the specified image, acquires category information corresponding to an item for sale represented by the specified image, e.g. a piece of clothing, searches for images similar to the specified image based on the extracted feature value, and causes at least one of the found images to be displayed as a search result.
Although it may be possible to use the information processing device disclosed in US 2019/0012716 A1 to search for luminaire models, only luminaire models with the drawn shape would be found. When searching for clothing, a user might be able to additionally use a filter to specify the size of the clothing the user is looking for. For luminaire models, such a filter is typically not available or not adequate.
SUMMARY OF THE INVENTION
It is a first object of the invention to provide a system, which allows a user to search for luminaire models more conveniently with the help of image analysis. It is a second object of the invention to provide a method, which allows a user to search for luminaire models more conveniently with the help of image analysis.
In a first aspect of the invention, a system for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user comprises at least one input interface, at least one output interface, and at least one processor configured to receive, via said at least one input interface, said drawing made by said user, said drawing representing a luminaire drawn over a representation of a room, analyze an image of said representation of said room to determine one or more representation sizes of one or more elements in said image, determine one or more real-world sizes of said one or more elements, and analyze said drawing to determine a representation size of said drawn luminaire and a shape of said drawn luminaire.
The at least one processor is further configured to determine a real-world size of said drawn luminaire based on said representation size of said drawn luminaire and a ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements, compare said shape and said real-world size of said drawn luminaire with shapes and sizes of said plurality of luminaire models, select said one or more luminaire models from said plurality of luminaire models based on results of said comparison, and output, via said at least one output interface, one or more identifiers of said one or more selected luminaire models.
This system uses image analysis to not only determine the shape of the luminaire model the user is looking for but also the size of the luminaire model the user is looking for. Since the luminaire has been drawn over a representation of a room, e.g. the user’s own room, the size of the luminaire model the user is looking for may be determined by also analyzing the image of this representation and combining the results of the image analysis of the drawing and the image analysis of the room representation. This allows the user to search for luminaire models more conveniently.
The identifiers of the one or more selected luminaire models may be displayed to the user. If multiple luminaire models have been selected/found, they may be ordered based on a measure of relevance. Alternatively or additionally, the identifiers of the one or more selected luminaire models may be used to commission a new luminaire, i.e. to add the new luminaire to the user’s lighting system and configure the new luminaire, as soon as the new luminaire has been physically installed if the new luminaire is one of the one or more selected models. This makes the commissioning easier. For example, the user may be able to order the selected luminaire and cause the identifier to be provided to the lighting system such that the lighting system will already create a ‘virtual lamp’ which can be added to rooms, scenes, and then once the luminaire arrives, the lighting system replaces the virtual lamp and the configuration steps do not need to be performed again. If a real-world position of the drawn luminaire is determined, the identifier and the associated real-world position may be transmitted to the lighting system and the lighting system may then associate this position or a spatial area encompassing this position, e.g. a room, with the luminaire. The identifiers may comprise numbers, text and/or images, for example.
The representation sizes of the one or more elements are the sizes of the one or more elements in the image and depend on the size of the image. The representation size of the drawn luminaire is the size of the luminaire in the drawing (which may also be an image) and depends on the size of the drawing. The representation sizes may be expressed in pixels, e.g. 100 pixels long and 50 pixels high. Typically, the user will draw a 2D drawing. In this case, it is not necessary to determine the depth/width of the luminaire. However, 3D drawings might be supported as well. The real-world sizes may be expressed in centimeters or inches, for example.
Said one or more elements may comprise a surface and said at least one processor may be configured to determine a location on said surface at which said drawn luminaire is connected to said surface and determine said ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements based on said location on said surface. Said surface may comprise a wall, a ceiling, or a floor, for example. Said one or more elements may further comprise a person and/or an object, for example.
Elements in the room that are closer to the camera normally appear larger in the representation of the room. It is therefore beneficial to take the z coordinate, i.e. depth, of the real-world position of the drawn luminaire and of the real-world positions of the one or more elements into account when determining the ratio. The location on the surface at which the drawn luminaire is connected to the surface may be used to determine the z coordinate of the real-world position of the drawn luminaire with respect to the room. This z coordinate may also be used for determining the x (horizontal) coordinate and y (vertical) coordinate from the image.
Said at least one processor may be configured to receive a signal indicative of at least one user-specified real-world size of at least one of said one or more elements and determine said one or more real-world sizes of said one or more elements based on said at least one user-specified real-world size of said at least one element. For example, the user may specify that a certain closet has certain dimensions. Alternatively or additionally, the real-world size(s) of at least one of the one or more elements may be determined automatically based on the kind of element. For example, the edges of the ceiling and the walls may be used as reference and assigned an average height of 2.5 m. Also, a person may be detected and assigned an average height of 1.8 m, for example. Dimensions per kind of element may be determined based on geolocation and country standards/averages.
Said at least one processor may be configured to analyze said image to determine perspective features and determine said one or more real-world sizes of said one or more elements based on said one or more representation sizes of said one or more elements and said perspective features. For example, the real-world sizes of elements for which the user has not specified a real-world size and of which the real-world size could not be determined automatically based on the kind of element may be determined by using perspective rules.
Said at least one processor may be configured to analyze said image to determine perspective features and determine said ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements based on said perspective features. For example, the z coordinate (i.e. depth) of the one or more elements may be determined based on the perspective features. A ratio may be determined for each element between the representation size and the real-world size of the element and this ratio may then be adjusted based on the difference between the z coordinate of the real-world position of the drawn luminaire and the z coordinate of the real-world position of the element. An average ratio may then be calculated based on these ratios.
Said at least one processor may be configured to determine a real-world position of said drawn luminaire based on a representation position of said drawn luminaire with respect to said representation of said room. This real-world position may be used for determining the ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, for example. Alternatively or additionally, the type of the luminaire model that the user is looking for may be determined based on this real-world position and the one or more luminaire models may be selected from the plurality of luminaire models based on this type. The type may indicate whether the luminaire model is a ceiling lamp, a floor lamp, or a desk lamp, for example.
Said image may be an image selected or provided by the user and said at least one processor may be configured to display said image. For example, the user may have previously captured an image of their room and stored it on their device or on a server. Said at least one processor may be configured to display images captured by a camera in real time, said captured images comprising said image. This enables the system to support augmented reality. The user may be able to draw on top of an image displayed on a touchscreen display, for example.
Said at least one processor may be configured to receive a signal indicative of a user request to resize said drawing of said luminaire and analyze said resized drawing to determine a new representation size of said drawn luminaire. For example, if the drawing is made smaller, the selected/found luminaire model(s) will be smaller. Alternatively or additionally, said at least one processor may be configured to receive a signal indicative of a user request to resize said image and determine one or more new representation sizes of said one or more elements in said resized image. For example, if the image is made smaller, the selected/found luminaire model(s) will be larger.
In a second aspect of the invention, a method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user comprises receiving said drawing made by said user, said drawing representing a luminaire drawn over a representation of a room, analyzing an image of said representation of said room to determine one or more representation sizes of one or more elements in said image, determining one or more real-world sizes of said one or more elements, and analyzing said drawing to determine a representation size of said drawn luminaire and a shape of said drawn luminaire.
Said method further comprises determining a real-world size of said drawn luminaire based on said representation size of said drawn luminaire and a ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements, comparing said shape and said real-world size of said drawn luminaire with shapes and sizes of said plurality of luminaire models, selecting said one or more luminaire models from said plurality of luminaire models based on results of said comparison, and outputting one or more identifiers of said one or more selected luminaire models. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
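Under simplifying assumptions (a single averaged scale ratio, exact shape matching, and a fixed size tolerance), the steps of the method above can be sketched end to end. All function and field names are illustrative:

```python
def select_luminaire_models(drawn_size_px, drawn_shape, elements, catalogue,
                            size_tolerance_m=0.1):
    """Return identifiers of catalogue models matching the drawing.

    `elements` pairs representation sizes (pixels) with real-world
    sizes (metres); `catalogue` entries carry an id, a shape label,
    and a size in metres. The tolerance is an assumed parameter.
    """
    # Ratio between representation and real-world sizes (pixels per
    # metre), averaged over the detected elements.
    ratios = [e["repr_size_px"] / e["real_size_m"] for e in elements]
    ratio = sum(ratios) / len(ratios)

    # Real-world size of the drawn luminaire from its representation size.
    drawn_size_m = drawn_size_px / ratio

    # Compare shape and size against every model, then select and
    # output the identifiers of the matching models.
    return [m["id"] for m in catalogue
            if m["shape"] == drawn_shape
            and abs(m["size_m"] - drawn_size_m) <= size_tolerance_m]
```

A real implementation would replace the exact shape comparison with a computer-vision shape-similarity measure, as described elsewhere in this disclosure.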
Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer-readable storage medium storing the computer program are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems. A non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user.
The executable operations comprise receiving said drawing made by said user, said drawing representing a luminaire drawn over a representation of a room, obtaining an image of said representation of said room, analyzing said image of said representation of said room to determine one or more representation sizes of one or more elements in said image, determining one or more real-world sizes of said one or more elements, and analyzing said drawing to determine a representation size of said drawn luminaire and a shape of said drawn luminaire. The image may be captured by a camera and/or uploaded by a user.
The executable operations further comprise determining a real-world size of said drawn luminaire based on said representation size of said drawn luminaire and a ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements, comparing said shape and said real-world size of said drawn luminaire with shapes and sizes of said plurality of luminaire models, selecting said one or more luminaire models from said plurality of luminaire models based on results of said comparison, and outputting one or more identifiers of said one or more selected luminaire models. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system." Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:
Fig. 1 is a block diagram of a first embodiment of the system;
Fig. 2 is a block diagram of a second embodiment of the system;
Fig. 3 is a flow diagram of a first embodiment of the method;
Fig. 4 shows an example of a drawing of a luminaire superimposed over an image representing a room;
Fig. 5 is a flow diagram of a second embodiment of the method;
Fig. 6 is a flow diagram of a third embodiment of the method;
Fig. 7 is a flow diagram of a fourth embodiment of the method;
Fig. 8 shows the image of Fig. 4 being resized;
Fig. 9 shows the drawing of Fig. 4 being resized;
Fig. 10 is a flow diagram of a fifth embodiment of the method; and
Fig. 11 is a block diagram of an exemplary data processing system for performing the method of the invention.
Corresponding elements in the drawings are denoted by the same reference numeral.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Fig. 1 shows a first embodiment of the system for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user. In this first embodiment, the system is a mobile device 1. A lighting system comprises a bridge 16 and lighting devices 31-33. The bridge 16 may be a Philips Hue bridge, for example. Lighting devices 31-33 can be controlled via bridge 16, e.g. using Zigbee technology. The bridge 16 is connected to a wireless LAN access point 17, e.g. via Wi-Fi or Ethernet. The wireless LAN access point 17 is connected to the Internet 11. Mobile device 1 is able to control lighting devices 31-33 via the wireless LAN access point 17 and the bridge 16.
The mobile device 1 comprises a receiver 3, a transmitter 4, a processor 5, memory 7, a camera 8, and a touchscreen display 9. The processor 5 is configured to receive, via the touchscreen display 9, a drawing made by the user. The drawing represents a luminaire drawn over a representation of a room. The processor 5 is further configured to analyze an image of the representation of the room to determine one or more representation sizes of one or more elements in the image, determine one or more real-world sizes of the one or more elements, and analyze the drawing to determine a representation size of the drawn luminaire and a shape of the drawn luminaire. In other words, the representation size and the shape are detected by using computer vision. Small elements may be removed from the image using filters. The user may be allowed to upload the image of the representation of the room or this image may be captured with camera 8.
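The filtering of small elements mentioned above might, for instance, apply a minimum-area threshold to the detected elements' bounding boxes. The threshold value and field names below are assumptions:

```python
def filter_small_elements(elements, min_area_px=500):
    """Drop detected elements whose bounding-box area is too small.

    Small detections are likely noise or clutter and would make the
    representation-to-real-world ratio unreliable, so they are removed
    before the sizes are compared. The area threshold is a hypothetical
    tuning parameter.
    """
    return [e for e in elements
            if e["width_px"] * e["height_px"] >= min_area_px]
```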
The processor 5 is further configured to determine a real-world size of the drawn luminaire based on the representation size of the drawn luminaire and a ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, compare the shape and the real-world size of the drawn luminaire with shapes and sizes of the plurality of luminaire models, select the one or more luminaire models from the plurality of luminaire models based on results of the comparison, and output one or more identifiers of the one or more selected luminaire models.
The plurality of luminaire models may be included in a catalogue stored in the memory 7, for example. The plurality of luminaire models may be obtained from an Internet server 13 before they are stored in the memory 7, for example. The luminaire models are characterized for shape and size. The luminaire models and their shape and size may be stored in a database for fast retrieval and comparison. Preferably, a graphical representation is also stored to show the luminaire model to the user, e.g. superimposed over the representation of their room.
In an implementation, the processor 5 is configured to display a graphical input area on the display 9 where the user can upload the image of their room. This image is then displayed in this input area. On top of this image of the room, the user can draw the luminaire they are looking for. The size of the drawn luminaire is determined based on elements in the uploaded image, as described above.
The drawn results are compared with the shape and size of the luminaire models in the catalogue. Computer vision tools are used for the comparison. The identifiers of the one or more selected luminaire models may be displayed to the user via the touchscreen display 9 and/or may be transmitted, via the transmitter 4, to the bridge 16 to commission a new luminaire as soon as the new luminaire has been added, if the new luminaire is one of the one or more selected models. This makes the commissioning easier. Optionally, a determined real-world position of the drawn luminaire or a lighting group determined based on this real-world position is transmitted to the bridge 16 to commission the new luminaire. The match level between the drawn luminaire and the luminaire models in the catalogue may be used to order the results. The selected luminaire model(s) may be shown as an overlay on the uploaded image of the room. In case the user is not yet satisfied, the user may be able to resize their drawing and the new results may then be shown using the same algorithm.
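Ordering the results by match level could, for example, combine a shape penalty with the size difference. The scoring formula below is an illustrative assumption; a production system would use a computer-vision shape-similarity score instead of a binary shape label:

```python
def rank_models(drawn_shape, drawn_size_m, models):
    """Return catalogue models sorted best-match-first.

    A shape mismatch dominates the score; among models with the same
    shape, a smaller size difference means a better match. The weights
    are hypothetical.
    """
    def match_score(model):
        shape_penalty = 0.0 if model["shape"] == drawn_shape else 1.0
        size_penalty = abs(model["size_m"] - drawn_size_m)
        return shape_penalty + size_penalty

    return sorted(models, key=match_score)
```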
In the embodiment of the mobile device 1 shown in Fig. 1, the mobile device 1 comprises one processor 5. In an alternative embodiment, the mobile device 1 comprises multiple processors. The processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from ARM or Qualcomm, or an application-specific processor. The processor 5 of the mobile device 1 may run an Android or iOS operating system, for example. The display 9 may comprise an LCD or OLED display panel, for example. The memory 7 may comprise one or more memory units. The memory 7 may comprise solid state memory, for example.
The receiver 3 and the transmitter 4 may use one or more wireless communication technologies, e.g. Wi-Fi (IEEE 802.11) for communicating with the wireless LAN access point 17, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in Fig. 1, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 3 and the transmitter 4 are combined into a transceiver. The mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector. The invention may be implemented using a computer program running on one or more processors.
In the embodiment of Fig. 1, the lighting devices 31-33 are controlled by the mobile device 1 via the bridge 16. In an alternative embodiment, one or more of the lighting devices 31-33 are controlled by the mobile device 1 without a bridge, e.g. via the Internet server 13 and the wireless LAN access point 17 or directly via Bluetooth. The lighting devices 31-33 may be capable of receiving and transmitting Wi-Fi signals, for example. Commissioning information may be stored on another device than the bridge 16.
Fig. 2 shows a second embodiment of the system for selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user. In this second embodiment, the system is a computer 21. The computer 21 is connected to the Internet 11 and acts as a server.
The computer 21 comprises a receiver 23, a transmitter 24, a processor 25, and storage means 27. The processor 25 is configured to receive, via receiver 23, a drawing made by the user. The drawing represents a luminaire drawn over a representation of a room. The processor 25 is further configured to analyze an image of the representation of the room to determine one or more representation sizes of one or more elements in the image, determine one or more real-world sizes of the one or more elements, and analyze the drawing to determine a representation size of the drawn luminaire and a shape of the drawn luminaire.
The processor 25 is further configured to determine a real-world size of the drawn luminaire based on the representation size of the drawn luminaire and a ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, compare the shape and the real-world size of the drawn luminaire with shapes and sizes of the plurality of luminaire models, select the one or more luminaire models from the plurality of luminaire models based on results of the comparison, and output one or more identifiers of the one or more selected luminaire models.
The drawing and the image may be received from a mobile device 41, for example. The plurality of luminaire models may be stored on storage means 27, for example. The identifiers of the one or more selected luminaire models may be transmitted, via the transmitter 24, to the mobile device 41 to be displayed on a display of the mobile device 41 and/or may be transmitted, via the transmitter 24, to the bridge 16 to commission a new luminaire as soon as the new luminaire has been added, if the new luminaire is one of the one or more selected models.
In the embodiment of the computer 21 shown in Fig. 2, the computer 21 comprises one processor 25. In an alternative embodiment, the computer 21 comprises multiple processors. The processor 25 of the computer 21 may be a general-purpose processor, e.g. from Intel or AMD, or an application-specific processor. The processor 25 of the computer 21 may run a Windows or Unix-based operating system, for example. The storage means 27 may comprise one or more memory units. The storage means 27 may comprise one or more hard disks and/or solid-state memory, for example. The storage means 27 may be used to store an operating system, applications and application data, for example.
The receiver 23 and the transmitter 24 may use one or more wired and/or wireless communication technologies such as Ethernet and/or Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 17, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in Fig. 2, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 23 and the transmitter 24 are combined into a transceiver. The computer 21 may comprise other components typical for a computer such as a power connector. The invention may be implemented using a computer program running on one or more processors.
In the embodiment of Fig. 2, the computer 21 transmits data to the lighting devices 31-33 via the bridge 16. In an alternative embodiment, the computer 21 transmits data to the lighting devices 31-33 without a bridge. Commissioning information may be stored on another device than the bridge 16.
In the embodiments of Figs. 1 and 2, the system comprises a single device. In an alternative embodiment, the system comprises a plurality of devices. For example, the functionality described above may be partly implemented in the mobile device 1 of Fig. 1 and partly in the computer 21 of Fig. 2.
A first embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user is shown in Fig. 3. The method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example.
A step 101 comprises receiving a drawing made by a user. The drawing represents a luminaire drawn over a representation of a room. The drawing may, for example, be drawn by the user over an image displayed on a display, e.g. by using a touch screen. The image may be captured by a camera or selected from a locally or remotely stored collection of images by the user, for example. A step 107 comprises analyzing the drawing received in step 101 to determine a representation size of the drawn luminaire and a shape of the drawn luminaire.
A step 103 comprises analyzing an image of the representation of the room to determine one or more representation sizes of one or more elements in the image. If the user has drawn the luminaire over an image displayed on a display, this image may be analyzed. If the user uses augmented reality glasses, an image captured by a camera corresponding to the user's view while the user was drawing the luminaire in the air may be analyzed. Image analysis techniques to determine objects (elements) in a scene and their respective representation size are known in the field of image analysis and computer vision. The one or more elements may comprise a surface and/or a person and/or an object, for example. A step 105 comprises determining one or more real-world sizes of the one or more elements of which the representation sizes are determined in step 103. Steps 101 and 107 and steps 103 and 105 may be performed (partly) in parallel or in sequence.
A step 109 is performed after steps 105 and 107 have been performed. Step 109 comprises determining a ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, as determined in steps 103 and 105, and determining a real-world size of the drawn luminaire based on the representation size of the drawn luminaire, as determined in step 107, and this ratio. A ratio may be determined for each element between the representation size and the real-world size of the element and an average ratio may be calculated based on these ratios.
A step 111 comprises comparing the shape of the drawn luminaire, as determined in step 107, and the real-world size of the drawn luminaire, as determined in step 109, with shapes and sizes of the plurality of luminaire models. A step 113 comprises selecting the one or more luminaire models from the plurality of luminaire models based on results of the comparison in step 111. A step 115 comprises outputting one or more identifiers of the one or more luminaire models selected in step 113.
Fig. 4 shows an example of a drawing 61 of a luminaire superimposed over an image 51 representing a room. The image 51 includes multiple elements, including a ceiling 55, a person 57, and a closet 59. When the method of Fig. 3 has been performed, a luminaire model with identifier 67 named “LM245” is found. The identifier 67 and a visual representation 69 of the luminaire are shown in the results field shown in Fig. 4. In a variant, the drawing 61 is replaced by the visual representation 69 when the results are shown. In this way, a selected luminaire model may be shown as an overlay on the representation of the room as if a luminaire of this model is already present.
A second embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user is shown in Fig. 5. The embodiment of Fig. 5 is an extension of the first embodiment of Fig. 3. In the embodiment of Fig. 5, a step 131 is performed after steps 105 and 107 have been performed and before step 109 is performed and step 109 of Fig. 3 is implemented by a step 133.
Step 131 comprises determining a location on a surface at which the drawn luminaire is connected to the surface. This surface is one of the one or more elements of which the representation size(s) are determined in step 103 and of which the real-world size(s) have been determined in step 105. The surface may comprise a wall, a ceiling (e.g. ceiling 55 of Fig. 4), or a floor, for example.
Step 133 comprises determining the ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, as determined in steps 103 and 105, based on the location on the surface, as determined in step 131. Thus, the ratio determined in step 133 depends on the z coordinate (i.e. depth) of the real-world position of the drawn luminaire. A ratio may be determined for each element between the representation size and the real-world size of the element and this ratio may then be adjusted based on the difference between the z coordinate of the real-world position of the drawn luminaire and the z coordinate of the real-world position of the element. An average ratio may then be calculated based on these ratios.
Step 133 further comprises determining a real-world size of the drawn luminaire based on the representation size of the drawn luminaire, as determined in step 107, and this ratio. Step 133 optionally comprises determining the real-world position of the drawn luminaire based on the representation position of the drawn luminaire with respect to the representation of the room and the location on the surface at which the drawn luminaire is connected to the surface.
A third embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user is shown in Fig. 6. The embodiment of Fig. 6 is an extension of the second embodiment of Fig. 5. In the embodiment of Fig. 6, steps 151 and 153 are performed before step 103 is performed and step 105 is implemented by a step 155. Furthermore, a step 157 is performed after steps 105 and 107 have been performed and before step 131 is performed, and step 109 is implemented by a step 159.
Step 151 comprises receiving a signal indicative of at least one user-specified real-world size of at least one element in an image of a representation of a room. For example, the user may specify the dimensions of the closet 59 of Fig. 4. Step 153 comprises analyzing the image to determine perspective features. Step 103 comprises analyzing the image to determine one or more representation sizes of one or more elements in the image. These one or more elements include the element(s) of which the real-world size was specified by the user.
Step 155 comprises determining the one or more real-world sizes of the one or more elements based on at least the real-world size(s) received in step 151. In step 155, the real-world size(s) of one or more elements of which the real-world size was not specified by the user may be determined automatically based on the kind of element. For example, the edges of the ceiling and the walls may be used as a reference and assigned an average height of 2.5 m.
The one or more elements may comprise one or more persons, e.g. person 57 of Fig. 4. If a person is detected in the image, the person may be assumed to have an average height. This average height may vary per region and/or per gender, e.g. 1.8 meters for males. The real-world sizes of elements for which the user has not specified a real-world size and of which the real-world size could not be determined automatically based on the kind of element may be determined by using perspective rules in step 155.
Step 159 comprises determining the ratio between the one or more representation sizes and the one or more real-world sizes of the one or more elements, as determined in steps 103 and 105, based on the location on the surface, as determined in step 131, and the perspective features, as determined in step 153. For example, the z coordinate of the one or more elements may be determined based on the perspective features. Step 159 further comprises determining a real-world size of the drawn luminaire based on the representation size of the drawn luminaire, as determined in step 107, and this ratio.
If step 159 comprises determining the real-world position of the drawn luminaire, this may be done based on the representation position of the drawn luminaire with respect to the representation of the room, the location on the surface at which the drawn luminaire is connected to the surface, and the perspective features.
A fourth embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user is shown in Fig. 7. The method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example.
A step 191 comprises receiving a drawing made by a user and an image of the representation of the room. Sub step 101 of step 191 comprises receiving the drawing. Sub step 193 of step 191 comprises receiving the image. The drawing and the image may be received in the same way and at the same time or almost the same time. Optionally, the drawing and the representation of the room may be part of the same image, i.e. combined before being received in step 191. Step 195 comprises displaying the drawing and the image received in step 191.
A step 197 comprises determining whether a signal indicative of a user request to resize the image and/or to resize the drawing has been received. If so, step 195 is repeated and the resized image and/or resized drawing is displayed. In an alternative embodiment, it is only possible to resize the image or only possible to resize the drawing. Steps 103 to 115 are performed as described in relation to Fig. 3 as soon as it is determined that the user has provided certain user input to search for matching luminaire models, e.g. by pressing a virtual “search” button. If the user has resized the image, step 105 comprises determining one or more representation sizes of the one or more elements in the resized image.
In the embodiment of Fig. 7, step 197 is repeated after step 115. Thus, the user is again provided the opportunity to resize the image and/or the drawing. If the user resizes the image and/or the drawing, step 195 is performed after step 197 and as soon as the user provides the certain user input again, steps 103 to 115 are performed again.
Fig. 8 shows the image of Fig. 4 being resized. While the resized image 53 is smaller than the original image 51 of Fig. 4, the drawing 61 still has the same size. As a result, larger luminaire models are searched for. Fig. 9 shows the drawing of Fig. 4 being resized. While the resized drawing 63 is smaller than the original drawing 61 of Fig. 4, the image 51 still has the same size. As a result, smaller luminaire models are searched for.
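The effect of resizing shown in Figs. 8 and 9 follows directly from the ratio: scaling the image scales the elements' representation sizes, while scaling the drawing scales the luminaire's representation size. A sketch under the same single-reference-element assumption as before:

```python
def luminaire_size_after_resize(element_rep_px, element_real_m, luminaire_rep_px,
                                image_scale=1.0, drawing_scale=1.0):
    """Real-world luminaire size after the user resizes the image and/or drawing."""
    scaled_element_px = element_rep_px * image_scale      # resizing the image
    scaled_luminaire_px = luminaire_rep_px * drawing_scale  # resizing the drawing
    return scaled_luminaire_px * (element_real_m / scaled_element_px)

# Shrinking the image (image_scale < 1) while keeping the drawing unchanged
# yields a larger real-world size, so larger models are searched for;
# shrinking the drawing instead yields a smaller real-world size.
```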
A fifth embodiment of the method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user is shown in Fig. 10. The method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example.
Step 101 comprises receiving a drawing made by a user. Next, step 211 comprises receiving an image captured by a camera in real-time. Step 195 comprises displaying the image received in step 211 to create an augmented reality view. A step 215 comprises checking whether the user has provided certain input to search for matching luminaire models, e.g. by pressing a virtual “search” button. If not, step 211 is repeated, i.e. a new camera image is received (and then displayed in step 195). If so, steps 103 to 115 are performed as described in relation to Fig. 3. Step 211 is repeated after step 115, and the method proceeds as shown in Fig. 10 with a new image captured by the camera.
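The control flow of this fifth embodiment can be sketched as a simple loop; `camera`, `display`, and `search` stand in for steps 211, 195, and 103 to 115 respectively and are placeholder callables, not part of the original disclosure:

```python
def run_search_loop(camera, display, search):
    """Generator form of the Fig. 10 loop: show live frames until the user
    presses "search", then run the matching steps on the current frame."""
    while True:
        frame = camera()              # step 211: receive a camera image
        event = display(frame)        # step 195: display it (AR view)
        if event == "search":         # step 215: user pressed "search"
            yield search(frame)       # steps 103-115: match luminaire models
        if event == "quit":
            return
```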
The embodiments of Figs. 3, 5 to 7, and 10 differ from each other in multiple aspects, i.e. multiple steps have been added or replaced. In variations on these embodiments, only a subset of these steps is added or replaced and/or one or more steps is omitted. One or more of the embodiments of Figs. 3, 5 to 7, and 10 may be combined. For example, the embodiment of Fig. 6 may be combined with the embodiment of Fig. 7 or Fig. 10.
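Common to all embodiments are the comparison and selection steps (111 and 113). A minimal sketch, assuming a categorical shape label and a relative size tolerance; a real system would likely use a learned shape descriptor rather than exact labels:

```python
def select_models(drawn_shape, drawn_size_m, catalog, size_tolerance=0.2):
    """Rank catalog models by size closeness among shape matches.

    catalog: list of dicts with keys 'id', 'shape', 'size_m' (assumed schema).
    Returns model identifiers, best size match first.
    """
    matches = []
    for model in catalog:
        if model["shape"] != drawn_shape:         # step 111: compare shapes
            continue
        relative_error = abs(model["size_m"] - drawn_size_m) / drawn_size_m
        if relative_error <= size_tolerance:      # step 111: compare sizes
            matches.append((relative_error, model["id"]))
    return [model_id for _, model_id in sorted(matches)]  # step 113: select
```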
Fig. 11 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 3, 5 to 7, and 10.
As shown in Fig. 11, the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via the system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification. The data processing system may be an Internet/cloud server, for example.
The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 11 with a dashed line surrounding the input device 312 and the output device 314). An example of such a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”. In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300. As pictured in Fig. 11, the memory elements 304 may store an application 318. In various embodiments, the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 300 may further execute an operating system (not shown in Fig. 11) that can facilitate execution of the application 318. The application 318, being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.
Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the present invention.
The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims

CLAIMS:
1. A system (1,21) for selecting one or more luminaire models (69) from a plurality of luminaire models based on a drawing (61) made by a user, said system (1,21) comprising: at least one input interface (9,23); at least one output interface (4,9,24); and at least one processor (5,25) configured to:
- receive, via said at least one input interface (9,23), said drawing (61) made by said user, said drawing (61) representing a luminaire drawn over a representation of a room,
- analyze an image (51) of said representation of said room to determine one or more representation sizes of one or more elements (55,57,59) in said image (51),
- determine one or more real-world sizes of said one or more elements (55,57,59),
- analyze said drawing (61) to determine a representation size of said drawn luminaire and a shape of said drawn luminaire,
- determine a real-world size of said drawn luminaire based on said representation size of said drawn luminaire and a ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements (55,57,59),
- compare said shape and said real-world size of said drawn luminaire with shapes and sizes of said plurality of luminaire models,
- select said one or more luminaire models (69) from said plurality of luminaire models based on results of said comparison, and
- output, via said at least one output interface (4,9,24), one or more identifiers (67) of said one or more selected luminaire models (69).
2. A system (1,21) as claimed in claim 1, wherein said one or more elements (55,57,59) comprise a surface (55) and said at least one processor (5,25) is configured to:
- determine a location on said surface (55) at which said drawn luminaire is connected to said surface (65), and
- determine said ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements (55,57,59) based on said location on said surface (55).
3. A system (1,21) as claimed in claim 2, wherein said surface (55) comprises a wall, a ceiling, or a floor.
4. A system (1,21) as claimed in claim 2, wherein said one or more elements (55,57,59) further comprise a person (57) and/or an object (59).
5. A system (1,21) as claimed in claim 1 or 2, wherein said at least one processor (5,25) is configured to receive a signal indicative of at least one user-specified real-world size of at least one of said one or more elements (55,57,59) and determine said one or more real-world sizes of said one or more elements (55,57,59) based on said at least one user-specified real-world size of said at least one element.
6. A system (1,21) as claimed in claim 1 or 2, wherein said at least one processor
(5,25) is configured to analyze said image (51) to determine perspective features and determine said one or more real-world sizes of said one or more elements (55,57,59) based on said one or more representation sizes of said one or more elements (55,57,59) and said perspective features.
7. A system (1,21) as claimed in claim 1 or 2, wherein said at least one processor (5,25) is configured to analyze said image (51) to determine perspective features and determine said ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements (55,57,59) based on said perspective features.
8. A system (1,21) as claimed in claim 1 or 2, wherein said at least one processor
(5,25) is configured to determine a real-world position of said drawn luminaire based on a representation position of said drawn luminaire with respect to said representation of said room.
9. A system (1,21) as claimed in claim 1 or 2, wherein said image (51) is an image selected or provided by the user and said at least one processor (5,25) is configured to display said image (51).
10. A system (1,21) as claimed in claim 1 or 2, wherein said at least one processor
(5,25) is configured to display images captured by a camera (8) in real-time, said captured images comprising said image (51).
11. A system (1,21) as claimed in claim 1 or 2, wherein said at least one processor (5,25) is configured to receive a signal indicative of a user request to resize said drawing (61) of said luminaire and analyze said resized drawing (63) to determine a new representation size of said drawn luminaire.
12. A system (1,21) as claimed in claim 1 or 2, wherein said at least one processor (5,25) is configured to receive a signal indicative of a user request to resize said image (51) and determine one or more new representation sizes of said one or more elements (55,57,59) in said resized image (53).
13. A method of selecting one or more luminaire models from a plurality of luminaire models based on a drawing made by a user, said method comprising:
- receiving (101) said drawing made by said user, said drawing representing a luminaire drawn over a representation of a room;
- analyzing (103) an image of said representation of said room to determine one or more representation sizes of one or more elements in said image;
- determining (105) one or more real-world sizes of said one or more elements;
- analyzing (107) said drawing to determine a representation size of said drawn luminaire and a shape of said drawn luminaire;
- determining (109) a real-world size of said drawn luminaire based on said representation size of said drawn luminaire and a ratio between said one or more representation sizes and said one or more real-world sizes of said one or more elements;
- comparing (111) said shape and said real-world size of said drawn luminaire with shapes and sizes of said plurality of luminaire models;
- selecting (113) said one or more luminaire models from said plurality of luminaire models based on results of said comparison; and
- outputting (115) one or more identifiers of said one or more selected luminaire models.
14. A computer program product for a computing device, the computer program product comprising computer program code to perform the method of claim 13 when the computer program product is run on a processing unit of the computing device.
PCT/EP2022/084359 2021-12-13 2022-12-05 Selecting a luminaire model by analyzing a drawing of a luminaire and an image of a room WO2023110483A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP21213959.6 2021-12-13
EP21213959 2021-12-13

Publications (1)

Publication Number Publication Date
WO2023110483A1 true WO2023110483A1 (en) 2023-06-22

Family

ID=79231013

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/084359 WO2023110483A1 (en) 2021-12-13 2022-12-05 Selecting a luminaire model by analyzing a drawing of a luminaire and an image of a room

Country Status (1)

Country Link
WO (1) WO2023110483A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110267578A1 (en) * 2010-04-26 2011-11-03 Wilson Hal E Method and systems for measuring interpupillary distance
WO2012066555A2 (en) * 2010-11-17 2012-05-24 Upcload Gmbh Collecting and using anthropometric measurements
WO2014087274A1 (en) * 2012-10-24 2014-06-12 Koninklijke Philips N.V. Assisting a user in selecting a lighting device design
WO2015103620A1 (en) * 2014-01-06 2015-07-09 Andrea Aliverti Systems and methods to automatically determine garment fit
EP2912924A2 (en) * 2012-10-24 2015-09-02 Koninklijke Philips N.V. Assisting a user in selecting a lighting device design
US20180168261A1 (en) * 2016-08-10 2018-06-21 Michael J. Weiler Methods of generating compression garment measurement information for a patient body part and fitting pre-fabricated compression garments thereto
US20190012716A1 (en) 2015-12-28 2019-01-10 Rakuten, Inc. Information processing device, information processing method, and information processing program
WO2020216826A1 (en) * 2019-04-25 2020-10-29 Signify Holding B.V. Determining an arrangement of light units based on image analysis
WO2021244918A1 (en) * 2020-06-04 2021-12-09 Signify Holding B.V. A method of configuring a plurality of parameters of a lighting device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
EITZ M ET AL: "Sketch-Based Image Retrieval: Benchmark and Bag-of-Features Descriptors", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, IEEE, USA, vol. 17, no. 11, November 2011 (2011-11-01), pages 1624 - 1636, XP011444623, ISSN: 1077-2626, DOI: 10.1109/TVCG.2010.266 *
PACZKOWSKI PATRICK ET AL: "Insitu sketching architectural designs in context", COMPUTERS AND ACCESSIBILITY, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 12 December 2011 (2011-12-12), pages 1 - 10, XP058499560, ISBN: 978-1-4503-0920-2, DOI: 10.1145/2024156.2024216 *
SHIN HYOJONG ET AL: "Magic canvas : interactive design of a 3-D scene prototype from freehand sketches", 2007, 403 King Street West, Suite 205 Toronto, Ont. M5U 1LS Canada, pages 63 - 70, XP055928624, ISSN: 0713-5424, ISBN: 978-1-56881-337-0, Retrieved from the Internet <URL:https://dl.acm.org/doi/pdf/10.1145/1268517.1268530> DOI: 10.1145/1268517.1268530 *
YOUYI ZHENG ET AL: "SmartCanvas: Context-inferred Interpretation of Sketches for Preparatory Design Studies", COMPUTER GRAPHICS FORUM : JOURNAL OF THE EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS, WILEY-BLACKWELL, OXFORD, vol. 35, no. 2, 27 May 2016 (2016-05-27), pages 37 - 48, XP071488400, ISSN: 0167-7055, DOI: 10.1111/CGF.12809 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22823590

Country of ref document: EP

Kind code of ref document: A1