CN112074062A - Scene-based light adjusting method and intelligent lighting device - Google Patents

Scene-based light adjusting method and intelligent lighting device

Info

Publication number
CN112074062A
CN112074062A (application CN201910426290.9A)
Authority
CN
China
Prior art keywords
scene
lighting
user
image
lighting device
Prior art date
Legal status
Pending
Application number
CN201910426290.9A
Other languages
Chinese (zh)
Inventor
秦伟
董勇军
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201910426290.9A
Publication of CN112074062A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

The invention belongs to the technical field of illumination and discloses a scene-based light adjusting method and an intelligent lighting device. The method comprises the following steps: when the intelligent lighting device is started or moved, capturing an image of the user's current usage scene; identifying the lighting scene required by the user from the usage scene image; and adjusting the light parameters of the intelligent lighting device according to the identified lighting scene. By capturing an image of the user's usage scene, identifying the required lighting scene from that image, and automatically adjusting the light parameters accordingly, the intelligent lighting device adapts its light to the user's actual needs without any manual operation, improving the user experience.

Description

Scene-based light adjusting method and intelligent lighting device
Technical Field
The invention belongs to the technical field of intelligent lighting, and particularly relates to a scene-based light adjusting method and an intelligent lighting device.
Background
Lighting devices are a necessity of everyday life: people rely on them at home, at the office, and even outdoors, and different settings place different demands on light brightness, color, and so on. For example, bedroom lighting is usually warm and soft, whereas a study requires brighter light.
In the prior art, there are two main ways of adjusting a lighting device's brightness and color temperature:
First, a photosensitive sensor detects the surrounding ambient brightness, and the lamp's own brightness is then adjusted accordingly. This approach cannot adjust the light parameters according to the user's actual needs.
Second, adjustment schemes based on the user's active behavior: the user adjusts the lamp's brightness and color temperature through a physical key or a terminal application (APP). This approach is entirely manual, cannot adjust automatically, and degrades the user experience.
Disclosure of Invention
The invention aims to provide a scene-based light adjusting method and an intelligent lighting device, which can automatically adjust the light parameters of the lighting device according to the requirements of users.
The technical scheme provided by the invention is as follows:
in one aspect, a scene-based light adjusting method is provided, including:
when the intelligent lighting device is started or moved, acquiring a current use scene image of a user;
identifying a lighting scene required by the user according to the using scene image;
and adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
Further preferably, the identifying, according to the usage scene image, the lighting scene required by the user specifically includes:
extracting scene features from the use scene image;
matching the scene characteristics with scene characteristics in a preset scene characteristic library;
and identifying the lighting scene required by the user according to the matching result.
Further preferably, the identifying, according to the matching result, the lighting scene required by the user specifically includes:
calculating the probability of the lighting scene corresponding to the using scene image according to the matching result;
and identifying the lighting scene required by the user according to the probability.
Further preferably, the method further comprises the following steps:
collecting a plurality of use scene images;
extracting scene features in each use scene image;
establishing a mapping relation between scene features in each use scene image and the lighting scene;
and creating the scene feature library according to the lighting scene and the corresponding scene features.
Further preferably, the method further comprises the following steps:
when the lighting scene required by the user cannot be identified according to the using scene image, sending voice prompt information and collecting voice information input by the user;
and identifying the lighting scene required by the user according to the voice information.
In another aspect, an intelligent lighting device is also provided, including:
the image acquisition module is used for acquiring a current use scene image of a user when the intelligent lighting device is started or moved;
the scene recognition module is used for recognizing the lighting scene required by the user according to the using scene image;
and the light adjusting module is used for adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
Further preferably, the scene recognition module includes:
a feature extraction unit configured to extract scene features from the usage scene image;
the characteristic matching unit is used for matching the scene characteristics with the scene characteristics in a preset scene characteristic library;
and the scene identification unit is used for identifying the lighting scene required by the user according to the matching result.
Further preferably, the scene recognition unit includes:
the probability calculating subunit is used for calculating the probability of the lighting scene corresponding to the using scene image according to the matching result;
and the scene identification subunit is used for identifying the lighting scene required by the user according to the probability.
Further preferably, the method further comprises the following steps:
the image collection module is used for collecting various using scene images;
the characteristic extraction module is also used for extracting scene characteristics in each use scene image;
the mapping relation establishing module is used for establishing a mapping relation between scene characteristics in each use scene image and the lighting scene;
and the feature library creating module is used for creating the scene feature library according to the lighting scene and the corresponding scene features.
Further preferably, the system also comprises a voice acquisition module;
the voice acquisition module is used for sending voice prompt information and acquiring voice information input by a user when the lighting scene required by the user cannot be identified according to the using scene image;
and the scene recognition module is also used for recognizing the lighting scene required by the user according to the voice information.
Compared with the prior art, the scene-based light adjusting method and intelligent lighting device have the following beneficial effects: by capturing an image of the user's usage scene, identifying the required lighting scene from that image, and automatically adjusting the light parameters accordingly, the intelligent lighting device adapts its light to the user's actual needs without any manual operation, improving the user experience.
Drawings
The above features, technical features, advantages and implementations of a scene-based light adjustment method and an intelligent lighting device will be further described in the following preferred embodiments in a clearly understandable manner with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a first embodiment of a scene-based light adjusting method according to the present invention;
FIG. 2 is a schematic flow chart of a second embodiment of a scene-based light adjustment method of the present invention;
FIG. 3 is a schematic flow chart of a third embodiment of a scene-based light adjustment method according to the present invention;
FIG. 4 is a schematic flow chart of a fourth embodiment of a scene-based light adjustment method according to the present invention;
FIG. 5 is a schematic flow chart of a fifth embodiment of a scene-based light adjustment method of the present invention;
FIG. 6 is a block diagram illustrating the structure of one embodiment of an intelligent lighting device of the present invention;
fig. 7 is a block diagram schematically illustrating the structure of another embodiment of the intelligent lighting device according to the present invention.
Description of the reference numerals
100. An image acquisition module; 200. A scene recognition module;
210. a feature extraction unit; 220. A feature matching unit;
230. a scene recognition unit; 231. A probability calculation subunit;
232. a scene identification subunit; 300. A light adjusting module;
400. an image collection module; 500. A feature extraction module;
600. a mapping relation establishing module; 700. A feature library creation module;
800. and a voice acquisition module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
According to a first embodiment provided by the present invention, as shown in fig. 1, a scene-based light adjusting method includes:
s100, when the intelligent lighting device is started or moved, acquiring a current use scene image of a user;
specifically, when the intelligent lighting device is started, a camera on the intelligent lighting device is automatically started to collect the current use scene image of the user; or when the intelligent lighting device is in a working state and is moved, the camera on the intelligent lighting device is automatically started to collect the current use scene image of the user. The intelligent lighting device can be an intelligent desk lamp, an intelligent bedside lamp and the like.
The captured usage scene image is an image that captures what the user is doing, for example, an image captured while the user is learning, an image captured while the user is watching a video, or an image captured while the user is using a computer. The usage scene image includes images of the user and the environment.
According to the scheme, the camera is not started in real time, but is started when specific conditions (such as being started or being moved) are met, and the energy consumption of the camera is reduced.
S200, identifying an illumination scene required by the user according to the use scene image;
specifically, after a usage scene image of a user is acquired, a lighting scene required by the user is identified according to scene feature information contained in the usage scene image.
For example, when the user is studying, the captured usage scene image should include scene features such as the user holding a pen, the pen pointing at a book, and the user's gaze directed at the book; or the user's finger turning a page of the book with the user's gaze directed at the book.
For another example, when the user uses a computer, the captured image of the usage scene should include scene features such as a computer display screen and the view of the user looking at the computer display screen.
For another example, when the user watches a video using a tablet computer, the captured usage scene image should include scene features such as the tablet computer and the user's sight line looking at the tablet computer.
The intelligent lighting device can be integrated with an image processing chip, scene features in the used scene image are extracted through the image processing chip, and then the lighting scene required by the user is identified according to the extracted scene features.
Alternatively, the intelligent lighting device sends the captured usage scene image to a server, which extracts the scene features and then identifies the lighting scene required by the user. Having the server extract the scene features means no image processing chip needs to be integrated into the intelligent lighting device, which lowers the device's performance requirements and, in turn, its manufacturing cost.
S300, adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
Specifically, after the lighting scene required by the user is identified according to the collected using scene image, the lighting parameters of the intelligent lighting device can be adjusted according to the lighting scene. The light parameters include, but are not limited to, the illumination angle of the light, the color of the light, the brightness of the light, etc.
For example, if the captured usage scene image shows that the user is writing homework, the illumination angle of the intelligent lighting device can be adjusted so as to minimize the shadows cast under the light, thereby protecting the user's eyesight.
For another example, if the user is identified to be watching a video according to the collected usage scene image, the lighting intensity of the light of the intelligent lighting device may be adjusted.
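The parameter adjustment described above can be sketched as a simple lookup from identified scene to light parameters. The scene names and parameter values below are illustrative assumptions, not values specified by the patent.

```python
# Hypothetical sketch of step S300: mapping an identified lighting scene to
# light parameters (illumination angle, color temperature, brightness).
SCENE_PRESETS = {
    "writing":        {"angle_deg": 35, "color_temp_k": 4000, "brightness": 0.9},
    "watching_video": {"angle_deg": 60, "color_temp_k": 2700, "brightness": 0.3},
    "using_computer": {"angle_deg": 50, "color_temp_k": 3500, "brightness": 0.5},
}

# Neutral fallback used when the scene is unknown.
DEFAULT_PRESET = {"angle_deg": 45, "color_temp_k": 3000, "brightness": 0.6}

def adjust_light(scene):
    """Return the light parameters for an identified scene (or the default)."""
    return SCENE_PRESETS.get(scene, DEFAULT_PRESET)

print(adjust_light("writing")["brightness"])  # 0.9
```

In practice the returned values would drive the lamp's actuators; here they are plain numbers for illustration.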
According to the embodiment, the using scene images of the user are collected, the lighting scene required by the user is identified according to the using scene images, and the light parameters of the intelligent lighting device are automatically adjusted according to the lighting scene required by the user, so that the light parameters of the intelligent lighting device are automatically adjusted according to the actual using requirements of the user, the user does not need to manually operate, and the using experience of the user is improved.
According to a second embodiment provided by the present invention, as shown in fig. 2, a scene-based light adjusting method includes:
s100, when the intelligent lighting device is started or moved, acquiring a current use scene image of a user;
s210, extracting scene features from the use scene image;
specifically, the intelligent lighting device may be integrated with an image processing chip, and the image processing chip extracts scene features in the usage scene image, and then identifies the lighting scene required by the user according to the extracted scene features.
Alternatively, the intelligent lighting device sends the captured usage scene image to a server, which extracts the scene features and then identifies the lighting scene required by the user. Having the server extract the scene features means no image processing chip needs to be integrated into the intelligent lighting device, which lowers the device's performance requirements and, in turn, its manufacturing cost.
S220, matching the scene characteristics with scene characteristics in a preset scene characteristic library;
specifically, the pre-generated scene feature library includes scene features corresponding to various lighting scenes, and one lighting scene may correspond to one or more scene features.
For example, the scene features corresponding to the lighting scene of the writing job include that the user holds a pen in the hand, the pen points at a book, the user looks at the book, and the like, or that the user's finger performs a page turning action on the book, and the user looks at the book, and the like.
For another example, the scene characteristics corresponding to the lighting scene using the computer include a computer display screen, a user's sight line looking at the computer display screen, and the like.
For another example, the scene characteristics corresponding to the lighting scene for watching the video by using the tablet computer include the tablet computer and the view of the user looking at the tablet computer.
And after the scene features are extracted, matching the scene features with the scene features corresponding to each lighting scene in the scene feature library.
S230, identifying the lighting scene required by the user according to the matching result;
Specifically, suppose the scene features extracted from the usage scene image are: the user holds a pen, the pen points at a book, and the user's gaze is directed at the book.
Matching the extracted scene features against the features of the computer-use lighting scene in the scene feature library yields no matches, so the lighting scene corresponding to the usage scene image is not the computer-use scene.
Matching the extracted scene features against the features of the homework-writing lighting scene yields several matches, so the lighting scene corresponding to the usage scene image is the homework-writing lighting scene.
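The matching of steps S220 and S230 can be sketched as picking the library scene that shares the most features with the image. The feature library layout (scene name mapped to a set of feature labels) and the labels themselves are assumptions for illustration.

```python
# Minimal sketch of S220-S230: match extracted features against a
# preset scene feature library and pick the best-overlapping scene.
def identify_scene(extracted_features, feature_library):
    """Return the scene sharing the most features with the image, or None."""
    extracted = set(extracted_features)
    best_scene, best_overlap = None, 0
    for scene, features in feature_library.items():
        overlap = len(extracted & set(features))
        if overlap > best_overlap:
            best_scene, best_overlap = scene, overlap
    return best_scene

library = {
    "writing":        {"pen_in_hand", "pen_on_book", "gaze_on_book"},
    "using_computer": {"monitor", "gaze_on_monitor"},
}
print(identify_scene({"pen_in_hand", "gaze_on_book"}, library))  # writing
```

Returning None when nothing matches corresponds to the unidentified case handled by the voice fallback of the fifth embodiment.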
S300, adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
In this embodiment, identifying the lighting scene corresponding to the scene image by scene feature matching reduces the difficulty of identification and improves its speed and accuracy.
According to a third embodiment provided by the present invention, as shown in fig. 3, a scene-based light adjusting method includes:
s100, when the intelligent lighting device is started or moved, acquiring a current use scene image of a user;
s210, extracting scene features from the use scene image;
s220, matching the scene characteristics with scene characteristics in a preset scene characteristic library;
s231, calculating the probability of the lighting scene corresponding to the using scene image according to the matching result;
specifically, when matching the scene features extracted from the usage scene image with the scene features in the scene feature library, the matching probability of the usage scene image and each lighting scene is calculated according to the matching result. When calculating the matching probability, the matching probability may be calculated according to the number of matches between the extracted scene features and the scene features corresponding to the respective lighting scenes.
For example, five scene features, a first scene feature, a second scene feature, a third scene feature, a fourth scene feature, and a fifth scene feature, are extracted from the usage scene image.
A first lighting scene in the scene feature library corresponds to the first scene feature, the second scene feature, the sixth scene feature, the seventh scene feature, and the eighth scene feature.
The second lighting scene corresponds to the first scene feature, the second scene feature, the third scene feature, the sixth scene feature, and the eighth scene feature.
The third lighting scene corresponds to the first scene feature, the second scene feature, the third scene feature, the fourth scene feature, the fifth scene feature, and the sixth scene feature.
The five scene features extracted from the usage scene image match two features of the first lighting scene, three of the second, and five of the third. Dividing the number of matched features by the total number of features extracted from the usage scene image gives matching probabilities of 40% for the first lighting scene, 60% for the second, and 100% for the third. The matching probability between the extracted scene features and each lighting scene is calculated in the same way.
S232, identifying the lighting scene required by the user according to the probability;
Specifically, after the matching probability between the usage scene image and each lighting scene is calculated, the lighting scene with the highest probability may be selected as the scene corresponding to the image. To improve matching accuracy, a minimum probability threshold is set: lighting scenes whose probability exceeds the threshold are shortlisted first, and the highest-probability scene among them is then selected as the lighting scene corresponding to the usage scene image.
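Steps S231 and S232 can be sketched directly from the worked example: the probability is the matched-feature count divided by the total number of extracted features, scenes below a minimum threshold are filtered out, and the highest-probability remaining scene is selected. The labels f1..f8 and scene1..scene3 follow the example above; the threshold value is an illustrative assumption.

```python
# Sketch of S231: matching probability per lighting scene.
def scene_probabilities(extracted, library):
    extracted = set(extracted)
    return {scene: len(extracted & set(feats)) / len(extracted)
            for scene, feats in library.items()}

# Sketch of S232: threshold filter, then highest probability wins.
def identify_scene(extracted, library, threshold=0.5):
    probs = scene_probabilities(extracted, library)
    candidates = {s: p for s, p in probs.items() if p > threshold}
    return max(candidates, key=candidates.get) if candidates else None

library = {
    "scene1": {"f1", "f2", "f6", "f7", "f8"},
    "scene2": {"f1", "f2", "f3", "f6", "f8"},
    "scene3": {"f1", "f2", "f3", "f4", "f5", "f6"},
}
extracted = {"f1", "f2", "f3", "f4", "f5"}
print(scene_probabilities(extracted, library))  # scene1: 0.4, scene2: 0.6, scene3: 1.0
print(identify_scene(extracted, library))       # scene3
```

Returning None when every probability falls at or below the threshold corresponds to the unidentified case that triggers the voice fallback of the fifth embodiment.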
S300, adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
In this embodiment, the most likely lighting scene is identified by calculating the probability of matching the lighting scene, and the accuracy of identification can be improved.
According to a fourth embodiment provided by the present invention, as shown in fig. 4, a scene-based light adjusting method includes:
s010 collects a plurality of using scene images;
Specifically, the collected usage scene images need to cover a variety of usage scenarios, such as images of writing homework, of using a computer, and of watching a video on a tablet computer; the richer the collection, the better.
Usage scene images can be collected by photographing users in various scenarios with a camera, or by crawling large numbers of such images from the web and then classifying and sorting them. Other collection methods may of course also be used to enrich the image set.
S020 extracting scene features in each use scene image;
specifically, the scene features in each of the usage scene images may be extracted by an image processing technique, or the scene features in each of the usage scene images may be extracted by a trained scene feature extraction model.
The scene feature extraction model is based on open-source model algorithms and is obtained by training on a large number of usage scene images together with their corresponding scene features. The training procedure depends on the chosen open-source algorithm; since the training method is prior art, it is not repeated here.
S030 establishes a mapping relation between scene features in each use scene image and the lighting scene;
specifically, after the scene features are extracted from each usage scene image, a mapping relationship between the scene features and the lighting scenes corresponding to the usage scene images is established, so that the lighting scenes corresponding to the usage scene features can be identified according to the scene features in the usage scene images.
S040 creates the scene feature library according to the lighting scene and the corresponding scene features;
specifically, after the mapping relationship between the lighting scene and the scene features is established, a scene feature library is established, the lighting scene and the corresponding scene features are placed in the scene feature library, and the scene feature library for identifying the lighting scene corresponding to the used scene image is established.
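Steps S010 through S040 can be sketched as aggregating the features extracted from labeled usage scene images into a per-scene feature library. The `extract_features` callable stands in for the image-processing or model-based extraction of S020; here a dictionary lookup plays that role, which is an assumption for illustration.

```python
# Sketch of S010-S040: build the scene feature library from labeled images.
def build_feature_library(labeled_images, extract_features):
    """Aggregate features per lighting scene (scene -> set of features)."""
    library = {}
    for image, scene in labeled_images:
        library.setdefault(scene, set()).update(extract_features(image))
    return library

# Stub extractor: pretend these features were extracted from each image.
stub_features = {
    "img1": {"pen_in_hand", "gaze_on_book"},
    "img2": {"pen_on_book", "gaze_on_book"},
    "img3": {"monitor"},
}
labeled = [("img1", "writing"), ("img2", "writing"), ("img3", "using_computer")]
library = build_feature_library(labeled, stub_features.get)
print(sorted(library["writing"]))  # ['gaze_on_book', 'pen_in_hand', 'pen_on_book']
```

The resulting mapping is exactly the structure the matching step queries: each lighting scene associated with one or more scene features.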
S100, when the intelligent lighting device is started or moved, acquiring a current use scene image of a user;
s210, extracting scene features from the use scene image;
s220, matching the scene characteristics with scene characteristics in a preset scene characteristic library;
s230, identifying the lighting scene required by the user according to the matching result;
s300, adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
In this embodiment, establishing the scene feature library from a large number of collected usage scene images improves the recognition rate of the lighting scene, so that the illumination angle, color, and brightness of the light can be adjusted according to the user's actual needs without manual adjustment, improving the user experience.
According to a fifth embodiment provided by the present invention, as shown in fig. 5, a scene-based light adjusting method includes:
s100, when the intelligent lighting device is started or moved, acquiring a current use scene image of a user;
s200, identifying an illumination scene required by the user according to the use scene image;
s250, when the lighting scene required by the user cannot be identified according to the using scene image, sending voice prompt information and collecting voice information input by the user;
s260, identifying the lighting scene required by the user according to the voice information;
s300, adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
Specifically, when the lighting scene required by the user cannot be identified according to the usage scene image, for example, the scene features extracted from the usage scene image cannot be matched with the scene features in the scene feature library, or the matching probability with the matched lighting scene is all lower than a preset threshold, the intelligent lighting device sends out voice prompt information to prompt the user to input the current lighting scene in a voice mode, and then the voice information (the current lighting scene) input by the user is collected through a microphone installed on the intelligent lighting device.
After voice information input by a user is collected, the voice information is converted into text information, then the text information is subjected to semantic analysis, the semantics corresponding to the voice information are analyzed, and then the lighting scene required by the user is identified according to the analyzed semantics.
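The fallback of steps S250 and S260 can be sketched as follows, assuming the spoken reply has already been converted to text by a speech-to-text engine (not shown). The keyword lists are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of S260: map a transcribed utterance to a lighting
# scene with a simple keyword lookup (a stand-in for semantic analysis).
SCENE_KEYWORDS = {
    "writing":        ("homework", "writing", "studying"),
    "watching_video": ("video", "movie"),
    "using_computer": ("computer", "typing"),
}

def scene_from_utterance(text):
    """Return the first scene whose keyword appears in the utterance, else None."""
    text = text.lower()
    for scene, keywords in SCENE_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return scene
    return None  # still unidentified: the device can prompt the user again

print(scene_from_utterance("I'm doing my homework"))  # writing
```

A production device would use real semantic parsing rather than keyword matching, but the control flow (prompt, transcribe, map to scene, adjust light) is the same.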
In this embodiment, when the lighting scene required by the user cannot be identified from the usage scene image, it is identified instead from the user's voice input. This serves the user better, spares the user from manually adjusting the illumination angle, light brightness, and light color of the intelligent lighting device, and improves the user experience.
According to a sixth embodiment provided by the present invention, as shown in fig. 6, an intelligent lighting device includes:
the image acquisition module 100 is used for acquiring a current use scene image of a user when the intelligent lighting device is started or moved;
specifically, when the intelligent lighting device is started, a camera on the intelligent lighting device is automatically started to collect the current use scene image of the user; or when the intelligent lighting device is in a working state and is moved, the camera on the intelligent lighting device is automatically started to collect the current use scene image of the user. The intelligent lighting device can be an intelligent desk lamp, an intelligent bedside lamp and the like.
The captured usage scene image is an image that captures what the user is doing, for example, an image captured while the user is learning, an image captured while the user is watching a video, or an image captured while the user is using a computer. The usage scene image includes images of the user and the environment.
According to the scheme, the camera is not started in real time, but is started when specific conditions (such as being started or being moved) are met, and the energy consumption of the camera is reduced.
A scene recognition module 200, configured to recognize a lighting scene required by the user according to the usage scene image;
specifically, after a usage scene image of a user is acquired, a lighting scene required by the user is identified according to scene feature information contained in the usage scene image.
For example, when the user is studying, the captured usage scene image should contain scene features such as the user holding a pen, the pen pointing at a book, and the user's gaze directed at the book; or features such as the user's finger turning a page of the book while the user's gaze is directed at the book.
For another example, when the user is using a computer, the captured usage scene image should contain scene features such as a computer display screen and the user's gaze directed at that screen.
For another example, when the user is watching a video on a tablet computer, the captured usage scene image should contain scene features such as the tablet computer and the user's gaze directed at the tablet.
An image processing chip can be integrated in the intelligent lighting device; the chip extracts the scene features from the usage scene image, and the lighting scene required by the user is then identified from the extracted features.
Alternatively, the intelligent lighting device sends the captured usage scene image to a server, which extracts the scene features and identifies the lighting scene required by the user. Having the server extract the scene features means no image processing chip needs to be integrated in the intelligent lighting device, which lowers the performance requirements on the device and therefore its manufacturing cost.
And the light adjusting module 300 is configured to adjust light parameters of the intelligent lighting device according to the identified lighting scene.
Specifically, after the lighting scene required by the user has been identified from the captured usage scene image, the light parameters of the intelligent lighting device can be adjusted accordingly. The light parameters include, but are not limited to, the irradiation angle, color, and brightness of the light.
For example, if the captured usage scene image shows that the user is doing homework, the irradiation angle of the intelligent lighting device may be adjusted to minimize shadows and protect the user's eyesight.
For another example, if the captured usage scene image shows that the user is watching a video, the brightness of the intelligent lighting device's light may be adjusted.
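One minimal way to realize this adjustment step is a lookup table from the recognized lighting scene to a set of light parameters. The scene names and parameter values below are invented for illustration only; they are not specified by the patent.

```python
# Hypothetical scene -> light-parameter presets (all values illustrative).
SCENE_PRESETS = {
    "homework":       {"angle_deg": 30, "brightness": 0.9, "color_temp_k": 4000},
    "watching_video": {"angle_deg": 60, "brightness": 0.3, "color_temp_k": 2700},
    "using_computer": {"angle_deg": 45, "brightness": 0.5, "color_temp_k": 3500},
}

DEFAULT_PRESET = {"angle_deg": 45, "brightness": 0.7, "color_temp_k": 4000}

def adjust_light(scene: str) -> dict:
    """Return the light parameters for a recognized lighting scene.

    Unknown scenes fall back to a neutral default preset.
    """
    return SCENE_PRESETS.get(scene, DEFAULT_PRESET)
```

In a real device the returned parameters would then drive the lamp's motor and LED driver; here the table simply makes the scene-to-parameter mapping explicit.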
In this embodiment, a usage scene image of the user is captured, the lighting scene required by the user is identified from it, and the light parameters of the intelligent lighting device are adjusted automatically according to that lighting scene. The light parameters therefore track the user's actual usage needs without any manual operation, improving the user experience.
According to a seventh embodiment provided by the present invention, as shown in fig. 7, an intelligent lighting device includes:
the image acquisition module 100 is used for acquiring a current use scene image of a user when the intelligent lighting device is started or moved;
a scene recognition module 200, configured to recognize a lighting scene required by the user according to the usage scene image;
and the light adjusting module 300 is configured to adjust light parameters of the intelligent lighting device according to the identified lighting scene.
The scene recognition module 200 includes:
a feature extraction unit 210 configured to extract scene features from the usage scene image;
specifically, the intelligent lighting device may be integrated with an image processing chip, and the image processing chip extracts scene features in the usage scene image, and then identifies the lighting scene required by the user according to the extracted scene features.
Or the intelligent lighting device sends the collected using scene images to a server, the server extracts scene features from the using scene images, and then the lighting scene required by the user is identified. The server extracts the scene features in the used scene image, so that an image processing chip is not required to be integrated on the intelligent lighting device, the performance requirement on the intelligent lighting device is reduced, and the manufacturing cost of the intelligent lighting device is further reduced.
A feature matching unit 220, configured to match the scene features with scene features in a preset scene feature library;
specifically, the pre-generated scene feature library includes scene features corresponding to various lighting scenes, and one lighting scene may correspond to one or more scene features.
For example, the scene features corresponding to the homework-writing lighting scene include the user holding a pen, the pen pointing at a book, and the user's gaze directed at the book; or the user's finger turning a page of the book while the user's gaze is directed at the book.
For another example, the scene features corresponding to the computer-use lighting scene include a computer display screen and the user's gaze directed at that screen.
For another example, the scene features corresponding to the tablet video-watching lighting scene include the tablet computer and the user's gaze directed at the tablet.
And after the scene features are extracted, matching the scene features with the scene features corresponding to each lighting scene in the scene feature library.
And a scene recognition unit 230, configured to recognize a lighting scene required by the user according to the matching result.
Specifically, suppose the scene features extracted from the usage scene image are: the user holds a pen, the pen points at a book, and the user's gaze is directed at the book.
When these features are matched against the scene features corresponding to the computer-use lighting scene in the scene feature library, none of them match, so the lighting scene corresponding to the usage scene image is not the computer-use lighting scene.
When they are matched against the scene features corresponding to the homework-writing lighting scene, several of them match, so the lighting scene corresponding to the usage scene image is the homework-writing lighting scene.
Identifying the lighting scene corresponding to the usage scene image by matching scene features reduces the difficulty of identification and improves its speed and accuracy.
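The matching step can be sketched as set intersection between the extracted features and each scene's feature set in the library. The feature names and library contents below are illustrative assumptions, not data from the patent.

```python
# Hypothetical scene feature library: scene -> set of characteristic features.
SCENE_FEATURE_LIBRARY = {
    "homework":              {"holding_pen", "pen_on_book", "gaze_on_book"},
    "using_computer":        {"computer_screen", "gaze_on_screen"},
    "watching_video_tablet": {"tablet", "gaze_on_tablet"},
}

def match_scene(extracted: set) -> dict:
    """Count, per lighting scene, how many library features were matched."""
    return {scene: len(extracted & features)
            for scene, features in SCENE_FEATURE_LIBRARY.items()}
```

For the example above, the features extracted while the user writes homework match all three homework features and none of the computer-use features.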
Preferably, the scene recognition unit 230 includes:
a probability calculating subunit 231, configured to calculate, according to the matching result, a probability that the usage scene image corresponds to the lighting scene;
specifically, when matching the scene features extracted from the usage scene image with the scene features in the scene feature library, the matching probability of the usage scene image and each lighting scene is calculated according to the matching result. When calculating the matching probability, the matching probability may be calculated according to the number of matches between the extracted scene features and the scene features corresponding to the respective lighting scenes.
For example, five scene features, a first scene feature, a second scene feature, a third scene feature, a fourth scene feature, and a fifth scene feature, are extracted from the usage scene image.
A first lighting scene in the scene feature library corresponds to the first scene feature, the second scene feature, the sixth scene feature, the seventh scene feature, and the eighth scene feature.
The second lighting scene corresponds to the first scene feature, the second scene feature, the third scene feature, the sixth scene feature, and the eighth scene feature.
The third lighting scene corresponds to the first scene feature, the second scene feature, the third scene feature, the fourth scene feature, the fifth scene feature, and the sixth scene feature.
Of the five scene features extracted from the usage scene image, two match the first lighting scene, three match the second lighting scene, and five match the third lighting scene. Dividing the number of matched scene features by the total number of scene features extracted from the usage scene image gives a matching probability of 40% for the first lighting scene, 60% for the second lighting scene, and 100% for the third lighting scene.
And a scene identification subunit 232, configured to identify the lighting scene required by the user according to the probability.
Specifically, after the matching probability between the usage scene image and each lighting scene has been calculated, the lighting scene with the highest matching probability may be selected as the one corresponding to the usage scene image. To improve accuracy, a minimum threshold on the matching probability should be set: first the lighting scenes whose matching probability exceeds the threshold are screened out, and the one with the highest probability among them is then selected as the lighting scene corresponding to the usage scene image.
Identifying the most likely lighting scene by calculating matching probabilities improves the accuracy of the identification.
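The probability calculation and threshold-plus-maximum selection can be sketched as below; this is a minimal illustration, and the 0.5 threshold and scene labels are assumptions for the example, not values fixed by the patent.

```python
def match_probabilities(extracted_count: int, matched_counts: dict) -> dict:
    """Probability = matched feature count / total extracted feature count."""
    return {scene: n / extracted_count for scene, n in matched_counts.items()}

def pick_scene(probs: dict, threshold: float = 0.5):
    """Keep scenes above the threshold, then take the most probable one."""
    candidates = {s: p for s, p in probs.items() if p > threshold}
    if not candidates:
        return None  # no scene identified; the device can fall back to a voice prompt
    return max(candidates, key=candidates.get)
```

Running this on the worked example (five extracted features; 2, 3, and 5 matches) reproduces the 40%, 60%, and 100% probabilities, and the third lighting scene is selected.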
Preferably, the method further comprises the following steps:
an image collection module 400 for collecting a plurality of usage scene images;
Specifically, the collected usage scene images need to cover a variety of usage scenes, such as images of writing homework, using a computer, and watching a video on a tablet computer; the richer the collection, the better.
Usage scene images can be collected by photographing users in various usage scenes with a camera device, or by crawling a large number of usage scene images from the network using crawler and similar technologies and then classifying and sorting them. Of course, other ways of collecting usage scene images may also be used to enrich the collection.
The feature extraction module 500 is further configured to extract the scene features in each usage scene image;
Specifically, the scene features in each usage scene image may be extracted by image processing techniques, or by a trained scene feature extraction model.
The scene feature extraction model may be based on an open-source model algorithm, obtained by inputting a large number of usage scene images and their corresponding scene features for training. The specific training process depends on the open-source algorithm adopted; since such training methods are prior art, they are not repeated here.
A mapping relationship establishing module 600, configured to establish a mapping relationship between the scene features in each usage scene image and the lighting scene;
specifically, after the scene features are extracted from each usage scene image, a mapping relationship between the scene features and the lighting scenes corresponding to the usage scene images is established, so that the lighting scenes corresponding to the usage scene features can be identified according to the scene features in the usage scene images.
A feature library creation module 700 configured to create the scene feature library according to the lighting scene and the corresponding scene features.
Specifically, after the mapping between lighting scenes and scene features has been established, a scene feature library is created and the lighting scenes together with their corresponding scene features are stored in it; this library is then used to identify the lighting scene corresponding to a usage scene image.
Building the scene feature library from a large number of collected usage scene images improves the recognition rate of the lighting scene, so that the irradiation angle, color, and brightness of the light can be adjusted to the user's actual usage needs without manual adjustment, improving the user experience.
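The mapping and library-creation steps can be sketched together: each training sample contributes its extracted features to its lighting scene's entry. The sample format is an assumption for illustration.

```python
def build_feature_library(samples) -> dict:
    """Build a scene feature library.

    samples: iterable of (lighting_scene, extracted_features) pairs, one per
    collected usage scene image. Features from all images of the same scene
    are merged into that scene's feature set.
    """
    library = {}
    for scene, features in samples:
        library.setdefault(scene, set()).update(features)
    return library
```

The more images are collected per scene, the more complete each scene's feature set becomes, which is the recognition-rate point made above.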
Preferably, a voice acquisition module 800 is further included;
a voice collecting module 800, configured to send voice prompt information and collect voice information input by a user when the lighting scene required by the user cannot be identified according to the usage scene image;
and the scene recognition module 200 is further configured to recognize a lighting scene required by the user according to the voice information.
Specifically, when the lighting scene required by the user cannot be identified from the usage scene image (for example, when none of the extracted scene features match the scene feature library, or when the matching probabilities of all matched lighting scenes fall below the preset threshold), the intelligent lighting device issues a voice prompt asking the user to state the current lighting scene. The voice information input by the user (the current lighting scene) is then collected through a microphone installed on the device.
After the voice information input by the user is collected, it is converted into text, the text is semantically analyzed to determine the meaning of the voice input, and the lighting scene required by the user is then identified from that meaning.
When the lighting scene required by the user cannot be identified from the usage scene image, it is identified from the voice information input by the user instead, so as to serve the user better. This avoids the user having to manually adjust the irradiation angle, light brightness, and light color of the intelligent lighting device, and improves the user experience.
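The voice fallback's text-understanding step can be sketched with simple keyword matching on the transcribed utterance. This stands in for the semantic analysis the patent describes; a real system would use full speech recognition and natural-language understanding, and the keyword table below is an invented example.

```python
# Hypothetical keyword table standing in for full semantic analysis.
SCENE_KEYWORDS = {
    "homework":       ["homework", "writing", "studying"],
    "watching_video": ["video", "movie", "film"],
    "using_computer": ["computer", "typing"],
}

def recognize_scene_from_text(text: str):
    """Map a transcribed user utterance to a lighting scene, or None."""
    lowered = text.lower()
    for scene, keywords in SCENE_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return scene
    return None  # still unidentified; the device may prompt the user again
```

The returned scene label would then feed the same light-parameter adjustment step used for image-based recognition.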
It should be noted that the above embodiments can be freely combined as needed. The foregoing describes only preferred embodiments of the present invention; those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (10)

1. A scene-based light adjustment method is characterized by comprising the following steps:
when the intelligent lighting device is started or moved, acquiring a current use scene image of a user;
identifying a lighting scene required by the user according to the using scene image;
and adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
2. A scene-based light adjustment method according to claim 1, wherein the identifying of the lighting scene required by the user from the usage scene image specifically comprises:
extracting scene features from the use scene image;
matching the scene characteristics with scene characteristics in a preset scene characteristic library;
and identifying the lighting scene required by the user according to the matching result.
3. A scene-based light adjustment method according to claim 2, wherein the identifying the lighting scene required by the user according to the matching result specifically comprises:
calculating the probability of the lighting scene corresponding to the using scene image according to the matching result;
and identifying the lighting scene required by the user according to the probability.
4. A scene-based light adjustment method according to claim 2, characterized by further comprising:
collecting a plurality of use scene images;
extracting scene features in each use scene image;
establishing a mapping relation between scene features in each use scene image and the lighting scene;
and creating the scene feature library according to the lighting scene and the corresponding scene features.
5. A scene-based light adjustment method according to claim 1, further comprising:
when the lighting scene required by the user cannot be identified according to the using scene image, sending voice prompt information and collecting voice information input by the user;
and identifying the lighting scene required by the user according to the voice information.
6. An intelligent lighting device, comprising:
the image acquisition module is used for acquiring a current use scene image of a user when the intelligent lighting device is started or moved;
the scene recognition module is used for recognizing the lighting scene required by the user according to the using scene image;
and the light adjusting module is used for adjusting the light parameters of the intelligent lighting device according to the identified lighting scene.
7. The intelligent lighting device according to claim 6, wherein the scene recognition module comprises:
a feature extraction unit configured to extract scene features from the usage scene image;
the feature matching unit is used for matching the scene features with the scene features in a preset scene feature library;
and the scene identification unit is used for identifying the lighting scene required by the user according to the matching result.
8. The intelligent lighting device according to claim 7, wherein the scene recognition unit comprises:
the probability calculating subunit is used for calculating the probability of the lighting scene corresponding to the using scene image according to the matching result;
and the scene identification subunit is used for identifying the lighting scene required by the user according to the probability.
9. The intelligent lighting device according to claim 7, further comprising:
the image collection module is used for collecting a plurality of usage scene images;
the feature extraction module is further used for extracting scene features in each usage scene image;
the mapping relation establishing module is used for establishing a mapping relation between scene characteristics in each use scene image and the lighting scene;
and the feature library creating module is used for creating the scene feature library according to the lighting scene and the corresponding scene features.
10. The intelligent lighting device according to claim 6, further comprising a voice acquisition module;
the voice acquisition module is used for sending voice prompt information and acquiring voice information input by a user when the lighting scene required by the user cannot be identified according to the using scene image;
and the scene recognition module is also used for recognizing the lighting scene required by the user according to the voice information.
CN201910426290.9A 2019-05-21 2019-05-21 Scene-based light adjusting method and intelligent lighting device Pending CN112074062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910426290.9A CN112074062A (en) 2019-05-21 2019-05-21 Scene-based light adjusting method and intelligent lighting device

Publications (1)

Publication Number Publication Date
CN112074062A (en) 2020-12-11

Family

ID=73657847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910426290.9A Pending CN112074062A (en) 2019-05-21 2019-05-21 Scene-based light adjusting method and intelligent lighting device

Country Status (1)

Country Link
CN (1) CN112074062A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106304533A (en) * 2015-06-12 2017-01-04 泉州市金太阳照明科技有限公司 A kind of lamplight scene intelligence control system
CN108009588A (en) * 2017-12-01 2018-05-08 深圳市智能现实科技有限公司 Localization method and device, mobile terminal
CN207995446U (en) * 2017-12-28 2018-10-19 松下电气机器(北京)有限公司 Terminal device and lamp control system
CN109587875A (en) * 2018-11-16 2019-04-05 厦门盈趣科技股份有限公司 A kind of intelligent desk lamp and its adjusting method
CN109769333A (en) * 2019-03-29 2019-05-17 山东建筑大学 Event driven home furnishings intelligent means of illumination and system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113282178A (en) * 2021-06-17 2021-08-20 威强科技(北京)有限公司 But lighting device of automatically regulated gesture
CN113313061A (en) * 2021-06-18 2021-08-27 张学勇 Illumination scene recognition method for ear-nose-throat department
CN113483283A (en) * 2021-08-05 2021-10-08 威强科技(北京)有限公司 Lighting device capable of automatically adjusting posture according to use scene
CN113966051A (en) * 2021-12-23 2022-01-21 深圳市奥新科技有限公司 Intelligent control method, device and equipment for desk lamp illumination and storage medium
CN113966051B (en) * 2021-12-23 2022-03-15 深圳市奥新科技有限公司 Intelligent control method, device and equipment for desk lamp illumination and storage medium
CN114189969A (en) * 2021-12-31 2022-03-15 苏州欧普照明有限公司 Lamp control method and device, electronic equipment and computer readable storage medium
CN114189969B (en) * 2021-12-31 2024-03-01 苏州欧普照明有限公司 Lamp control method, device, electronic equipment and computer readable storage medium
WO2023166677A1 (en) * 2022-03-03 2023-09-07 日本電気株式会社 Lighting management device, lighting management system, lighting management method, and recording medium
CN116326976A (en) * 2023-05-11 2023-06-27 合肥坤语智能科技有限公司 Controllable multilayer wisdom (window) curtain of degree of opening and shutting
CN116326976B (en) * 2023-05-11 2023-08-15 合肥坤语智能科技有限公司 Controllable multilayer wisdom (window) curtain of degree of opening and shutting

Similar Documents

Publication Publication Date Title
CN112074062A (en) Scene-based light adjusting method and intelligent lighting device
CN108898579B (en) Image definition recognition method and device and storage medium
US11393205B2 (en) Method of pushing video editing materials and intelligent mobile terminal
CN110163115B (en) Video processing method, device and computer readable storage medium
KR102174595B1 (en) System and method for identifying faces in unconstrained media
CN106250877B (en) Near-infrared face identification method and device
US10438080B2 (en) Handwriting recognition method and apparatus
WO2019120029A1 (en) Intelligent screen brightness adjustment method and apparatus, and storage medium and mobile terminal
CN103714347B (en) Face identification method and face identification device
CN110309709A (en) Face identification method, device and computer readable storage medium
TW201344546A (en) Method for selecting icon from photo folder automatically and automatic selecting system
CN105267013B (en) A kind of head-wearing type intelligent visually impaired accessory system
CN106707512B (en) Low-power consumption intelligent AR system and intelligent AR glasses
CN107911643A (en) Show the method and apparatus of scene special effect in a kind of video communication
CN104217718A (en) Method and system for voice recognition based on environmental parameter and group trend data
CN113596344A (en) Shooting processing method and device, electronic equipment and readable storage medium
KR100847142B1 (en) Preprocessing method for face recognition, face recognition method and apparatus using the same
CN105512119A (en) Image ranking method and terminal
CN108153568B (en) Information processing method and electronic equipment
WO2019170038A1 (en) Target screen determining method and device, and storage medium
CN110491384B (en) Voice data processing method and device
WO2007014121B1 (en) Neural network based rating system
Kannoth et al. Hand Gesture Recognition Using CNN & Publication of World's Largest ASL Database
CN111652131A (en) Face recognition device, light supplementing method thereof and readable storage medium
CN112861571A (en) Household appliance control method, control device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination