CN114529506A - Lamplight monitoring method and system based on machine learning - Google Patents


Publication number
CN114529506A
CN114529506A
Authority
CN
China
Prior art keywords
data
image
many
characteristic value
lamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111679344.6A
Other languages
Chinese (zh)
Inventor
林晓阳
张小云
郭栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Yankon Energetic Lighting Co Ltd
Original Assignee
Xiamen Yankon Energetic Lighting Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Yankon Energetic Lighting Co Ltd filed Critical Xiamen Yankon Energetic Lighting Co Ltd
Priority: CN202111679344.6A
Publication: CN114529506A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a machine learning-based light monitoring method, which comprises the following steps: acquiring the RGB data, IR grayscale data and Depth data of the images in a sample image set, and acquiring the luminous flux and color temperature of the lamp in each corresponding image; constructing a feature sequence from the RGB data, IR grayscale data and Depth data of each image, and taking the luminous flux and color temperature of the lamp in the corresponding image as a first label and a second label; inputting the feature sequences, first labels and second labels into a pre-established many-to-many regression network model for training to obtain a trained many-to-many network model; fusing the model that extracts the RGB data, IR grayscale data and Depth data of an image with the trained many-to-many network model to obtain a light monitoring model; and inputting an image to be monitored into the light monitoring model to obtain the luminous flux and color temperature of the lamp in the image. The method realizes automatic light monitoring of lamps, with a rapid test process, accurate results and low cost.

Description

Lamplight monitoring method and system based on machine learning
Technical Field
The invention relates to the field of lamp testing, and in particular to a machine learning-based light monitoring method and system.
Background
In the research and development testing of lamps, the brightness and color temperature of the lamp need to be identified. Traditional lamp brightness identification generally obtains the brightness through a photosensitive sensor. The color temperature of the lamp is mainly related to the ratio of the currents passing through the lamp, so identifying the color temperature first requires obtaining this current-ratio parameter and then looking up the corresponding color temperature from a preset correspondence between current ratio and color temperature. The test process is therefore cumbersome, and the test cost is high.
Disclosure of Invention
The invention mainly aims to overcome the above defects in the prior art, and provides a machine learning-based light monitoring method that combines image recognition and machine learning to realize automatic light monitoring of lamps in both the research-and-development process and the production process, without manual intervention, with a rapid test process, accurate results and low cost.
The invention adopts the following technical scheme:
a machine learning-based light monitoring method comprises the following steps:
acquiring the RGB data, IR grayscale data and Depth data of the images in a sample image set, and acquiring the luminous flux and color temperature of the lamp in each corresponding image;
constructing a feature sequence from the RGB data, IR grayscale data and Depth data of each image, and taking the luminous flux and color temperature of the lamp in the corresponding image as a first label and a second label;
inputting the feature sequences, first labels and second labels into a pre-established many-to-many regression network model for training to obtain a trained many-to-many network model;
fusing the model that extracts the RGB data, IR grayscale data and Depth data of an image with the trained many-to-many network model to obtain a light monitoring model;
and inputting an image to be monitored into the light monitoring model to obtain the luminous flux and color temperature of the lamp in the image.
Specifically, the feature sequence is constructed from the RGB data, IR grayscale data and Depth data of the image as follows:
taking the average of the R values of the pixels in the image as a first feature value;
taking the average of the G values of the pixels in the image as a second feature value;
taking the average of the B values of the pixels in the image as a third feature value;
taking the average of the IR values of the pixels in the image as a fourth feature value;
taking the average of the Depth values of the pixels in the image as a fifth feature value;
and assembling the first, second, third, fourth and fifth feature values into the feature sequence.
Specifically, the many-to-many network model includes, but is not limited to, the many-to-many recurrent neural network (RNN) model and the LSTM model.
Specifically, the method further comprises the following step:
acquiring the RGB data, IR grayscale data and Depth data of the images in the sample image set, determining the lamp region from the RGB data, IR grayscale data and Depth data of each image, and constructing the feature sequence from the RGB data, IR grayscale data and Depth data within the lamp region.
Another aspect of the embodiments of the present invention provides a machine learning-based light monitoring system, comprising:
the sample data acquisition unit: acquiring the RGB data, IR grayscale data and Depth data of the images in a sample image set, and acquiring the luminous flux and color temperature of the lamp in each corresponding image;
the feature sequence and label acquisition unit: constructing a feature sequence from the RGB data, IR grayscale data and Depth data of each image, and taking the luminous flux and color temperature of the lamp in the corresponding image as a first label and a second label;
the many-to-many network model training unit: inputting the feature sequences, first labels and second labels into a pre-established many-to-many regression network model for training to obtain a trained many-to-many network model;
the light monitoring model acquisition unit: fusing the model that extracts the RGB data, IR grayscale data and Depth data of an image with the trained many-to-many network model to obtain a light monitoring model;
the lamp monitoring unit: inputting an image to be monitored into the light monitoring model to obtain the luminous flux and color temperature of the lamp in the image.
Specifically, the feature sequence is constructed from the RGB data, IR grayscale data and Depth data of the image as follows:
taking the average of the R values of the pixels in the image as a first feature value;
taking the average of the G values of the pixels in the image as a second feature value;
taking the average of the B values of the pixels in the image as a third feature value;
taking the average of the IR values of the pixels in the image as a fourth feature value;
taking the average of the Depth values of the pixels in the image as a fifth feature value;
and assembling the first, second, third, fourth and fifth feature values into the feature sequence.
Specifically, the many-to-many network model includes, but is not limited to, the many-to-many recurrent neural network (RNN) model and the LSTM model.
Specifically, the system further performs the following step:
acquiring the RGB data, IR grayscale data and Depth data of the images in the sample image set, determining the lamp region from the RGB data, IR grayscale data and Depth data of each image, and constructing the feature sequence from the RGB data, IR grayscale data and Depth data within the lamp region.
In another aspect, an electronic device according to an embodiment of the present invention includes: a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above machine learning-based light monitoring method.
In yet another aspect of the embodiments of the present invention, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the above machine learning-based light monitoring method.
As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
(1) The invention provides a machine learning-based light monitoring method, which first acquires the RGB data, IR grayscale data and Depth data of the images in a sample image set, together with the luminous flux and color temperature of the lamp in each corresponding image; constructs a feature sequence from the RGB data, IR grayscale data and Depth data of each image, taking the luminous flux and color temperature of the lamp in the corresponding image as a first label and a second label; inputs the feature sequences, first labels and second labels into a pre-established many-to-many regression network model for training to obtain a trained many-to-many network model; fuses the model that extracts the RGB data, IR grayscale data and Depth data of an image with the trained many-to-many network model to obtain a light monitoring model; and inputs an image to be monitored into the light monitoring model to obtain the luminous flux and color temperature of the lamp in the image. The method combines image recognition and machine learning, realizes automatic light monitoring of lamps in both the research-and-development process and the production process without manual intervention, and offers a rapid test process, accurate results and low cost.
(2) Using the RGB data, IR grayscale data and Depth data of the image as input feature values comprehensively characterizes the image, so the trained model is more precise and the test results are more accurate.
Drawings
Fig. 1 is a flowchart of a machine learning-based lighting monitoring method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an RNN according to an embodiment of the present invention;
fig. 3 is a schematic diagram of lamp identification provided in the embodiment of the present invention;
fig. 4 is a structural diagram of a system for machine learning-based light monitoring according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an embodiment of an electronic device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an embodiment of a computer-readable storage medium according to an embodiment of the present invention.
The invention is described in further detail below with reference to the figures and specific examples.
Detailed Description
The invention provides a machine learning-based light monitoring method that combines image recognition and machine learning to realize automatic light monitoring of lamps in both the research-and-development process and the production process, without manual intervention, with a rapid test process, accurate results and low cost.
Fig. 1 shows a flowchart of the machine learning-based light monitoring method provided in an embodiment of the present invention. Specifically:
a machine learning-based light monitoring method comprises the following steps:
s101: acquiring RGB data, IR gray data and Depth data of a sample image set image, and acquiring luminous flux and color temperature of a lamp in a corresponding image;
firstly, acquiring a sample image set containing lamps, and then acquiring RGB data, IR gray data and Depth data of images in the image set;
it is worth to be noted that, a 3D camera product currently has a binocular structured light (RGB + IR) scheme and a TOF (single IR camera) scheme, and the embodiment of the present invention may adopt a structure form in which the TOF scheme is added to the RGB camera.
Specifically, frame synchronization signals are added to color RGB and infrared IR, RGB data and IR gray data are synchronously acquired, and Depth data is generated by processing the frame data by a 3D Depth generation algorithm. The TOF scheme requires only IR grayscale data to generate Depth data.
S102: constructing a feature sequence from the RGB data, IR grayscale data and Depth data of each image, and taking the luminous flux and color temperature of the lamp in the corresponding image as a first label and a second label;
in fact, before this step the method further includes: acquiring the RGB data, IR grayscale data and Depth data of the images in the sample image set, determining the lamp region from the RGB data, IR grayscale data and Depth data of each image, and constructing the feature sequence from the RGB data, IR grayscale data and Depth data within the lamp region.
The lamp region is determined from the RGB data, IR grayscale data and Depth data of the image as follows: within the lamp region each of these quantities changes slowly, while beyond the lamp region its change rate drops abruptly; the lamp region is therefore delimited by taking the abrupt change in the data as the boundary.
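As an illustrative sketch only (this specific rule is an assumption, not the patent's algorithm), the boundary criterion can be expressed for a single image row: inside the lamp the per-pixel differences stay small, and the first and last large jumps in the change rate mark the region boundary. The function name `lamp_region_1d` and the `drop_ratio` threshold are hypothetical.

```python
import numpy as np

def lamp_region_1d(values, drop_ratio=5.0):
    """Locate the lamp region along one image row (illustrative sketch).

    Inside the lamp the data vary slowly; at the lamp boundary the
    change rate jumps sharply.  We mark a boundary wherever the absolute
    per-pixel difference exceeds drop_ratio times the median difference.
    """
    grad = np.abs(np.diff(values.astype(float)))
    thresh = drop_ratio * (np.median(grad) + 1e-9)
    edges = np.where(grad > thresh)[0]
    if len(edges) < 2:
        return 0, len(values)           # no clear boundary: whole row
    return edges[0] + 1, edges[-1] + 1  # first and last sharp change

# toy row: background (depth ~3) surrounding a lamp (depth ~100)
row = np.array([3, 3, 100, 101, 102, 101, 100, 3, 3])
start, end = lamp_region_1d(row)
```

Applying the same rule along rows and columns of the RGB, IR and Depth channels would yield a rectangular lamp region from which the feature averages are taken.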
taking the average of the R values of the pixels in the lamp region of the image as a first feature value;
taking the average of the G values of the pixels in the lamp region as a second feature value;
taking the average of the B values of the pixels in the lamp region as a third feature value;
taking the average of the IR values of the pixels in the lamp region as a fourth feature value;
taking the average of the Depth values of the pixels in the lamp region as a fifth feature value;
and assembling the first, second, third, fourth and fifth feature values into the feature sequence.
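The five averages translate directly into code. A minimal sketch, assuming NumPy arrays for the cropped lamp region (the function name `feature_sequence` is illustrative, not from the patent):

```python
import numpy as np

def feature_sequence(rgb, ir, depth):
    """Build the 5-value feature sequence for one lamp region.

    rgb: H x W x 3 array; ir and depth: H x W arrays.
    Order follows the text: mean R, mean G, mean B, mean IR, mean Depth.
    """
    return np.array([
        rgb[..., 0].mean(),   # first feature value: average R
        rgb[..., 1].mean(),   # second feature value: average G
        rgb[..., 2].mean(),   # third feature value: average B
        ir.mean(),            # fourth feature value: average IR
        depth.mean(),         # fifth feature value: average Depth
    ])

rgb = np.array([[[10.0, 20.0, 30.0]] * 2] * 2)  # 2x2 region, constant colour
seq = feature_sequence(rgb, np.full((2, 2), 5.0), np.full((2, 2), 7.0))
```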
The brightness of a luminaire is characterized by its luminous flux, measured in lumens (lm). Luminous flux is the amount of light a light source emits per unit time; it is a property of the source, describes the total amount of light emitted, and is analogous to optical power. The greater the luminous flux of the light source, the more light it emits.
Color temperature is an important index of a lamp; it is a measure of the color components contained in light. Theoretically, it refers to the color an ideal black body takes on as it is heated from absolute zero (-273 °C): the black body gradually turns from black to red, then yellow, then white, and finally emits blue light. The spectral composition of the light emitted by a black body heated to a given temperature is referred to as the color temperature at that temperature, measured in kelvin (K).
S103: inputting the feature sequences, first labels and second labels into a pre-established many-to-many regression network model for training to obtain a trained many-to-many network model;
the many-to-many network model includes, but is not limited to, the many-to-many recurrent neural network (RNN) model and the LSTM model.
The embodiment of the invention adopts the many-to-many recurrent neural network model:
A Recurrent Neural Network (RNN) adds lateral connections among the hidden-layer units of an ordinary back-propagation (BP) neural network; through a weight matrix, the value of a neural unit at the previous time step is passed to the current neural unit, giving the network a memory function. This makes RNNs well suited to machine learning problems with contextual or temporal structure, such as natural language processing. In a standard RNN, the input at time t includes, in addition to the input layer Xt, a recurrent edge that provides the hidden state passed on from time t-1.
The applicability of an RNN model varies with the numbers of input and output elements. As shown in fig. 2, RNNs can take many different structures; the five structures are, in turn: one-to-one, one-to-many, many-to-one, asynchronous many-to-many, and synchronous many-to-many. Different structures naturally suit different applications; these five RNN structures correspond respectively to vanilla neural networks, image caption generation, sentiment analysis, machine translation, and context prediction.
The data input of the invention is a feature sequence and the required output is the luminous flux and color temperature of the lamp, which naturally matches the two many-to-many structures of the RNN model: asynchronous and synchronous. The biggest difference between the two is that the asynchronous many-to-many model cannot exploit the associations among the features within the input feature sequence, whereas the synchronous many-to-many model can. Therefore, the embodiment of the invention selects the RNN with the synchronous many-to-many structure, i.e. the synchronous many-to-many recurrent neural network model, as the basic network structure.
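To make the synchronous many-to-many structure concrete, a toy forward pass is sketched below in NumPy: an output is emitted at every time step, so later steps can use the hidden state accumulated from earlier features. The hidden size, weight shapes, and the two-dimensional per-step output (flux, color temperature) are illustrative assumptions; a real implementation would train such a network in a deep-learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(xs, Wxh, Whh, Why, bh, by):
    """Forward pass of a synchronous many-to-many RNN (toy sketch).

    One scalar feature per step; the hidden state h carries context
    across steps, and an output is emitted at every step, so features
    later in the sequence can exploit the earlier ones.
    """
    h = np.zeros(Whh.shape[0])
    ys = []
    for x in xs:
        h = np.tanh(Wxh * x + Whh @ h + bh)  # recurrent update
        ys.append(Why @ h + by)              # per-step output
    return np.array(ys)

hidden = 4
Wxh = rng.normal(size=hidden)
Whh = rng.normal(size=(hidden, hidden)) * 0.1
Why = rng.normal(size=(2, hidden))     # 2 outputs: flux, colour temp
bh, by = np.zeros(hidden), np.zeros(2)

seq = [0.3, 0.5, 0.2, 0.8, 0.4]        # the 5-value feature sequence
outputs = rnn_forward(seq, Wxh, Whh, Why, bh, by)
```

The network produces one (flux, color temperature) pair per step; in training, the first and second labels would supervise these outputs.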
S104: fusing the model that extracts the RGB data, IR grayscale data and Depth data of the image with the trained many-to-many network model to obtain the light monitoring model;
that is, combining a conventional algorithm for extracting the RGB data, IR grayscale data and Depth data of an image with the trained many-to-many network model to obtain the light monitoring model;
in this embodiment, the RGB data and IR grayscale data of the image can be acquired directly by the camera, and the Depth data is generated by processing the RGB data and IR grayscale data with a conventional 3D depth generation algorithm, so the light monitoring model is obtained simply by combining the conventional 3D depth generation algorithm with the trained many-to-many recurrent neural network model.
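The fusion described here is essentially function composition: the conventional depth-generation and feature-extraction stages feed the trained network. A hedged sketch in which every component name (`monitor_light`, and the `fake_depth` and `fake_model` stand-ins) is a placeholder for the real algorithm and trained model:

```python
import numpy as np

def monitor_light(rgb, ir, depth_algo, model):
    """End-to-end light-monitoring pipeline (illustrative composition).

    Conventional stages (depth generation, feature extraction) are
    chained in front of the trained many-to-many network.
    """
    depth = depth_algo(rgb, ir)                 # conventional 3D depth step
    feats = np.array([rgb[..., 0].mean(), rgb[..., 1].mean(),
                      rgb[..., 2].mean(), ir.mean(), depth.mean()])
    flux, cct = model(feats)                    # trained network's outputs
    return flux, cct

# toy stand-ins for the real depth algorithm and trained model
fake_depth = lambda rgb, ir: ir * 0.5
fake_model = lambda f: (float(f.sum()), 2700.0 + 10 * float(f[3]))

rgb = np.full((4, 4, 3), 100.0)
ir = np.full((4, 4), 60.0)
flux, cct = monitor_light(rgb, ir, fake_depth, fake_model)
```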
S105: and inputting the image to be monitored into the light monitoring model to obtain the luminous flux and the color temperature of the lamp in the image.
Fig. 3 is a schematic diagram illustrating identification of a lamp during a test process.
Referring to fig. 4, another aspect of the embodiments of the present invention provides a machine learning-based light monitoring system, comprising:
the sample data acquisition unit 401: acquiring the RGB data, IR grayscale data and Depth data of the images in a sample image set, and acquiring the luminous flux and color temperature of the lamp in each corresponding image;
firstly, a sample image set containing lamps is acquired, and then the RGB data, IR grayscale data and Depth data of the images in the set are obtained;
it is worth noting that current 3D camera products offer a binocular structured-light (RGB + IR) scheme and a TOF (single IR camera) scheme; the embodiment of the present invention may adopt a configuration that combines the TOF scheme with an RGB camera.
Specifically, frame synchronization signals are added to the color RGB stream and the infrared IR stream so that the RGB data and IR grayscale data are acquired synchronously, and the Depth data is generated by processing the frame data with a 3D depth generation algorithm. The TOF scheme requires only the IR grayscale data to generate the Depth data.
the feature sequence and label acquisition unit 402: constructing a feature sequence from the RGB data, IR grayscale data and Depth data of each image, and taking the luminous flux and color temperature of the lamp in the corresponding image as a first label and a second label;
in fact, before this step the system further acquires the RGB data, IR grayscale data and Depth data of the images in the sample image set, determines the lamp region from the RGB data, IR grayscale data and Depth data of each image, and constructs the feature sequence from the RGB data, IR grayscale data and Depth data within the lamp region.
The lamp region is determined from the RGB data, IR grayscale data and Depth data of the image as follows: within the lamp region each of these quantities changes slowly, while beyond the lamp region its change rate drops abruptly; the lamp region is therefore delimited by taking the abrupt change in the data as the boundary.
taking the average of the R values of the pixels in the lamp region of the image as a first feature value;
taking the average of the G values of the pixels in the lamp region as a second feature value;
taking the average of the B values of the pixels in the lamp region as a third feature value;
taking the average of the IR values of the pixels in the lamp region as a fourth feature value;
taking the average of the Depth values of the pixels in the lamp region as a fifth feature value;
and assembling the first, second, third, fourth and fifth feature values into the feature sequence.
The brightness of a luminaire is characterized by its luminous flux, measured in lumens (lm). Luminous flux is the amount of light a light source emits per unit time; it is a property of the source, describes the total amount of light emitted, and is analogous to optical power. The greater the luminous flux of the light source, the more light it emits.
Color temperature is an important index of a lamp; it is a measure of the color components contained in light. Theoretically, it refers to the color an ideal black body takes on as it is heated from absolute zero (-273 °C): the black body gradually turns from black to red, then yellow, then white, and finally emits blue light. The spectral composition of the light emitted by a black body heated to a given temperature is referred to as the color temperature at that temperature, measured in kelvin (K).
the many-to-many network model training unit 403: inputting the feature sequences, first labels and second labels into a pre-established many-to-many regression network model for training to obtain a trained many-to-many network model;
the many-to-many network model includes, but is not limited to, the many-to-many recurrent neural network (RNN) model and the LSTM model.
The embodiment of the invention adopts the many-to-many recurrent neural network model:
A Recurrent Neural Network (RNN) adds lateral connections among the hidden-layer units of an ordinary back-propagation (BP) neural network; through a weight matrix, the value of a neural unit at the previous time step is passed to the current neural unit, giving the network a memory function. This makes RNNs well suited to machine learning problems with contextual or temporal structure, such as natural language processing. In a standard RNN, the input at time t includes, in addition to the input layer Xt, a recurrent edge that provides the hidden state passed on from time t-1.
The applicability of an RNN model varies with the numbers of input and output elements. As shown in fig. 2, RNNs can take many different structures; the five structures are, in turn: one-to-one, one-to-many, many-to-one, asynchronous many-to-many, and synchronous many-to-many. Different structures naturally suit different applications; these five RNN structures correspond respectively to vanilla neural networks, image caption generation, sentiment analysis, machine translation, and context prediction.
The data input of the invention is a feature sequence and the required output is the luminous flux and color temperature of the lamp, which naturally matches the two many-to-many structures of the RNN model: asynchronous and synchronous. The biggest difference between the two is that the asynchronous many-to-many model cannot exploit the associations among the features within the input feature sequence, whereas the synchronous many-to-many model can. Therefore, the embodiment of the invention selects the RNN with the synchronous many-to-many structure, i.e. the synchronous many-to-many recurrent neural network model, as the basic network structure.
the light monitoring model acquisition unit 404: fusing the model that extracts the RGB data, IR grayscale data and Depth data of the image with the trained many-to-many network model to obtain the light monitoring model;
that is, combining a conventional algorithm for extracting the RGB data, IR grayscale data and Depth data of an image with the trained many-to-many network model to obtain the light monitoring model;
in this embodiment, the RGB data and IR grayscale data of the image can be acquired directly by the camera, and the Depth data is generated by processing the RGB data and IR grayscale data with a conventional 3D depth generation algorithm, so the light monitoring model is obtained simply by combining the conventional 3D depth generation algorithm with the trained many-to-many recurrent neural network model.
The luminaire monitoring unit 405: and inputting the image to be monitored into the lamplight monitoring model to obtain the luminous flux and the color temperature of the lamp in the image.
In deployment, an intelligent terminal carrying the method is installed at the site to be monitored together with the target equipment to be monitored, the test is started, and the output result is awaited.
As shown in fig. 5, an embodiment of the present invention provides an electronic device 500, which includes a memory 510, a processor 520, and a computer program 511 stored in the memory 510 and executable on the processor 520; when the processor 520 executes the computer program 511, the machine learning-based light monitoring method of the embodiment of the present invention is implemented.
In a specific implementation, when the processor 520 executes the computer program 511, any of the embodiments corresponding to fig. 1 may be implemented.
Since the electronic device described in this embodiment is a device used to implement the data processing apparatus of the embodiment of the present invention, a person skilled in the art can, based on the method described herein, understand the specific implementation of this electronic device and its various variations. How the electronic device implements the method of the embodiment of the present invention is therefore not described in detail here; any device that a person skilled in the art uses to implement the method of the embodiment of the present invention falls within the protection scope of the present invention.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating an embodiment of a computer-readable storage medium according to an embodiment of the present invention.
As shown in fig. 6, the present embodiment provides a computer-readable storage medium 600 on which a computer program 611 is stored; when executed by a processor, the computer program 611 implements the machine learning-based light monitoring method of the present embodiment.
In a specific implementation, the computer program 611, when executed by a processor, may implement any of the embodiments corresponding to fig. 1.
It should be noted that each of the foregoing embodiments is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of the other embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The invention provides a machine learning-based light monitoring method which first acquires the RGB data, IR grayscale data and Depth data of the images in a sample image set, together with the luminous flux and color temperature of the lamp in each corresponding image; constructs a feature sequence from the RGB data, IR grayscale data and Depth data of each image, and uses the luminous flux and color temperature of the lamp in the corresponding image as a first label and a second label; inputs the feature sequence, the first label and the second label into a pre-trained many-to-many regression network model for training to obtain a trained many-to-many network model; fuses the model that extracts the RGB data, IR grayscale data and Depth data of an image with the trained many-to-many network model to obtain the light monitoring model; and inputs an image to be monitored into the light monitoring model to obtain the luminous flux and color temperature of the lamp in the image. By combining image recognition with machine learning, the method achieves automatic light monitoring of lamps during both development and production without manual intervention; the test process is fast, the results are accurate, and the cost is low.
Using the RGB data, IR grayscale data and Depth data of the image as input feature values comprehensively characterizes the attributes of the image, making the trained model more accurate and the test results more reliable.
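The five input feature values described above (the mean R, G, B, IR and Depth values over the image, or over the detected lamp region per claim 4) can be computed in a few lines; `build_feature_sequence` is a hypothetical helper name, not from the patent:

```python
import numpy as np

def build_feature_sequence(rgb: np.ndarray, ir: np.ndarray,
                           depth: np.ndarray) -> np.ndarray:
    """Build the 5-value feature sequence: mean R, G, B, IR and Depth."""
    return np.array([
        rgb[..., 0].mean(),   # first feature value: mean R
        rgb[..., 1].mean(),   # second feature value: mean G
        rgb[..., 2].mean(),   # third feature value: mean B
        ir.mean(),            # fourth feature value: mean IR
        depth.mean(),         # fifth feature value: mean Depth
    ], dtype=np.float32)
```

To restrict the features to the lamp region, the same function would simply be applied to the cropped or masked arrays.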
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. The terms "comprises", "comprising", and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice it. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description is only one embodiment of the present invention, but the design concept of the present invention is not limited thereto; any insubstantial modification made using this design concept falls within the protection scope of the present invention.

Claims (10)

1. A light monitoring method based on machine learning is characterized by comprising the following steps:
acquiring RGB data, IR grayscale data and Depth data of the images in a sample image set, and acquiring the luminous flux and color temperature of the lamp in each corresponding image;
constructing a characteristic sequence according to RGB data, IR gray scale data and Depth data of the image, and taking luminous flux and color temperature of a lamp in the corresponding image as a first label and a second label;
inputting the characteristic sequence, the first label and the second label into a pre-trained many-to-many regression network model for training to obtain a trained many-to-many network model;
fusing the model of the RGB data, the IR gray data and the Depth data of the extracted image with the trained many-to-many network model to obtain a light monitoring model;
and inputting the image to be monitored into the lamplight monitoring model to obtain the luminous flux and the color temperature of the lamp in the image.
2. The machine learning-based light monitoring method according to claim 1, wherein a feature sequence is constructed according to RGB data, IR gray data and Depth data of the image, specifically:
taking the average value of the R values of the pixels in the image as a first characteristic value;
taking the average value of the G values of the pixels in the image as a second characteristic value;
taking the average value of the B values of the pixels in the image as a third characteristic value;
taking the average value of the IR values of the pixels in the image as a fourth characteristic value;
taking the average value of Depth values of pixels in the image as a fifth characteristic value;
and constructing the first characteristic value, the second characteristic value, the third characteristic value, the fourth characteristic value and the fifth characteristic value into a characteristic sequence.
3. The machine learning-based light monitoring method according to claim 1, wherein the many-to-many network model includes, but is not limited to: a many-to-many recurrent neural network model and an LSTM model.
4. The machine learning-based light monitoring method according to claim 1, further comprising:
the method comprises the steps of obtaining RGB data, IR gray data and Depth data of images in a sample image set, determining a lamp area according to the RGB data, the IR gray data and the Depth data of the images in the sample image set, and constructing a feature sequence according to the RGB data, the IR gray data and the Depth data in the lamp area.
5. A light monitoring system based on machine learning, comprising:
a sample data acquisition unit: acquiring RGB data, IR grayscale data and Depth data of the images in a sample image set, and acquiring the luminous flux and color temperature of the lamp in each corresponding image;
a characteristic sequence and label obtaining unit: constructing a characteristic sequence according to RGB data, IR gray scale data and Depth data of the image, and taking luminous flux and color temperature of a lamp in the corresponding image as a first label and a second label;
a many-to-many network model training unit: inputting the characteristic sequence, the first label and the second label into a pre-trained many-to-many regression network model for training to obtain a trained many-to-many network model;
the light monitoring model acquisition unit: fusing the model of the RGB data, the IR gray data and the Depth data of the extracted image with the trained many-to-many network model to obtain a light monitoring model;
a lamp monitoring unit: inputting the image to be monitored into the light monitoring model to obtain the luminous flux and color temperature of the lamp in the image.
6. A light monitoring system based on machine learning according to claim 5, characterized in that a feature sequence is constructed according to RGB data, IR gray data and Depth data of an image, specifically:
taking the average value of the R values of the pixels in the image as a first characteristic value;
taking the average value of the G values of the pixels in the image as a second characteristic value;
taking the average value of the B values of the pixels in the image as a third characteristic value;
taking the average IR value of the pixels in the image as a fourth characteristic value;
taking the average value of Depth values of pixels in the image as a fifth characteristic value;
and constructing the first characteristic value, the second characteristic value, the third characteristic value, the fourth characteristic value and the fifth characteristic value as a characteristic sequence.
7. The machine learning-based light monitoring system according to claim 5, wherein the many-to-many network model includes, but is not limited to: a many-to-many recurrent neural network model and an LSTM model.
8. A machine learning based lighting monitoring system as claimed in claim 5 further comprising:
the method comprises the steps of obtaining RGB data, IR gray data and Depth data of images in a sample image set, determining a lamp area according to the RGB data, the IR gray data and the Depth data of the images in the sample image set, and constructing a feature sequence according to the RGB data, the IR gray data and the Depth data in the lamp area.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, wherein the processor implements the method steps of any of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 4.
CN202111679344.6A 2021-12-31 2021-12-31 Lamplight monitoring method and system based on machine learning Pending CN114529506A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111679344.6A CN114529506A (en) 2021-12-31 2021-12-31 Lamplight monitoring method and system based on machine learning

Publications (1)

Publication Number Publication Date
CN114529506A true CN114529506A (en) 2022-05-24

Family

ID=81620924

Country Status (1)

Country Link
CN (1) CN114529506A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202012103449U1 (en) * 2012-09-11 2012-09-28 Koninklijke Philips Electronics N.V. Remote control unit for light source
CN109523617A (en) * 2018-10-15 2019-03-26 中山大学 A kind of illumination estimation method based on monocular-camera
US20190373704A1 (en) * 2016-10-13 2019-12-05 Osram Gmbh A method of identifying light sources and a corresponding system and product
CN111556630A (en) * 2020-06-29 2020-08-18 东北大学 Intelligent lamp self-adaptive scene recognition system and method based on Bayesian network
CN111698409A (en) * 2020-06-23 2020-09-22 韶关市启之信息技术有限公司 Indoor photographing light dimming method
CN112903093A (en) * 2021-02-01 2021-06-04 清华大学 Near field distribution photometry measuring method and device based on deep learning
CN112926497A (en) * 2021-03-20 2021-06-08 杭州知存智能科技有限公司 Face recognition living body detection method and device based on multi-channel data feature fusion
CN112926498A (en) * 2021-03-20 2021-06-08 杭州知存智能科技有限公司 In-vivo detection method based on multi-channel fusion and local dynamic generation of depth information
CN113326935A (en) * 2019-05-26 2021-08-31 中国计量大学上虞高等研究院有限公司 Dimming optimization method for sleep environment
CN113610936A (en) * 2021-09-16 2021-11-05 北京世纪好未来教育科技有限公司 Color temperature determination method, device, equipment and medium
CN113598722A (en) * 2019-04-24 2021-11-05 中国计量大学上虞高等研究院有限公司 Sleep environment illumination condition identification method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kanisius Karyono et al.: "A smart adaptive lighting system for a multifunctional room", DeSE, 31 December 2019 (2019-12-31) *
Li Luan et al.: "A natural-dimming LED energy-saving lamp based on machine learning", Computer Knowledge and Technology, 29 February 2020 (2020-02-29) *
Xue Shuai: "Research on key technologies of an intelligent tunnel lighting control system based on LED color temperature adjustment", China Master's Theses Full-text Database (Engineering Science and Technology II), 31 July 2021 (2021-07-31) *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination