CN111835984A - Intelligent light supplementing method and device, electronic equipment and storage medium - Google Patents

Intelligent light supplementing method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN111835984A
Authority
CN
China
Prior art keywords
image
brightness
light supplement
target
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010721961.7A
Other languages
Chinese (zh)
Other versions
CN111835984B (en)
Inventor
邹芳
李尔卫
谢树家
张亚男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202010721961.7A priority Critical patent/CN111835984B/en
Publication of CN111835984A publication Critical patent/CN111835984A/en
Application granted granted Critical
Publication of CN111835984B publication Critical patent/CN111835984B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G06V 40/176 Dynamic expression
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Educational Administration (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Educational Technology (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

The invention relates to artificial intelligence and provides an intelligent light supplement method applied to an electronic device, comprising: receiving a first image sent by a data acquisition terminal and determining position information of a target to be identified in the first image; generating a position adjustment instruction according to the position information, sending the position adjustment instruction to a light supplement lamp system, controlling the light supplement lamp system to adjust the irradiation angle of the light supplement lamp in real time, and recording the current deflection speed and illumination brightness of the light supplement lamp; acquiring a second image captured by the data acquisition terminal while the light supplement lamp is being adjusted, inputting the second image into an emotion recognition model, and outputting the emotion type of the target to be identified in the second image; and adjusting the deflection speed and/or the illumination brightness when the emotion type is an emotion type including a first preset label. The invention also relates to blockchain technology: the first image uploaded by the data acquisition terminal may be stored in a blockchain node. The method keeps the irradiation direction of the light supplement lamp pointed at the target to be identified at all times, achieving intelligent light supplement.

Description

Intelligent light supplementing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to artificial intelligence, and in particular, to an intelligent light supplement method and apparatus, an electronic device, and a storage medium.
Background
In an application scenario of remote video teaching, light supplement (fill lighting) is usually applied to the lecturer's picture in order to provide a better viewing experience for learners. However, because the irradiation angle of existing fill lights is fixed, when the lecturer walks around and the position changes significantly, light cast at a fixed angle cannot follow the lecturer to provide intelligent fill lighting. As a result, the captured video image quality is poor, for example shadows appear on the face, which affects the overall effect of remote teaching and reduces teaching quality. How to improve the image quality of captured video has therefore become a technical problem that urgently needs to be solved.
Disclosure of Invention
The invention mainly aims to provide an intelligent light supplement method and apparatus, an electronic device, and a storage medium, in order to improve the image quality of captured video.
In order to achieve the above object, the present invention provides an intelligent light supplement method applied to an electronic device, including:
receiving a first image containing a target to be identified, which is sent by a data acquisition terminal, and determining the position information of the target to be identified in the first image by utilizing a predetermined positioning algorithm;
generating a position adjusting instruction according to the position information, sending the position adjusting instruction to a preset light supplementing lamp system, controlling the light supplementing lamp system to adjust the irradiation angle of the light supplementing lamp in real time, and recording the current deflection speed and the irradiation brightness of the light supplementing lamp in a database;
acquiring a second image which is shot by the data acquisition terminal in the light supplement lamp adjusting process and contains the target to be recognized, inputting the second image into a predetermined emotion recognition model, and outputting the emotion type of the target to be recognized in the second image; and
and when the emotion type is the emotion type comprising a first preset label, adjusting the deflection speed and/or the illumination brightness of the light supplement lamp according to a preset parameter adjustment algorithm.
Preferably, before controlling the light supplement lamp system to adjust the irradiation angle of the light supplement lamp, the method further includes the following steps:
and obtaining the original illumination brightness of the light supplement lamp, reducing the brightness of a preset numerical value on the basis of the original illumination brightness to obtain a first brightness, and gradually increasing the brightness of the first brightness in the deflection process of the light supplement lamp until the original illumination brightness is recovered.
Preferably, the method further comprises the steps of:
acquiring a plurality of second images whose emotion type includes a second preset label, respectively acquiring the deflection speed and the illumination brightness of the light supplement lamp when the data acquisition terminal captured each second image, and selecting the most frequently occurring deflection speed and illumination brightness as the initial speed and the initial brightness of the light supplement lamp.
Preferably, the method further comprises the steps of:
and acquiring the ambient brightness in the acquisition area of the data acquisition terminal in real time, judging whether the ambient brightness is greater than or equal to a preset threshold value, and turning off the light supplement lamp if the ambient brightness is greater than or equal to the preset threshold value.
Preferably, the data acquisition terminal is a monocular vision camera, a binocular stereo vision camera or an omnidirectional vision camera, and the positioning algorithm is a visual positioning algorithm.
Preferably, the training process of the emotion recognition model includes:
acquiring sample images, and extracting a feature vector diagram, such as a directional gradient histogram feature vector diagram, of each sample image in the training set and the verification set according to a predetermined feature extraction algorithm;
assigning a unique emotion label to each sample image;
dividing the sample images into a training set and a verification set according to a preset proportion, wherein the number of the sample images in the training set is greater than that of the sample images in the verification set;
inputting the sample images in the training set into the emotion recognition model for training, verifying the emotion recognition model by using the verification set every other preset period, and verifying the accuracy of the emotion recognition model by using each feature vector image in the verification set and the corresponding emotion label; and
and when the accuracy is greater than a preset threshold value, finishing training to obtain the emotion recognition model.
Preferably, the parameter adjustment algorithm comprises: reducing the deflection speed of the light supplement lamp by a preset value; and/or reducing the illumination brightness of the light supplement lamp by a preset value.
In order to achieve the above object, the present invention further provides an intelligent light supplement device, including:
the positioning module is used for receiving a first image containing a target to be identified and sent by a data acquisition terminal, and determining the position information of the target to be identified in the first image by utilizing a predetermined positioning algorithm;
the light supplement module is used for generating a position adjustment instruction according to the position information, sending the position adjustment instruction to a preset light supplement lamp system, controlling the light supplement lamp system to adjust the irradiation angle of the light supplement lamp in real time, and recording the current deflection speed and the irradiation brightness of the light supplement lamp in a database;
the recognition module is used for acquiring a second image which is shot by the data acquisition terminal in the light supplement lamp adjustment process and contains the target to be recognized, inputting the second image into a predetermined emotion recognition model, and outputting the emotion type of the target to be recognized in the second image; and
and the adjusting module is used for adjusting the deflection speed and/or the illumination brightness of the light supplementing lamp according to a preset parameter adjusting algorithm when the emotion type is the emotion type containing the first preset label.
To achieve the above object, the present invention further provides an electronic device, including:
a memory storing at least one instruction; and
and the processor executes the instructions stored in the memory to realize the intelligent light supplement method.
To achieve the above object, the present invention further provides a computer-readable storage medium including a storage data area and a storage program area, wherein the storage data area stores data created according to the use of a blockchain node, the storage program area stores a computer program, and the computer program, when executed by a processor, implements the steps of the intelligent light supplement method.
According to the intelligent light supplement method and device, the electronic device and the storage medium, a first image containing the target to be identified and uploaded by the data acquisition terminal is obtained in real time, and the position information of the target to be identified in the first image is located by using a predetermined positioning algorithm; a position adjustment instruction is generated according to the position information and sent to a preset light supplement lamp system, the light supplement lamp system is controlled to adjust the irradiation angle of the light supplement lamp in real time, and the current deflection speed and illumination brightness of the light supplement lamp are recorded in a database; a second image containing the target to be identified, captured by the data acquisition terminal while the light supplement lamp is being adjusted, is acquired and input into a predetermined emotion recognition model, which outputs the emotion type of the target to be identified in the second image; and when the emotion type is an emotion type including the first preset label, the deflection speed and/or the illumination brightness are adjusted according to a preset parameter adjustment algorithm. The invention can acquire the specific position of the target to be identified in real time and control the irradiation direction of the light supplement lamp so that it always faces the target to be identified, thereby achieving intelligent light supplement. Compared with the prior art, in which the irradiation angle of the fill light is fixed, the invention can supplement light intelligently as the lecturer's position moves, improving the image quality of the captured video, so that remote teaching achieves a good overall effect and a high user experience.
Drawings
Fig. 1 is a schematic flow chart illustrating a method for implementing intelligent light supplement according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of an intelligent light supplement device according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the internal structure of an electronic device implementing the intelligent light supplement method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the descriptions relating to "first", "second", etc. in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments of the present invention may be combined with each other, but only on the basis that they can be realized by those skilled in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist and not within the protection scope of the present invention.
The invention provides an intelligent light supplementing method. Fig. 1 is a schematic flow chart of an intelligent light supplement method according to an embodiment of the present invention. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the intelligent light supplement method includes:
s110, receiving a first image containing a target to be identified sent by a data acquisition terminal, and determining the position information of the target to be identified in the first image by using a predetermined positioning algorithm.
This solution is explained by taking remote video teaching as an example. Because the irradiation angle of existing lamps is fixed, when the target to be identified (for example, the lecturer) captured by the camera walks around and the position changes significantly, light cast at a fixed angle cannot follow the target to provide intelligent fill lighting. The captured video image quality is therefore poor, for example shadows appear on the face, which affects the overall effect of remote teaching and reduces teaching quality.
Therefore, in this embodiment, a binocular camera is used as the data acquisition terminal, the first image containing the target to be identified uploaded by the data acquisition terminal is acquired in real time, the position information of the target to be identified in the first image is determined by using a positioning algorithm, and the deflection angle of the light supplement lamp is adjusted in real time to follow the target according to its position information, so that the light supplement lamp always faces the target to be identified, avoiding shadows in the video image captured by the data acquisition terminal that would affect teaching quality.
Specifically, the data acquisition terminal is a monocular vision camera, a binocular stereo vision camera or an omnidirectional vision camera, and the positioning algorithm is a vision positioning algorithm.
In this embodiment, a visual positioning algorithm acquires an image of an object with a visual sensor and then processes the image with a computer to obtain the position information of the object. According to the number of cameras used, target positioning based on computer vision can be divided into monocular visual positioning, binocular stereo visual positioning and omnidirectional visual positioning. Monocular visual positioning completes the positioning work with a single visual sensor; binocular stereo visual positioning imitates the way humans perceive distance using binocular cues to obtain three-dimensional information, i.e., it completes the positioning work with two visual sensors; and omnidirectional visual positioning completes the positioning work with an omnidirectional visual sensor.
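As a non-limiting illustration of the binocular (stereo) branch of this positioning step, the following Python sketch estimates the target's depth and horizontal offset from a rectified stereo pair using OpenCV. The function name, the use of block matching, and the face_box input are assumptions of this sketch and not part of the original disclosure.

```python
# Illustrative only: binocular stereo positioning of the target to be identified.
# Assumes a calibrated, rectified stereo pair and a face/body detection that
# already provides the target's bounding box in the left image.
import cv2
import numpy as np

def locate_target(left_img, right_img, face_box, focal_px, baseline_m):
    gray_l = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)

    # Block-matching disparity; OpenCV returns fixed-point values scaled by 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0

    x, y, w, h = face_box
    region = disparity[y:y + h, x:x + w]
    valid = region[region > 0]
    if valid.size == 0:
        return None                      # no reliable disparity for the target
    d = float(np.median(valid))          # robust disparity estimate

    depth_m = focal_px * baseline_m / d  # Z = f * B / d
    # Horizontal offset of the target from the optical axis, later turned into
    # the lamp's pan angle.
    cx = x + w / 2.0 - left_img.shape[1] / 2.0
    pan_deg = float(np.degrees(np.arctan2(cx, focal_px)))
    return depth_m, pan_deg
```

The recovered depth and pan angle would then feed the position adjustment instruction of step S120.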
And S120, generating a position adjusting instruction according to the position information, sending the position adjusting instruction to a preset light supplementing lamp system, controlling the light supplementing lamp system to adjust the irradiation angle of the light supplementing lamp in real time, and recording the current deflection speed and the irradiation brightness of the light supplementing lamp in a database.
In this embodiment, the preset light supplement lamp system controls the light supplement lamp to adjust its irradiation angle in real time according to the position adjustment instruction containing the position information of the target to be identified, so that the light supplement lamp always faces the target to be identified and intelligent light supplement is achieved. The current deflection speed and illumination brightness of the light supplement lamp are recorded in a database.
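A minimal sketch of how the position adjustment instruction could be packaged for the light supplement lamp system and how the lamp's current deflection speed and illumination brightness could be recorded in a database; the JSON message layout, the lamp_client object and the SQLite table are illustrative assumptions, not part of the original disclosure.

```python
# Illustrative only: instruction dispatch and state logging for the fill light.
import json
import sqlite3
import time

def send_position_instruction(lamp_client, pan_deg, tilt_deg):
    # lamp_client is assumed to be any transport (TCP socket, serial port, ...)
    # exposing a send(bytes) method.
    instruction = json.dumps({"cmd": "adjust_angle",
                              "pan": round(pan_deg, 2),
                              "tilt": round(tilt_deg, 2)})
    lamp_client.send(instruction.encode("utf-8"))

def record_lamp_state(db_path, deflection_speed, brightness):
    # Persist the lamp's current deflection speed and illumination brightness,
    # as required before the later parameter adjustment step.
    with sqlite3.connect(db_path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS lamp_log "
                   "(ts REAL, deflection_speed REAL, brightness REAL)")
        db.execute("INSERT INTO lamp_log VALUES (?, ?, ?)",
                   (time.time(), deflection_speed, brightness))
```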
In another embodiment, before controlling the fill-in light system to adjust the illumination angle of the fill-in light, the method further includes the following steps:
and obtaining the original illumination brightness of the light supplement lamp, reducing the brightness of a preset numerical value on the basis of the original illumination brightness to obtain a first brightness, and gradually increasing the brightness of the first brightness in the deflection process of the light supplement lamp until the original illumination brightness is recovered.
In this embodiment, the original illumination brightness of the fill light is obtained, for example 500 mcd; the brightness is reduced by a preset value (determined according to the actual situation), for example by 100 mcd, to obtain a first brightness; and the brightness is then gradually increased from the first brightness during the deflection of the fill light until the original illumination brightness, for example 500 mcd, is restored, so that the target to be identified (for example, a lecturer) can adapt to the change in illumination of the fill light.
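The dim-then-ramp behaviour described above could be realized along the following lines; set_brightness() and the step/interval values are assumed for illustration, while the 500 mcd and 100 mcd figures follow the example in the text.

```python
# Illustrative only: lower the brightness before deflection starts, then ramp
# back to the original level so the target can adapt to the change.
import time

def ramp_brightness_during_deflection(lamp, original_mcd=500, drop_mcd=100,
                                      step_mcd=20, interval_s=0.2):
    first_brightness = original_mcd - drop_mcd   # e.g. 400 mcd
    lamp.set_brightness(first_brightness)
    current = first_brightness
    while current < original_mcd:                # gradual restoration
        current = min(current + step_mcd, original_mcd)
        lamp.set_brightness(current)
        time.sleep(interval_s)
```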
S130, acquiring a second image containing the target to be recognized, which is shot by the data acquisition terminal in the light supplement lamp adjusting process, inputting the second image into a predetermined emotion recognition model, and outputting the emotion type of the target to be recognized in the second image.
In this embodiment, while the irradiation angle is being adjusted, the deflection of the fill light and/or its illumination brightness may not meet actual requirements; for example, the deflection speed may be too fast or the illumination brightness too strong, causing discomfort to the target to be identified. Therefore, in this embodiment, a second image containing the target to be identified, captured by the data acquisition terminal during the adjustment of the fill light, is acquired and input into the emotion recognition model, which outputs the emotion label of the target to be identified in the second image (such as "usual", "happy", "sad", "angry", "fear", "disgust", and the like).
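A brief sketch of this recognition step, assuming the model was trained on histogram-of-oriented-gradients (HOG) features as described in the training process below; the function name, face_box input and preprocessing are illustrative assumptions.

```python
# Illustrative only: classify the emotion of the target in the second image.
import cv2
from skimage.feature import hog

def recognise_emotion(model, second_image, face_box, size=(64, 64)):
    x, y, w, h = face_box
    face = cv2.cvtColor(second_image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    face = cv2.resize(face, size)                # same size as training samples
    features = hog(face, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2)).reshape(1, -1)
    return model.predict(features)[0]            # e.g. "usual", "happy", "angry"
```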
In a further embodiment, the training process of the emotion recognition model comprises:
acquiring sample images, and extracting a feature vector diagram, such as a directional gradient histogram feature vector diagram, of each sample image in the training set and the verification set according to a predetermined feature extraction algorithm;
assigning a unique emotion label to each sample image;
dividing the sample images into a training set and a verification set according to a preset proportion, wherein the number of the sample images in the training set is greater than that of the sample images in the verification set;
inputting the sample images in the training set into the emotion recognition model for training, verifying the emotion recognition model by using the verification set every other preset period, and verifying the accuracy of the emotion recognition model by using each feature vector image in the verification set and the corresponding emotion label; and
and when the accuracy is greater than a preset threshold value, finishing training to obtain the emotion recognition model.
In this embodiment, the emotion recognition model is a Support Vector Machine (SVM), a common supervised learning model in the field of machine learning that is generally used for pattern recognition, classification, and regression analysis. The training process of the emotion recognition model includes:
acquiring a preset number (for example, 100,000) of sample images, and extracting a feature vector diagram, such as a Histogram of Oriented Gradients (HOG) feature vector diagram, of each sample image in the training set and the verification set according to a predetermined feature extraction algorithm;
assigning a unique emotion label to each sample image;
dividing the sample images into a training set and a verification set according to a preset proportion (for example, 4:1), wherein the number of the sample images in the training set is larger than that in the verification set;
inputting the sample images in the training set into the emotion recognition model for training, verifying the emotion recognition model by using the verification set every preset period (for example, every 1000 times of iteration), and verifying the accuracy of the emotion recognition model by using each feature vector image in the verification set and the corresponding emotion label; and
and when the accuracy is greater than a preset threshold (for example, 95%), ending the training to obtain the emotion recognition model.
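The following sketch shows one way the HOG + SVM training flow described above could be assembled with scikit-image and scikit-learn. It collapses the periodic verification (for example, every 1000 iterations) into a single validation pass over the 4:1 split; the helper name and the RBF-kernel SVC are assumptions rather than part of the original disclosure.

```python
# Illustrative only: HOG feature extraction plus SVM training and validation.
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_emotion_model(images, labels, accuracy_threshold=0.95):
    # images: grayscale face images resized to a common shape;
    # labels: one emotion label per image.
    features = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for img in images])
    X_train, X_val, y_train, y_val = train_test_split(
        features, labels, test_size=0.2, stratify=labels)   # 4:1 split

    model = SVC(kernel="rbf")
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_val, model.predict(X_val))
    if accuracy <= accuracy_threshold:
        raise RuntimeError(f"validation accuracy {accuracy:.3f} below threshold")
    return model
```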
And S140, when the emotion type is the emotion type comprising the first preset label, adjusting the deflection speed and/or the illumination brightness of the light supplement lamp according to a preset parameter adjustment algorithm.
In this embodiment, when the emotion type output by the emotion recognition model is an emotion type including a first preset label, for example "angry", it indicates that the deflection speed and illumination brightness of the fill light when the second image was captured gave the target to be identified (for example, a lecturer) a bad experience. The deflection speed and/or the illumination brightness are then modified according to a preset parameter adjustment algorithm.
Specifically, the parameter adjustment algorithm includes: reducing the deflection speed of the light supplement lamp by a preset value; and/or reducing the illumination brightness of the light supplement lamp by a preset value.
In this embodiment, by reducing both the deflection speed and the illumination brightness by preset values at the same time, or by adjusting only one of the two, the fill light can give the target to be identified (for example, a lecturer) a better experience while it moves.
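A possible concrete form of the parameter adjustment algorithm: when the first preset label (for example "angry") is detected, the deflection speed and/or the illumination brightness are reduced by preset values. The step sizes, lower bounds and lamp interface are illustrative assumptions.

```python
# Illustrative only: reduce deflection speed and/or brightness by preset values.
def adjust_lamp_parameters(lamp, emotion, first_preset_label="angry",
                           speed_step=5.0, brightness_step_mcd=50,
                           min_speed=1.0, min_brightness_mcd=100):
    if emotion != first_preset_label:
        return
    lamp.set_deflection_speed(max(lamp.deflection_speed - speed_step, min_speed))
    lamp.set_brightness(max(lamp.brightness - brightness_step_mcd,
                            min_brightness_mcd))
```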
In another embodiment, the method further comprises the steps of:
acquiring a plurality of second images whose emotion type includes a second preset label, respectively acquiring the deflection speed and the illumination brightness of the light supplement lamp when the data acquisition terminal captured each second image, and selecting the most frequently occurring deflection speed and illumination brightness as the initial speed and the initial brightness of the light supplement lamp.
In this embodiment, when the emotion recognition model recognizes that the emotion type of the target to be identified in the second image is a second preset type, for example "happy", it indicates that the deflection speed and illumination brightness of the fill light when the second image was captured were appropriate and gave the target to be identified (for example, a lecturer) a better experience. Meanwhile, in order to reduce errors, all second images whose emotion type includes the second preset label are acquired, the deflection speed and illumination brightness of the light supplement lamp at the time the data acquisition terminal captured each second image are respectively obtained, and the most frequently occurring deflection speed and illumination brightness are used as the initial speed and initial brightness of the light supplement lamp when it first starts operating.
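One way to derive the lamp's initial speed and brightness from the second images whose emotion type carries the second preset label (for example "happy"), taking the most frequently occurring values; the record format is a stand-in for the database rows logged earlier and is an assumption of this sketch.

```python
# Illustrative only: take the mode of the logged speeds and brightnesses among
# the "positive" second images as the lamp's start-up parameters.
from collections import Counter

def initial_lamp_parameters(records, second_preset_label="happy"):
    # records: iterable of (emotion, deflection_speed, brightness), one per
    # second image; assumes at least one record carries the preset label.
    positives = [(s, b) for emotion, s, b in records
                 if emotion == second_preset_label]
    initial_speed = Counter(s for s, _ in positives).most_common(1)[0][0]
    initial_brightness = Counter(b for _, b in positives).most_common(1)[0][0]
    return initial_speed, initial_brightness
```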
In another embodiment, the method further comprises the steps of:
and acquiring the ambient brightness in the acquisition area of the data acquisition terminal in real time, judging whether the ambient brightness is greater than or equal to a preset threshold value, and turning off the light supplement lamp if the ambient brightness is greater than or equal to the preset threshold value.
In this embodiment, the ambient brightness in the acquisition area of the data acquisition terminal is acquired in real time, and it is determined whether the ambient brightness is greater than or equal to a preset threshold, for example 500 mcd. If so, the environment in which the target to be identified is located is already bright enough to be comfortable, the fill light is not needed, and it can be turned off, achieving intelligent energy saving.
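The ambient-brightness check might look as follows; the sensor wrapper and lamp interface are assumptions, and the 500 mcd threshold follows the example above.

```python
# Illustrative only: switch the fill light off once the ambient brightness in
# the acquisition area reaches the preset threshold.
def update_lamp_for_ambient(lamp, read_ambient_brightness, threshold_mcd=500):
    if read_ambient_brightness() >= threshold_mcd:
        lamp.turn_off()   # environment already bright enough; save energy
```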
For a detailed description of the above steps, please refer to the following description of fig. 2, a functional block diagram of an embodiment of the intelligent light supplement device 100, and fig. 3, a schematic structural diagram of the electronic device 1 implementing an embodiment of the intelligent light supplement method.
Fig. 2 is a functional block diagram of the intelligent light supplement device 100 according to the present invention.
The intelligent light supplement device 100 of the present invention can be installed in an electronic device. According to the realized functions, the intelligent light supplement device 100 may include a positioning module 110, a light supplement module 120, an identification module 130, and an adjustment module 140. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the positioning module 110 is configured to receive a first image containing a target to be identified sent by a data acquisition terminal, and determine position information of the target to be identified in the first image by using a predetermined positioning algorithm.
This solution is explained by taking remote video teaching as an example. Because the irradiation angle of existing lamps is fixed, when the target to be identified (for example, the lecturer) captured by the camera walks around and the position changes significantly, light cast at a fixed angle cannot follow the target to provide intelligent fill lighting. The captured video image quality is therefore poor, for example shadows appear on the face, which affects the overall effect of remote teaching and reduces teaching quality.
Therefore, in this embodiment, a binocular camera is used as the data acquisition terminal, the first image containing the target to be identified uploaded by the data acquisition terminal is acquired in real time, the position information of the target to be identified in the first image is determined by using a positioning algorithm, and the deflection angle of the light supplement lamp is adjusted in real time to follow the target according to its position information, so that the light supplement lamp always faces the target to be identified, avoiding shadows in the video image captured by the data acquisition terminal that would affect teaching quality.
Specifically, the data acquisition terminal is a monocular vision camera, a binocular stereo vision camera or an omnidirectional vision camera, and the positioning algorithm is a vision positioning algorithm.
In this embodiment, a visual positioning algorithm acquires an image of an object with a visual sensor and then processes the image with a computer to obtain the position information of the object. According to the number of cameras used, target positioning based on computer vision can be divided into monocular visual positioning, binocular stereo visual positioning and omnidirectional visual positioning. Monocular visual positioning completes the positioning work with a single visual sensor; binocular stereo visual positioning imitates the way humans perceive distance using binocular cues to obtain three-dimensional information, i.e., it completes the positioning work with two visual sensors; and omnidirectional visual positioning completes the positioning work with an omnidirectional visual sensor.
And the light supplement module 120 is configured to generate a position adjustment instruction according to the position information, send the position adjustment instruction to a preset light supplement lamp system, control the light supplement lamp system in real time to adjust an irradiation angle of the light supplement lamp, and record the current deflection speed and the irradiation brightness of the light supplement lamp in a database.
In this embodiment, the preset light supplement lamp system controls the light supplement lamp to adjust its irradiation angle in real time according to the position adjustment instruction containing the position information of the target to be identified, so that the light supplement lamp always faces the target to be identified and intelligent light supplement is achieved. The current deflection speed and illumination brightness of the light supplement lamp are recorded in a database.
In another embodiment, before controlling the light supplement lamp system to adjust the irradiation angle of the light supplement lamp, the apparatus further comprises a module configured to:
and obtaining the original illumination brightness of the light supplement lamp, reducing the brightness of a preset numerical value on the basis of the original illumination brightness to obtain a first brightness, and gradually increasing the brightness of the first brightness in the deflection process of the light supplement lamp until the original illumination brightness is recovered.
In this embodiment, the original illumination brightness of the fill light is obtained, for example 500 mcd; the brightness is reduced by a preset value (determined according to the actual situation), for example by 100 mcd, to obtain a first brightness; and the brightness is then gradually increased from the first brightness during the deflection of the fill light until the original illumination brightness, for example 500 mcd, is restored, so that the target to be identified (for example, a lecturer) can adapt to the change in illumination of the fill light.
The identification module 130 is configured to acquire a second image which is shot by the data acquisition terminal in a light supplement lamp adjustment process and contains the target to be identified, input the second image into a predetermined emotion recognition model, and output an emotion type of the target to be identified in the second image.
In this embodiment, while the irradiation angle is being adjusted, the deflection of the fill light and/or its illumination brightness may not meet actual requirements; for example, the deflection speed may be too fast or the illumination brightness too strong, causing discomfort to the target to be identified. Therefore, in this embodiment, a second image containing the target to be identified, captured by the data acquisition terminal during the adjustment of the fill light, is acquired and input into the emotion recognition model, which outputs the emotion label of the target to be identified in the second image (such as "usual", "happy", "sad", "angry", "fear", "disgust", and the like).
In a further embodiment, the training process of the emotion recognition model comprises:
acquiring sample images, and extracting a feature vector diagram, such as a directional gradient histogram feature vector diagram, of each sample image in the training set and the verification set according to a predetermined feature extraction algorithm;
assigning a unique emotion label to each sample image;
dividing the sample images into a training set and a verification set according to a preset proportion, wherein the number of the sample images in the training set is greater than that of the sample images in the verification set;
inputting the sample images in the training set into the emotion recognition model for training, verifying the emotion recognition model by using the verification set every other preset period, and verifying the accuracy of the emotion recognition model by using each feature vector image in the verification set and the corresponding emotion label; and
and when the accuracy is greater than a preset threshold value, finishing training to obtain the emotion recognition model.
In this embodiment, the emotion recognition model is a Support Vector Machine (SVM), a common supervised learning model in the field of machine learning that is generally used for pattern recognition, classification, and regression analysis. The training process of the emotion recognition model includes:
acquiring a preset number (for example, 100,000) of sample images, and extracting a feature vector diagram, such as a Histogram of Oriented Gradients (HOG) feature vector diagram, of each sample image in the training set and the verification set according to a predetermined feature extraction algorithm;
assigning a unique emotion label to each sample image;
dividing the sample images into a training set and a verification set according to a preset proportion (for example, 4:1), wherein the number of the sample images in the training set is larger than that in the verification set;
inputting the sample images in the training set into the emotion recognition model for training, verifying the emotion recognition model by using the verification set every preset period (for example, every 1000 times of iteration), and verifying the accuracy of the emotion recognition model by using each feature vector image in the verification set and the corresponding emotion label; and
and when the accuracy is greater than a preset threshold (for example, 95%), ending the training to obtain the emotion recognition model.
And the adjusting module 140 is configured to adjust the deflection speed and/or the illumination brightness of the light supplement lamp according to a preset parameter adjusting algorithm when the emotion type is an emotion type including a first preset tag.
In this embodiment, when the emotion type output by the emotion recognition model is an emotion type including a first preset label, for example "angry", it indicates that the deflection speed and illumination brightness of the fill light when the second image was captured gave the target to be identified (for example, a lecturer) a bad experience. The deflection speed and/or the illumination brightness are then modified according to a preset parameter adjustment algorithm.
Specifically, the parameter adjustment algorithm includes: reducing the deflection speed of the light supplement lamp by a preset value; and/or reducing the illumination brightness of the light supplement lamp by a preset value.
In this embodiment, by reducing both the deflection speed and the illumination brightness by preset values at the same time, or by adjusting only one of the two, the fill light can give the target to be identified (for example, a lecturer) a better experience while it moves.
In another embodiment, the apparatus further comprises means for:
acquiring a plurality of second images whose emotion type includes a second preset label, respectively acquiring the deflection speed and the illumination brightness of the light supplement lamp when the data acquisition terminal captured each second image, and selecting the most frequently occurring deflection speed and illumination brightness as the initial speed and the initial brightness of the light supplement lamp.
In this embodiment, when the emotion recognition model recognizes that the emotion type of the target to be identified in the second image is a second preset type, for example "happy", it indicates that the deflection speed and illumination brightness of the fill light when the second image was captured were appropriate and gave the target to be identified (for example, a lecturer) a better experience. Meanwhile, in order to reduce errors, all second images whose emotion type includes the second preset label are acquired, the deflection speed and illumination brightness of the light supplement lamp at the time the data acquisition terminal captured each second image are respectively obtained, and the most frequently occurring deflection speed and illumination brightness are used as the initial speed and initial brightness of the light supplement lamp when it first starts operating.
In another embodiment, the apparatus further comprises means for:
and acquiring the ambient brightness in the acquisition area of the data acquisition terminal in real time, judging whether the ambient brightness is greater than or equal to a preset threshold value, and turning off the light supplement lamp if the ambient brightness is greater than or equal to the preset threshold value.
In this embodiment, the ambient brightness in the acquisition area of the data acquisition terminal is acquired in real time, and it is determined whether the ambient brightness is greater than or equal to a preset threshold, for example 500 mcd. If so, the environment in which the target to be identified is located is already bright enough to be comfortable, the fill light is not needed, and it can be turned off, achieving intelligent energy saving.
Fig. 3 is a schematic structural diagram of an electronic device implementing the intelligent light supplement method according to the present invention.
The electronic device 1 may include a processor 12, a memory 11 and a bus, and may further include a computer program, such as the intelligent light supplement program 10, stored in the memory 11 and executable on the processor 12.
Wherein the memory 11 includes at least one type of readable storage medium, and the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like. The readable storage medium includes flash memory, removable hard disks, multimedia cards, card type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used to store not only application software installed in the electronic device 1 and various data, such as codes of the intelligent supplementary lighting program 10, but also temporarily store data that has been output or will be output.
The processor 12 may be formed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be formed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 12 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by operating or executing programs or modules (e.g., an intelligent light supplement program, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 12 or the like.
Fig. 3 shows only an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 12 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface 13, and optionally, the network interface 13 may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The intelligent light supplement program 10 stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions which, when run in the processor 12, can implement:
receiving a first image containing a target to be identified, which is sent by a data acquisition terminal, and determining the position information of the target to be identified in the first image by utilizing a predetermined positioning algorithm;
generating a position adjusting instruction according to the position information, sending the position adjusting instruction to a preset light supplementing lamp system, controlling the light supplementing lamp system to adjust the irradiation angle of the light supplementing lamp in real time, and recording the current deflection speed and the irradiation brightness of the light supplementing lamp in a database;
acquiring a second image which is shot by the data acquisition terminal in the light supplement lamp adjusting process and contains the target to be recognized, inputting the second image into a predetermined emotion recognition model, and outputting the emotion type of the target to be recognized in the second image; and
and when the emotion type is the emotion type comprising a first preset label, adjusting the deflection speed and/or the illumination brightness of the light supplement lamp according to a preset parameter adjustment algorithm.
In another embodiment, the program further performs the steps of:
and obtaining the original illumination brightness of the light supplement lamp, reducing the brightness of a preset numerical value on the basis of the original illumination brightness to obtain a first brightness, and gradually increasing the brightness of the first brightness in the deflection process of the light supplement lamp until the original illumination brightness is recovered.
In another embodiment, the program further performs the steps of:
and acquiring a plurality of second images of emotion types including a second preset label, respectively acquiring the deflection speed and the illumination brightness of the light supplement lamp when the data acquisition terminal shoots each second image, and selecting the image with the highest deflection speed and the highest illumination brightness as the initial speed and the initial brightness of the light supplement lamp.
In another embodiment, the program further performs the steps of:
and acquiring the ambient brightness in the acquisition area of the data acquisition terminal in real time, judging whether the ambient brightness is greater than or equal to a preset threshold value, and turning off the light supplement lamp if the ambient brightness is greater than or equal to the preset threshold value.
Specifically, for the specific implementation of the above instructions by the processor 12, reference may be made to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not repeated here. It is emphasized that, in order to further ensure the privacy and security of the collected data, the collected data may also be stored in a node of a blockchain.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An intelligent light supplementing method applied to an electronic device, characterized by comprising the following steps:
receiving a first image containing a target to be identified, which is sent by a data acquisition terminal, and determining the position information of the target to be identified in the first image by utilizing a predetermined positioning algorithm;
generating a position adjusting instruction according to the position information, sending the position adjusting instruction to a preset light supplementing lamp system, controlling the light supplementing lamp system to adjust the irradiation angle of the light supplementing lamp in real time, and recording the current deflection speed and the irradiation brightness of the light supplementing lamp in a database;
acquiring a second image which is shot by the data acquisition terminal in the light supplement lamp adjusting process and contains the target to be recognized, inputting the second image into a predetermined emotion recognition model, and outputting the emotion type of the target to be recognized in the second image; and
when the emotion type is an emotion type carrying a first preset label, adjusting the deflection speed and/or the illumination brightness of the light supplement lamp according to a preset parameter adjustment algorithm.
2. The intelligent light supplementing method according to claim 1, wherein before controlling the light supplement lamp system to adjust the irradiation angle of the light supplement lamp, the method further comprises the following step:
obtaining the original illumination brightness of the light supplement lamp, reducing it by a preset value to obtain a first brightness, and gradually increasing the brightness from the first brightness during the deflection of the light supplement lamp until the original illumination brightness is restored.
3. The intelligent supplementary lighting method according to claim 1, further comprising the steps of:
acquiring a plurality of second images whose emotion type carries a second preset label, respectively acquiring the deflection speed and the illumination brightness of the light supplement lamp at the moment the data acquisition terminal shot each of the second images, and selecting the highest deflection speed and the highest illumination brightness among them as the initial speed and the initial brightness of the light supplement lamp.
4. The intelligent supplementary lighting method according to claim 1, further comprising the steps of:
acquiring the ambient brightness in the acquisition area of the data acquisition terminal in real time, judging whether the ambient brightness is greater than or equal to a preset threshold value, and turning off the light supplement lamp if it is.
5. The intelligent supplementary lighting method according to claim 1, wherein the data acquisition terminal is a monocular vision camera, a binocular stereo vision camera or an omnidirectional vision camera, and the positioning algorithm is a visual positioning algorithm.
6. The intelligent supplementary lighting method according to claim 1, wherein the training process of the emotion recognition model comprises:
acquiring sample images, and extracting a feature vector map, such as a histogram of oriented gradients (HOG) feature vector map, from each sample image in the training set and the verification set according to a predetermined feature extraction algorithm;
assigning a unique emotion label to each sample image;
dividing the sample images into a training set and a verification set according to a preset proportion, wherein the number of the sample images in the training set is greater than that of the sample images in the verification set;
inputting the sample images in the training set into the emotion recognition model for training, verifying the emotion recognition model with the verification set at every preset period, and checking the accuracy of the emotion recognition model by using each feature vector map in the verification set and its corresponding emotion label; and
when the accuracy is greater than a preset threshold value, ending the training to obtain the emotion recognition model.
7. The intelligent light supplementing method according to any one of claims 1-6, wherein the parameter adjustment algorithm comprises: reducing the deflection speed of the light supplement lamp by a preset value; and/or reducing the illumination brightness of the light supplement lamp by a preset value.
8. An intelligent light supplementing device, characterized by comprising:
the positioning module is used for receiving a first image containing a target to be identified and sent by a data acquisition terminal, and determining the position information of the target to be identified in the first image by utilizing a predetermined positioning algorithm;
the light supplement module is used for generating a position adjustment instruction according to the position information, sending the position adjustment instruction to a preset light supplement lamp system, controlling the light supplement lamp system to adjust the irradiation angle of the light supplement lamp in real time, and recording the current deflection speed and the irradiation brightness of the light supplement lamp in a database;
the recognition module is used for acquiring a second image which is shot by the data acquisition terminal in the light supplement lamp adjustment process and contains the target to be recognized, inputting the second image into a predetermined emotion recognition model, and outputting the emotion type of the target to be recognized in the second image; and
the adjusting module is used for adjusting the deflection speed and/or the illumination brightness of the light supplement lamp according to a preset parameter adjustment algorithm when the emotion type is an emotion type carrying the first preset label.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the intelligent light supplementing method of any one of claims 1 to 7.
10. A computer-readable storage medium comprising a storage data area and a storage program area, the storage data area storing data created according to the use of a blockchain node and the storage program area storing a computer program; wherein the computer program, when executed by a processor, implements the steps of the intelligent light supplementing method according to any one of claims 1-7.
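As a non-limiting illustration of the training process recited in claim 6 above, the following Python sketch extracts histogram-of-oriented-gradients feature vectors from labelled sample images, divides them into a training set and a smaller verification set, and uses a linear SVM as a stand-in for the emotion recognition model. The library choices (OpenCV, scikit-learn), the single training pass in place of periodic verification, and all parameter values are assumptions of the sketch.

import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

hog = cv2.HOGDescriptor()  # default 64x128 detection window

def extract_hog(image_bgr):
    # Feature extraction step: histogram of oriented gradients of one sample image.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 128))  # (width, height) matching the HOG window
    return hog.compute(resized).flatten()

def train_emotion_model(sample_images, emotion_labels,
                        train_ratio=0.8, accuracy_threshold=0.9):
    # Extract a feature vector from every labelled sample image.
    features = np.array([extract_hog(img) for img in sample_images])
    # Divide the samples into a training set and a (smaller) verification set.
    x_train, x_val, y_train, y_val = train_test_split(
        features, emotion_labels, train_size=train_ratio, stratify=emotion_labels)
    # Train the stand-in classifier and check its accuracy on the verification set.
    model = LinearSVC()
    model.fit(x_train, y_train)
    accuracy = accuracy_score(y_val, model.predict(x_val))
    # Keep the model only once the accuracy exceeds the preset threshold.
    return model if accuracy > accuracy_threshold else None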
CN202010721961.7A 2020-07-24 2020-07-24 Intelligent light supplementing method and device, electronic equipment and storage medium Active CN111835984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010721961.7A CN111835984B (en) 2020-07-24 2020-07-24 Intelligent light supplementing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010721961.7A CN111835984B (en) 2020-07-24 2020-07-24 Intelligent light supplementing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111835984A true CN111835984A (en) 2020-10-27
CN111835984B CN111835984B (en) 2023-02-07

Family

ID=72925452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010721961.7A Active CN111835984B (en) 2020-07-24 2020-07-24 Intelligent light supplementing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111835984B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1012005A (en) * 1996-06-19 1998-01-16 Matsushita Electric Works Ltd Automatic tracking lighting system
CN103019369A (en) * 2011-09-23 2013-04-03 富泰华工业(深圳)有限公司 Electronic device and method for playing documents based on facial expressions
CN104853481A (en) * 2015-04-01 2015-08-19 浙江农林大学 LED mood presenting and adjusting device and method
CN205945991U (en) * 2015-12-20 2017-02-08 天津博普科技有限公司 A video acquisition system for recording teaching video
CN106469301A (en) * 2016-08-31 2017-03-01 北京天诚盛业科技有限公司 The adjustable face identification method of self adaptation and device
CN109757016A (en) * 2017-11-07 2019-05-14 德阳艺空装饰设计有限公司 A kind of indoor intelligent light compensating apparatus
CN110287766A (en) * 2019-05-06 2019-09-27 平安科技(深圳)有限公司 One kind being based on recognition of face adaptive regulation method, system and readable storage medium storing program for executing
CN110274178A (en) * 2019-06-06 2019-09-24 聊城大学 A kind of intelligent illuminating system of tiny lattice classroom

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112590428A (en) * 2020-12-15 2021-04-02 安徽文香信息技术有限公司 Intelligent blackboard light supplementing equipment based on cursor positioning and control system thereof
CN112954152A (en) * 2020-12-30 2021-06-11 神思电子技术股份有限公司 System and method for eliminating light reflection of laser camera
CN113965671A (en) * 2021-02-04 2022-01-21 福建汇川物联网技术科技股份有限公司 Light supplementing method and device for distance measurement, electronic equipment and storage medium
CN115278092A (en) * 2021-04-29 2022-11-01 北京小米移动软件有限公司 Image acquisition method, image acquisition device and storage medium
CN113158975A (en) * 2021-05-13 2021-07-23 青岛海尔工业智能研究院有限公司 Information writing method and device of intelligent equipment, equipment and storage medium
CN113158975B (en) * 2021-05-13 2023-09-12 卡奥斯工业智能研究院(青岛)有限公司 Information writing method, device, equipment and storage medium of intelligent equipment
CN113176270A (en) * 2021-06-29 2021-07-27 中移(上海)信息通信科技有限公司 Dimming method, device and equipment
CN113411513A (en) * 2021-08-20 2021-09-17 中国传媒大学 Intelligent light adjusting method and device based on display terminal and storage medium
WO2023160219A1 (en) * 2022-02-28 2023-08-31 荣耀终端有限公司 Light supplementing model training method, image processing method, and related device thereof

Also Published As

Publication number Publication date
CN111835984B (en) 2023-02-07

Similar Documents

Publication Publication Date Title
CN111835984B (en) Intelligent light supplementing method and device, electronic equipment and storage medium
CN108197547B (en) Face pose estimation method, device, terminal and storage medium
CN104834379A (en) Repair guide system based on AR (augmented reality) technology
CN112052850A (en) License plate recognition method and device, electronic equipment and storage medium
CN111932564A (en) Picture identification method and device, electronic equipment and computer readable storage medium
CN111738212B (en) Traffic signal lamp identification method, device, equipment and medium based on artificial intelligence
CN111274937B (en) Tumble detection method, tumble detection device, electronic equipment and computer-readable storage medium
CN104951117B (en) Image processing system and related method for generating corresponding information by utilizing image identification
CN111476225B (en) In-vehicle human face identification method, device, equipment and medium based on artificial intelligence
CN114782901B (en) Sand table projection method, device, equipment and medium based on visual change analysis
CN113033543A (en) Curved text recognition method, device, equipment and medium
CN112380979A (en) Living body detection method, living body detection device, living body detection equipment and computer readable storage medium
CN112528909A (en) Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN111985449A (en) Rescue scene image identification method, device, equipment and computer medium
CN112528903B (en) Face image acquisition method and device, electronic equipment and medium
CN116797864B (en) Auxiliary cosmetic method, device, equipment and storage medium based on intelligent mirror
CN110210401B (en) Intelligent target detection method under weak light
CN112329666A (en) Face recognition method and device, electronic equipment and storage medium
CN112037235A (en) Injury picture automatic auditing method and device, electronic equipment and storage medium
CN116309873A (en) Acquisition system, method, computing device and storage medium for line-of-sight data samples
CN113792801B (en) Method, device, equipment and storage medium for detecting face dazzling degree
CN112434601B (en) Vehicle illegal detection method, device, equipment and medium based on driving video
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium
CN114220149A (en) Method, device, equipment and storage medium for acquiring true value of head posture
CN113255456A (en) Non-active living body detection method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant