CN114750686A - Method and device for controlling atmosphere lamp - Google Patents
Method and device for controlling atmosphere lamp
- Publication number
- CN114750686A (application number CN202210310252.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- emotion
- atmosphere lamp
- driver
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q3/00—Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors
- B60Q3/80—Circuits; Control arrangements
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
Abstract
The embodiments of this application provide a method and a device for controlling an atmosphere lamp. The method includes: collecting facial information and voice information of a driver; inputting the facial information and the voice information into an emotion recognizer to generate corresponding emotion information, where the emotion information includes calmness and fluctuation; inputting the emotion information into an atmosphere lamp parameter generator to generate atmosphere lamp parameters based on the emotion information, where the parameters include the color and brightness of the atmosphere lamp; and controlling the atmosphere lamp according to the atmosphere lamp parameters. With the embodiments of this application, the in-vehicle atmosphere lamp can be adjusted according to the driver's emotion to relieve and stabilize it, thereby meeting the driver's varying needs throughout the drive, greatly improving the driving experience, and enhancing driving safety and the premium feel of the whole vehicle.
Description
Technical Field
The present application relates to the technical field of light control, and in particular to a method and a device for controlling an atmosphere lamp.
Background
With the development of automobile technology and the growth of the user base, automobiles play an increasingly important role in daily life, and people spend more and more time in them. Keeping the driver calm while driving improves safety, whereas large emotional fluctuations while driving put safety at risk. The automobile therefore needs to automatically perceive changes in the driver's mood and intelligently adjust the in-vehicle atmosphere lamp, providing different driving atmospheres that relieve and stabilize the driver's emotions.
Disclosure of Invention
The embodiments of this application provide a method and a device for controlling an atmosphere lamp, which can adjust the in-vehicle atmosphere lamp according to the driver's emotion and relieve and stabilize it, thereby meeting the driver's varying needs throughout the drive, greatly improving the driving experience, and enhancing driving safety and the premium feel of the whole vehicle.
In a first aspect, an embodiment of the present application provides a method for controlling an atmosphere lamp, including:
collecting face information and voice information of a driver;
inputting the facial information and the voice information into an emotion recognizer, and generating emotion information corresponding to the facial information and the voice information, wherein the emotion information comprises calmness and fluctuation;
inputting the emotion information into an atmosphere lamp parameter generator to generate atmosphere lamp parameters based on the emotion information, wherein the atmosphere lamp parameters comprise the color and the brightness of an atmosphere lamp;
and controlling the atmosphere lamp according to the atmosphere lamp parameters.
In one possible implementation, before the inputting of the facial information and the voice information into an emotion recognizer, the method further comprises:
training an emotion recognizer according to a training sample, wherein the training sample comprises a real face picture, real voice and emotion corresponding to the real face picture and the real voice, and the emotion recognizer is used for analyzing input picture information and voice information and judging the emotion corresponding to the input picture information and the voice information.
In a possible implementation manner, after the generating of emotion information corresponding to the facial information and the voice information and before the inputting of the emotion information into an ambience lamp parameter generator, the method further includes:
judging whether the emotion information of the driver detected in the current detection period is the same as the emotion information of the driver detected in the last detection period;
if not, inputting the emotion information into an atmosphere lamp parameter generator, and generating atmosphere lamp parameters based on the emotion information;
and if they are the same, the step of inputting the emotion information into an atmosphere lamp parameter generator and generating atmosphere lamp parameters based on the emotion information is not executed.
In one possible implementation, after the controlling the atmosphere lamp according to the atmosphere lamp parameter, the method further includes:
keeping the atmosphere lamp parameters unchanged for a preset time, and detecting the emotion information of the driver after the preset time;
if the emotion of the driver is calm, storing the facial information and the voice information and the emotion corresponding to the facial information and the voice information as a positive training sample;
if the emotion of the driver shows fluctuation, storing the facial information and the voice information and the emotion corresponding to the facial information and the voice information as a negative training sample.
In one possible implementation, the method further includes:
retraining the emotion recognizer according to the positive training samples and the negative training samples.
In one possible implementation, the face information includes at least one of:
mouth information, eye information, or eyebrow information; the voice information includes at least one of the following information: volume, pitch, or dwell interval.
In a second aspect, an embodiment of the present application provides an apparatus for controlling an atmosphere lamp, including:
the acquisition unit is used for acquiring the facial information and the voice information of the driver;
the emotion recognition unit is used for inputting the face information and the voice information into an emotion recognizer and generating emotion information corresponding to the face information and the voice information, wherein the emotion information comprises calmness and fluctuation;
the atmosphere lamp parameter setting unit is used for inputting the emotion information into an atmosphere lamp parameter generator and generating atmosphere lamp parameters based on the emotion information, and the atmosphere lamp parameters comprise the color and the brightness of an atmosphere lamp;
And the control unit is used for controlling the atmosphere lamp according to the atmosphere lamp parameters.
In one possible implementation, the face information includes at least one of:
mouth information, eye information, or eyebrow information; the voice information includes at least one of the following information: volume, pitch, or dwell interval.
In a possible implementation manner, the ambience lamp parameter setting unit is further configured to, after generating the emotion information corresponding to the facial information and the voice information and before inputting the emotion information into the ambience lamp parameter generator, determine whether the emotion information of the driver detected in the current detection period is the same as the emotion information of the driver detected in the previous detection period;
if not, inputting the emotion information into an atmosphere lamp parameter generator, and generating atmosphere lamp parameters based on the emotion information;
and if they are the same, the step of inputting the emotion information into an atmosphere lamp parameter generator and generating atmosphere lamp parameters based on the emotion information is not executed.
In a possible implementation, the apparatus further comprises a sample generation unit and a training unit.
The sample generation unit is used for keeping the atmosphere lamp parameters unchanged within preset time and detecting the emotion information of the driver after the preset time;
If the emotion of the driver is calm, storing the facial information and the voice information and the emotion corresponding to the facial information and the voice information as a positive training sample;
if the emotion of the driver shows fluctuation, storing the facial information and the voice information and the emotion corresponding to the facial information and the voice information as negative training samples;
the training unit is used for training the emotion recognizer according to a training sample before inputting the facial information and the voice information into the emotion recognizer, the training sample comprises a real face picture and real voice and emotion corresponding to the real face picture and the real voice, and the emotion recognizer is used for analyzing the input picture information and the input voice information and judging the emotion corresponding to the input picture information and the input voice information.
In a possible implementation manner, the training unit is further configured to retrain the emotion recognizer according to the positive training sample and the negative training sample.
In a third aspect, an embodiment of the present application provides an apparatus for controlling an atmosphere lamp, including:
the system comprises a processor, a memory and a bus, wherein the processor and the memory are connected through the bus, the memory is used for storing a group of program codes, and the processor is used for calling the program codes stored in the memory and executing the method in the first aspect or any one of the possible implementation manners of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium for storing a computer program including instructions for performing the method of the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes: computer program code for causing a computer to perform the method of the first aspect or any of the possible implementations of the first aspect when the computer program code runs on a computer.
By implementing the embodiments of this application, the in-vehicle atmosphere lamp can be adjusted according to the driver's emotion to relieve and stabilize it, thereby meeting the driver's varying needs throughout the drive, greatly improving the driving experience, and helping to improve driving safety and the premium feel of the whole vehicle.
Drawings
The drawings used in the embodiments of the present application are described below.
FIG. 1 is a schematic flow chart of a method for controlling an atmosphere lamp according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of another method for controlling an atmosphere lamp provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an apparatus for controlling an atmosphere lamp according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another apparatus for controlling an atmosphere lamp according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the drawings in the embodiments of the present application.
The terms "including" and "having," and any variations thereof in the description and claims of this application and the above-described drawings, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements recited, but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to better understand the technical solution of the embodiments of the present application, a system for controlling an atmosphere lamp, which may be related to the embodiments of the present application, is described first. The system can comprise a vehicle-mounted terminal and an atmosphere lamp.
The vehicle-mounted terminal may also be referred to as a vehicle scheduling and monitoring terminal (TCU for short); as the front-end device of a vehicle monitoring and management system, it provides users with functions such as monitoring, scheduling management, media information control, and system management. In the embodiments of this application, the vehicle-mounted terminal can collect the driver's facial information and voice information and input them into the emotion recognizer to generate the corresponding emotion information; input the emotion information into the atmosphere lamp parameter generator to generate atmosphere lamp parameters based on it; and control the atmosphere lamp according to those parameters.
The atmosphere lamp, also called an LED (Light-Emitting Diode) atmosphere lamp, provides environmental decoration and atmosphere enhancement for users. In the embodiments of this application, the atmosphere lamp is controlled by the vehicle-mounted terminal to change its light display state, such as its color, brightness, or flicker frequency. There may be one or more atmosphere lamps; when there are several, they may also be referred to as an atmosphere lamp group. When a single printed circuit board (PCB) of the atmosphere lamp group is too small to hold enough LEDs, the LEDs can be split across several PCBs connected in series through power, ground, data, and clock lines, so that the whole group can still be controlled, as sketched below.
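By way of illustration only, the sketch below assumes APA102-style addressable LEDs, which use exactly this shared power/ground plus daisy-chained data/clock wiring; the patent does not name a specific LED driver IC, so the frame format here is an assumption, not the patent's specification.

```python
def apa102_frame(colors, brightness=10):
    """Build the byte stream for a chain of APA102-style LEDs spread across
    several PCBs sharing power/ground and daisy-chained data + clock lines.
    colors: list of (r, g, b) tuples; brightness: 0-31 global level."""
    data = bytearray(4)  # start frame: 32 zero bits
    for r, g, b in colors:
        data += bytes([0xE0 | (brightness & 0x1F), b, g, r])  # per-LED frame
    data += bytes([0xFF] * ((len(colors) + 15) // 16))  # end-frame clock pulses
    return bytes(data)

# The byte stream would be shifted out over the data/clock pair to the first
# PCB; each board handles its own LED frames and passes the rest down the chain.
```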
Referring to fig. 1, a schematic flow chart of a method for controlling an atmosphere lamp according to an embodiment of the present application may include the following steps:
s101, collecting face information and voice information of a driver.
The driver's facial information can be acquired through the vehicle-mounted camera, and the voice information through the vehicle-mounted microphone. The camera and microphone then transmit the acquired facial information and voice information to the vehicle-mounted terminal, which completes the collection, for example as sketched below.
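A minimal sketch of this acquisition step, assuming OpenCV for the camera and the `sounddevice` package for the microphone; the camera index and clip length are illustrative assumptions, since the patent does not specify the capture stack.

```python
import cv2                # OpenCV for the in-vehicle camera
import sounddevice as sd  # microphone capture

def collect_driver_signals(duration_s=2.0, sample_rate=16000):
    """Grab one face frame and a short voice clip for the current detection period."""
    cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
    ok, frame = cap.read()
    cap.release()
    audio = sd.rec(int(duration_s * sample_rate), samplerate=sample_rate,
                   channels=1, dtype="float32")
    sd.wait()  # block until the recording completes
    return (frame if ok else None), audio.squeeze()
```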
S102, inputting the face information and the voice information into an emotion recognizer to generate emotion information corresponding to the face information and the voice information.
Wherein the emotion information includes calmness and fluctuation. In the embodiments of this application, fluctuating emotions are classified into happiness, anger, sadness, and fear.
Optionally, the face information includes at least one of the following information:
mouth information, eye information, or eyebrow information; the voice information includes at least one of the following information: volume, pitch, or dwell interval.
In one possible example, prior to said inputting said facial information and said speech information into an emotion recognizer, said method further comprises:
training an emotion recognizer according to a training sample, wherein the training sample comprises a real face picture, real voice and emotion corresponding to the real face picture and the real voice, and the emotion recognizer is used for analyzing input picture information and voice information and judging the emotion corresponding to the input picture information and the voice information.
The emotion recognizer may be a neural network, a deep neural network, or the like; the embodiments of this application place no limitation on this.
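For concreteness, a toy recognizer along these lines might fuse a face feature vector and a voice feature vector into four emotion logits. The sketch below uses PyTorch; the architecture, feature dimensions, and framework choice are assumptions for illustration, not the patent's specification.

```python
import torch
import torch.nn as nn

class EmotionRecognizer(nn.Module):
    """Toy fusion network: face and voice feature vectors -> logits over
    (happiness, anger, sadness, fear). Dimensions are illustrative only."""

    def __init__(self, face_dim=128, voice_dim=64, hidden=64, n_emotions=4):
        super().__init__()
        self.face_net = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.voice_net = nn.Sequential(nn.Linear(voice_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_emotions)

    def forward(self, face_feat, voice_feat):
        fused = torch.cat([self.face_net(face_feat),
                           self.voice_net(voice_feat)], dim=-1)
        return self.head(fused)  # softmax of these logits gives the 4-D emotion vector
```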
For example, the driver's facial information and voice information are input into the emotion recognizer. From the mouth, eye, and eyebrow information in the facial information, the recognizer produces a four-dimensional vector (0, 0.1, 0.8, 0.1) over the emotions happiness, anger, sadness, and fear; that is, judging from the facial information alone, the driver's emotion is 10% likely to be anger, 80% likely to be sadness, and 10% likely to be fear. If no voice information from the driver is recognized, this vector is converted into (0, 0, 1, 0), i.e. the driver's emotion is judged to be sadness. If voice information is recognized, emotion analysis on the volume, pitch, and pause intervals yields another four-dimensional vector, say (0.1, 0.1, 0.6, 0.2): judging from the voice alone, the driver's emotion is 10% likely to be happiness, 10% anger, 60% sadness, and 20% fear. The two four-dimensional vectors are merged with a merging coefficient a, a number greater than 0 and smaller than 1. For example, with a = 0.5 the merged vector is ((0 + 0.1) × 0.5, (0.1 + 0.1) × 0.5, (0.8 + 0.6) × 0.5, (0.1 + 0.2) × 0.5) = (0.05, 0.1, 0.7, 0.15), which is converted according to the most probable emotion into (0, 0, 1, 0); the driver's emotion is therefore judged, from the facial and voice information together, to be sadness.
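A small sketch of the merging rule in this example, where the merged vector is a · (face + voice) and is then collapsed to a one-hot decision:

```python
def merge_emotions(face_probs, voice_probs, a=0.5):
    """Merge the two four-dimensional emotion vectors with coefficient a
    (0 < a < 1), then collapse the result to a one-hot decision."""
    merged = [a * (f + v) for f, v in zip(face_probs, voice_probs)]
    winner = merged.index(max(merged))
    return [1 if i == winner else 0 for i in range(len(merged))]

# Reproduces the worked example: (0, 0.1, 0.8, 0.1) and (0.1, 0.1, 0.6, 0.2)
# merge to (0.05, 0.1, 0.7, 0.15) and collapse to (0, 0, 1, 0), i.e. sadness.
print(merge_emotions([0, 0.1, 0.8, 0.1], [0.1, 0.1, 0.6, 0.2]))
```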
As can be seen, in this example, the emotion recognizer is trained by inputting a large number of real face pictures and real voices in the training sample and the corresponding emotions of the real face pictures and the real voices, so that the emotion recognizer can recognize the emotion of a person according to the facial information and voice information of the person, and the accuracy of the emotion recognizer can be improved.
In one possible implementation, after the collecting of the driver's facial information and voice information, the method further comprises: denoising the acquired voice signal using methods such as a least mean square (LMS) adaptive filter, an LMS adaptive notch filter, basic spectral subtraction, or Wiener filtering.
By reducing the noise of the collected speech signal, the accuracy of the emotion recognizer can be further improved.
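As an illustration of the first of those methods, a basic LMS adaptive noise canceller can be written in a few lines of NumPy. The use of a second, noise-correlated reference signal (e.g. a second cabin microphone) is an assumption of this sketch; the patent does not detail the filter topology.

```python
import numpy as np

def lms_denoise(noisy, reference, mu=0.01, taps=32):
    """Basic LMS adaptive noise canceller: 'reference' is a noise-correlated
    signal; the error output approximates the clean speech."""
    w = np.zeros(taps)
    out = np.zeros_like(noisy, dtype=float)
    for n in range(taps, len(noisy)):
        x = reference[n - taps:n][::-1]  # most recent reference samples, newest first
        e = noisy[n] - w @ x             # error = denoised speech estimate
        w += 2 * mu * e * x              # LMS weight update
        out[n] = e
    return out
```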
By combining the driver's facial information and voice information and recognizing the driver's emotion with a neural network, a machine-learning approach, the accuracy of emotion recognition can be improved, the atmosphere lamp can be controlled correctly, and driving safety and the premium feel of the whole vehicle can be improved.
S103, inputting the emotion information into an atmosphere lamp parameter generator, and generating atmosphere lamp parameters based on the emotion information.
Wherein the atmosphere lamp parameters include the color and brightness of the atmosphere lamp. Different colors can affect the human endocrine system through vision, increasing or decreasing hormone levels and thereby changing a person's mood. For example, yellow can invigorate, green can relieve psychological stress, white brightens the mood, light blue gives a cool feeling, and pink can reduce the secretion of adrenal hormones; an angry person who looks at pink tends to calm down quickly.
The atmosphere lamp parameter generator can set the atmosphere lamp parameters based on the input emotion information. For example, when the driver is calm, the color of the atmosphere lamp is set to white; when the driver is excessively happy, to light blue; when the driver is sad, to yellow; when the driver is angry, to pink; and when the driver is fearful, to green. Fluctuating emotions are not limited to the four categories above; other emotions, such as mania, can also be given corresponding atmosphere lamp colors, and this application places no limitation on the mapping. The atmosphere lamp parameter generator can set the corresponding brightness according to the color, according to the intensity of the driver's emotion, or according to the probability of the emotion in the merged vector; the embodiments of this application place no limitation on this either. A minimal sketch of such a generator follows.
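The lookup below implements the example color assignments above; the brightness values are placeholders, since the patent leaves the exact levels open.

```python
# Hypothetical lookup implementing the example color assignments above.
LAMP_PARAMS = {
    "calm":      {"color": "white",      "brightness": 0.3},
    "happiness": {"color": "light blue", "brightness": 0.5},
    "sadness":   {"color": "yellow",     "brightness": 0.6},
    "anger":     {"color": "pink",       "brightness": 0.5},
    "fear":      {"color": "green",      "brightness": 0.6},
}

def generate_lamp_params(emotion: str) -> dict:
    """Return the lamp parameters for an emotion, defaulting to the calm state."""
    return LAMP_PARAMS.get(emotion, LAMP_PARAMS["calm"])
```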
In one possible implementation, the atmosphere lamp parameters further include a flicker frequency, which can be controlled according to the intensity of the driver's emotion.
Illustratively, when the emotion recognizer recognizes that the driver is calm, the flicker frequency of the atmosphere lamp is controlled to be frequency A; when the driver is excessively happy, frequency B; when the driver is sad, frequency C; and when the driver is angry, frequency D.
As can be seen in this example, controlling the flicker frequency of the atmosphere lamp according to the driver's emotion can further relieve and stabilize the driver's emotion and improve driving safety and the premium feel of the whole vehicle.
And S104, controlling the atmosphere lamp according to the atmosphere lamp parameters.
Controlling the color and brightness of the atmosphere lamp according to the driver's emotion makes it possible to provide different driving atmospheres in a targeted way, relieving and stabilizing the driver's emotion and improving driving safety and the premium feel of the whole vehicle.
Optionally, after generating emotion information corresponding to the facial information and the voice information and before inputting the emotion information into the atmosphere lamp parameter generator, the method further includes:
judging whether the emotion information of the driver detected in the current detection period is the same as the emotion information of the driver detected in the last detection period;
if not, inputting the emotion information into an atmosphere lamp parameter generator, and generating atmosphere lamp parameters based on the emotion information;
and if they are the same, the step of inputting the emotion information into an atmosphere lamp parameter generator and generating atmosphere lamp parameters based on the emotion information is not executed.
For example, if after a detection period the driver's emotion has not changed, the atmosphere lamp parameters need not be adjusted; if it has changed, the parameters are adjusted according to the new emotion. For instance, if the driver's original emotion was calm, the lamp white, and the brightness low, no adjustment is needed. Or suppose the driver's original emotion was fear, with the lamp yellow at medium brightness: if after a detection period the driver still shows fear, the lamp stays yellow at medium brightness to keep relieving the fear; if the driver has become calm, the parameters are adjusted to the state corresponding to calmness, i.e. white at low brightness, as sketched below.
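A sketch of this detection-period check, reusing the hypothetical `generate_lamp_params` lookup above; the `lamp.apply` interface is likewise an assumption.

```python
def on_detection_period(current_emotion, last_emotion, lamp):
    """Regenerate and apply lamp parameters only when the driver's emotion
    changed since the last detection period; otherwise keep the lighting."""
    if current_emotion != last_emotion:
        lamp.apply(generate_lamp_params(current_emotion))
    return current_emotion  # becomes last_emotion for the next period
```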
By implementing this method, the atmosphere lamp parameters are adjusted more sensibly, the control of the atmosphere lamp becomes more reasonable, and control efficiency is improved.
Referring to fig. 2, a schematic flow chart of another method for controlling an atmosphere lamp according to an embodiment of the present application may include the following steps:
Wherein steps S201-S204 are the same as steps S101-S104. After S204, the method may further include the following steps:
s205, keeping the atmosphere lamp parameters unchanged within preset time, and detecting the emotion information of the driver after the preset time.
S206, if the emotion of the driver is calm, the face information and the voice information and the emotion corresponding to the face information and the voice information are saved as a positive training sample.
S207, if the emotion of the driver is fluctuated, the face information and the voice information and the emotion corresponding to the face information and the voice information are stored as negative training samples.
If the driver's emotion is calm, then regardless of whether the previous emotion was calm or fluctuating, the atmosphere lamp has evidently played its role, which indicates that the emotion recognizer identified the driver's emotion correctly; this set of facial information and voice information, together with the corresponding emotion, can therefore be used as a positive training sample.
If the driver's emotion shows fluctuation, then regardless of whether the previous emotion was calm or fluctuating, the atmosphere lamp has either had no effect or had the opposite effect, which suggests that the emotion recognizer did not identify the driver's emotion correctly; this set of facial information and voice information, together with the corresponding emotion, can be used as a negative training sample.
In one possible implementation, the method further comprises the steps of: retraining the emotion recognizer according to the positive training samples and the negative training samples.
Illustratively, suppose a positive training sample corresponds to the emotion calmness. From this sample it can be learned that such inputs indicate calmness, so if a subsequently input sample A matches the positive sample's data, or its similarity to that data exceeds a preset threshold, the recognizer outputs calmness for sample A. Conversely, suppose a negative training sample was labeled calmness; from this sample it can be learned that such inputs may not actually correspond to calmness but to fluctuation or another emotion, so if a subsequently input sample B matches the negative sample's data, or its similarity exceeds the preset threshold, the recognizer outputs fluctuation or another emotion for sample B rather than calmness.
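A sketch of the sample-collection rule of S205-S207 with a retraining hook; the storage format and the `recognizer.fit` interface are assumptions, as the patent does not fix either.

```python
def label_and_store(face, voice, predicted, emotion_after_wait,
                    positives, negatives):
    """After the preset hold time, file the observation as a positive or a
    negative training sample depending on whether the driver calmed down."""
    sample = (face, voice, predicted)
    if emotion_after_wait == "calm":
        positives.append(sample)   # recognizer and lamp worked (S206)
    else:
        negatives.append(sample)   # recognizer likely misjudged (S207)

def retrain(recognizer, positives, negatives):
    """Hypothetical retraining hook; when it runs (e.g. at ignition-off)
    is an assumption, not specified by the patent."""
    recognizer.fit(positives + negatives)
```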
It can be seen that, in this example, by saving the positive training sample and the negative training sample, and retraining the emotion recognizer according to the positive training sample and the negative training sample, the emotion recognition capability of the emotion recognizer becomes more and more accurate.
Optionally, after the emotion recognizer is retrained, the binding relationship between the emotion recognizer and the driver is configured.
Optionally, before the inputting the facial information and the voice information into an emotion recognizer, the method further includes recognizing identity information of a driver, determining whether there is an emotion recognizer bound to the driver, and if so, recognizing the emotion of the driver by using the emotion recognizer bound to the driver; if not, the driver's emotion is identified using a new emotion recognizer.
There may be more than one driver of a vehicle, for example a bus or a coach. By configuring the binding relationship between emotion recognizers and drivers, each emotion recognizer is dedicated to a single driver, so that it can recognize that driver's emotions more efficiently and accurately, for example as sketched below.
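A minimal sketch of that binding, keyed by driver identity; how the identity is established (face recognition, key fob, etc.) is outside this snippet and left open, as in the patent.

```python
# Binding between drivers and their personal emotion recognizers.
recognizers_by_driver = {}

def recognizer_for(driver_id, new_recognizer):
    """Return the recognizer bound to this driver, binding a fresh one on
    first encounter (e.g. in multi-driver vehicles such as buses)."""
    if driver_id not in recognizers_by_driver:
        recognizers_by_driver[driver_id] = new_recognizer()
    return recognizers_by_driver[driver_id]
```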
Referring to fig. 3, a schematic composition diagram of an apparatus for controlling an atmosphere lamp according to an embodiment of the present disclosure includes:
An acquisition unit 100 for acquiring facial information and voice information of a driver;
an emotion recognition unit 200, configured to input the facial information and the voice information into an emotion recognizer, and generate emotion information corresponding to the facial information and the voice information, where the emotion information includes calmness and fluctuation;
the atmosphere lamp parameter setting unit 300 is configured to input the mood information into an atmosphere lamp parameter generator, and generate atmosphere lamp parameters based on the mood information, where the atmosphere lamp parameters include color and brightness of an atmosphere lamp;
a control unit 400, configured to control the ambience lamp according to the ambience lamp parameter.
Optionally, the face information includes at least one of the following information:
mouth information, eye information, or eyebrow information; the voice information includes at least one of the following information: volume, pitch, or dwell interval.
Optionally, the atmosphere lamp parameter setting unit 300 is further configured to, after the emotion information corresponding to the face information and the voice information is generated and before the emotion information is input into the atmosphere lamp parameter generator, determine whether the emotion information of the driver detected in the current detection period is the same as the emotion information of the driver detected in the previous detection period;
If not, inputting the emotion information into an atmosphere lamp parameter generator, and generating atmosphere lamp parameters based on the emotion information;
and if they are the same, the step of inputting the emotion information into an atmosphere lamp parameter generator and generating atmosphere lamp parameters based on the emotion information is not executed.
Optionally, the apparatus further comprises a sample generation unit 500 and a training unit 600.
The sample generation unit 500 is configured to keep the atmosphere lamp parameter unchanged within a preset time, and detect emotion information of the driver after the preset time;
if the emotion of the driver is calm, storing the facial information and the voice information and the emotion corresponding to the facial information and the voice information as a positive training sample;
if the emotion of the driver shows fluctuation, storing the facial information and the voice information and the emotion corresponding to the facial information and the voice information as negative training samples;
the training unit 600 is configured to train an emotion recognizer according to a training sample before the facial information and the voice information are input to the emotion recognizer, where the training sample includes a real face picture and a real voice and an emotion corresponding to the real face picture and the real voice, and the emotion recognizer is configured to analyze input picture information and voice information and determine the emotion corresponding to the input picture information and voice information.
Optionally, the training unit 600 is further configured to retrain the emotion recognizer according to the positive training sample and the negative training sample.
Referring to fig. 4, a schematic composition diagram of another apparatus for controlling an atmosphere lamp according to an embodiment of the present disclosure includes:
a processor 110, a memory 120, and a transceiver 130. The processor 110, the memory 120, and the transceiver 130 are connected through a bus 140; the memory 120 is used to store instructions, and the processor 110 is used to execute the instructions stored in the memory 120 to implement the steps of the methods corresponding to FIG. 1 and FIG. 2 described above.
The processor 110 is configured to execute the instructions stored in the memory 120 to control the transceiver 130 to receive and transmit signals, thereby implementing the steps of the above-mentioned method. The memory 120 may be integrated in the processor 110 or may be provided separately from the processor 110.
As an implementation, the function of the transceiver 130 may be realized by a transceiver circuit or a dedicated chip for transceiving. The processor 110 may be considered to be implemented by a dedicated processing chip, processing circuit, processor, or a general-purpose chip.
As another implementation manner, a manner of using a general-purpose computer to implement the apparatus provided in the embodiment of the present application may be considered. Program code that will implement the functions of the processor 110 and the transceiver 130 is stored in the memory 120, and a general-purpose processor implements the functions of the processor 110 and the transceiver 130 by executing the code in the memory 120.
For the concepts, explanations, and details of other steps related to the technical solutions provided in the embodiments of the present application, please refer to the content of the method steps performed by the apparatus for controlling an atmosphere lamp in the foregoing method or other embodiments, and further details thereof are not described herein.
As another implementation of this embodiment, a computer-readable storage medium is provided, on which instructions are stored, which when executed perform the method in the above-described method embodiment.
As another implementation of the present embodiment, a computer program product is provided, which contains instructions that, when executed, perform the method in the above method embodiments.
Those skilled in the art will appreciate that only one memory and processor are shown in fig. 4 for ease of illustration. In an actual device, there may be multiple processors and memories. The memory may also be referred to as a storage medium or a storage device, and the like, which is not limited in this application.
It should be understood that, in the embodiments of the present application, the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
It will also be appreciated that the memory referred to in the embodiments of the present application may be volatile memory, nonvolatile memory, or both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or flash memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (memory module) is integrated in the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In addition to the data bus, the bus may include a power bus, a control bus, a status signal bus, and the like; for clarity of illustration, however, the various buses are all labeled as the bus in the figures.
It should also be understood that reference herein to first, second, third, fourth, and various numerical designations is made only for ease of description and should not be used to limit the scope of the present application.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor. To avoid repetition, it is not described in detail here.
In the embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), semiconductor media (e.g., solid-state disk), and the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A method of controlling an atmosphere lamp, comprising:
collecting face information and voice information of a driver;
inputting the facial information and the voice information into an emotion recognizer, and generating emotion information corresponding to the facial information and the voice information, wherein the emotion information comprises calmness and fluctuation;
inputting the emotion information into an atmosphere lamp parameter generator to generate atmosphere lamp parameters based on the emotion information, wherein the atmosphere lamp parameters comprise the color and the brightness of an atmosphere lamp;
and controlling the atmosphere lamp according to the atmosphere lamp parameters.
2. The method of claim 1, wherein before the inputting of the facial information and the voice information into an emotion recognizer, the method further comprises:
training an emotion recognizer according to a training sample, wherein the training sample comprises a real face picture, real voice and emotion corresponding to the real face picture and the real voice, and the emotion recognizer is used for analyzing input picture information and voice information and judging the emotion corresponding to the input picture information and the voice information.
3. The method of claim 2, wherein after the generating of emotion information corresponding to the facial information and the voice information and before the inputting of the emotion information into an atmosphere lamp parameter generator, the method further comprises:
judging whether the emotion information of the driver detected in the current detection period is the same as the emotion information of the driver detected in the last detection period;
if not, inputting the emotion information into an atmosphere lamp parameter generator, and generating atmosphere lamp parameters based on the emotion information;
and if they are the same, the step of inputting the emotion information into an atmosphere lamp parameter generator and generating atmosphere lamp parameters based on the emotion information is not executed.
4. The method according to claim 2 or 3, wherein after said controlling the atmosphere lamp according to said atmosphere lamp parameters, said method further comprises:
keeping the atmosphere lamp parameters unchanged for a preset time, and detecting the emotion information of the driver after the preset time;
if the emotion of the driver is calm, storing the facial information and the voice information and the emotion corresponding to the facial information and the voice information as a positive training sample;
and if the emotion of the driver shows fluctuation, storing the facial information and the voice information and the emotion corresponding to the facial information and the voice information as a negative training sample.
5. The method of claim 4, further comprising:
retraining the emotion recognizer according to the positive training samples and the negative training samples.
6. The method of claim 1, wherein the face information comprises at least one of:
mouth information, eye information, or eyebrow information; the voice information includes at least one of the following information: volume, pitch, or dwell interval.
7. An apparatus for controlling an atmosphere lamp, comprising:
the acquisition unit is used for acquiring the facial information and the voice information of a driver;
the emotion recognition unit is used for inputting the face information and the voice information into an emotion recognizer and generating emotion information corresponding to the face information and the voice information, wherein the emotion information comprises calmness and fluctuation;
the atmosphere lamp parameter setting unit is used for inputting the emotion information into an atmosphere lamp parameter generator and generating atmosphere lamp parameters based on the emotion information, and the atmosphere lamp parameters comprise the color and the brightness of an atmosphere lamp;
And the control unit is used for controlling the atmosphere lamp according to the atmosphere lamp parameters.
8. The apparatus of claim 7, further comprising:
the training unit is used for training the emotion recognizer according to a training sample before inputting the facial information and the voice information into the emotion recognizer, the training sample comprises a real face picture, real voice and emotion corresponding to the real face picture and the real voice, and the emotion recognizer is used for analyzing the input picture information and the voice information and judging the emotion corresponding to the input picture information and the voice information.
9. An apparatus for controlling an atmosphere lamp, comprising:
a processor, a memory and a bus, the processor and the memory being connected by the bus, wherein the memory is configured to store a set of program codes and the processor is configured to call the program codes stored in the memory to execute the method according to any one of claims 1-6.
10. A computer-readable storage medium, comprising:
the computer-readable storage medium has stored therein instructions which, when run on a computer, implement the method of any one of claims 1-6.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210310252.9A | 2022-03-28 | 2022-03-28 | Method and device for controlling atmosphere lamp
Publications (1)
Publication Number | Publication Date |
---|---|
CN114750686A | 2022-07-15
Family ID: 82328195
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210310252.9A (published as CN114750686A, pending) | Method and device for controlling atmosphere lamp | 2022-03-28 | 2022-03-28
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114750686A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115285015A (en) * | 2022-08-24 | 2022-11-04 | 长城汽车股份有限公司 | Welcome method and device of luminous backdrop, vehicle and storage medium |
CN115285015B (en) * | 2022-08-24 | 2024-08-23 | 长城汽车股份有限公司 | Welcome method and device of luminous backdrop, vehicle and storage medium |
CN116528438A (en) * | 2023-04-28 | 2023-08-01 | 广州力铭光电科技有限公司 | Intelligent dimming method and device for lamp |
CN116528438B (en) * | 2023-04-28 | 2023-10-10 | 广州力铭光电科技有限公司 | Intelligent dimming method and device for lamp |
CN116552379A (en) * | 2023-06-05 | 2023-08-08 | 浙江百康光学股份有限公司 | Automobile atmosphere lamp control method, electronic equipment and readable medium |
CN117082695A (en) * | 2023-10-13 | 2023-11-17 | 深圳市汇杰芯科技有限公司 | New energy automobile wireless atmosphere lamp control method, system and storage medium |
CN117082695B (en) * | 2023-10-13 | 2023-12-26 | 深圳市汇杰芯科技有限公司 | New energy automobile wireless atmosphere lamp control method, system and storage medium |
Legal Events

Code | Title
---|---
PB01 | Publication