CN117082700A - LED lamplight interaction control system - Google Patents


Publication number
CN117082700A
Authority
CN
China
Prior art keywords: semantic, voice control, text, light, effect voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202311330750.0A
Other languages
Chinese (zh)
Inventor
林启程
邱国梁
曾剑峰
唐勇
谭琪琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yonglin Electronics Co Ltd
Original Assignee
Yonglin Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yonglin Electronics Co Ltd
Priority to CN202311330750.0A
Publication of CN117082700A
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 Controlling the light source
    • H05B47/105 Controlling the light source in response to determined parameters
    • H05B47/115 Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings
    • H05B47/12 Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings by detecting audible sound
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B45/00 Circuit arrangements for operating light-emitting diodes [LED]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The LED light interactive control system receives a light effect voice control signal input by a user and applies data processing and analysis algorithms at the back end to optimize and analyze the signal, generating LED light control instructions and thereby achieving intelligent control of the LED light effect. Specifically, the system receives the light effect voice control signal input by the user; performs speech recognition on the signal to obtain a light effect voice control text; performs semantic optimization and semantic understanding on the text to obtain its semantic understanding features; and generates an LED light control instruction based on those features. In this way, an intelligent interactive experience of controlling the LED light effect through voice input can be achieved, improving convenience of use and personalization of the light effect, and thus the user's overall experience.

Description

LED lamplight interaction control system
Technical Field
The application relates to the technical field of intelligent control, in particular to an LED lamplight interaction control system.
Background
Conventional LED light control schemes typically employ physical switches, remote controls, or simple touch panels to control the on/off state, brightness, and color of the light. This approach meets basic control requirements to some extent, but it has several drawbacks. For example, a physical switch, remote control, or touch panel requires manual operation: the user must physically touch the device or operate the remote control, so the degree of intelligence and convenience is low, limiting the user experience. In addition, the control mode of the conventional scheme is mainly one-way: the user can only issue commands through a switch or remote control and cannot interact with the lighting system in real time. At the same time, only limited control options are typically provided, such as on/off, brightness adjustment, and color selection. The user cannot realize more complex and personalized light effects, so interaction between the user and the light is limited and the user experience is reduced.
Accordingly, an optimized LED light interactive control system is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide an LED light interactive control system that receives a light effect voice control signal input by a user and applies data processing and analysis algorithms at the back end to optimize and analyze the signal, generating LED light control instructions and thereby achieving intelligent control of the LED light effect. Specifically, the system receives the light effect voice control signal input by the user; performs speech recognition on the signal to obtain a light effect voice control text; performs semantic optimization and semantic understanding on the text to obtain its semantic understanding features; and generates an LED light control instruction based on those features. In this way, an intelligent interactive experience of controlling the LED light effect through voice input can be achieved, improving convenience of use and personalization of the light effect, and thus the user's overall experience.
In a first aspect, an LED light interactive control system is provided, including:
the voice control signal receiving module is used for receiving a light effect voice control signal input by a user;
the voice recognition module is used for carrying out voice recognition on the light effect voice control signal to obtain a light effect voice control text;
the semantic optimization and understanding module is used for performing semantic optimization and semantic understanding on the light effect voice control text to obtain light effect voice control text semantic understanding features;
the light control instruction generation module is used for generating an LED light control instruction based on the light effect voice control text semantic understanding features;
the semantic optimization and understanding module comprises:
the voice signal semantic optimization and perfection unit is used for passing the light effect voice control text through an instruction semantic optimizer based on an AIGC model to obtain a semantically perfected light effect voice control text;
and the voice signal semantic understanding unit is used for performing word segmentation on the semantically perfected light effect voice control text and then passing it through a context encoder comprising a word embedding layer to obtain a light effect voice control text semantic understanding feature vector as the light effect voice control text semantic understanding features.
The beneficial effects are that: in this way, an intelligent interactive experience of controlling the LED light effect through voice input can be achieved, improving convenience of use and personalization of the light effect, and thus the user's overall experience. In addition, the instruction semantic optimizer based on the AIGC model can deeply understand and optimize the light effect voice control text, so that the generated text better matches the user's intention, improving the accuracy of voice control and the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of an LED light interactive control system according to an embodiment of the present application.
Fig. 2 is a flowchart of an LED light interactive control method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an LED light interactive control method according to an embodiment of the present application.
Fig. 4 is an application scenario diagram of an LED light interactive control system according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions according to the embodiments of the present application will be given with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Unless defined otherwise, all technical and scientific terms used in the embodiments of the application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In describing embodiments of the present application, unless otherwise indicated and limited, the term "connected" should be construed broadly: it may refer to an electrical connection, to communication between two elements, to a direct connection, or to an indirect connection via an intermediate medium. Those skilled in the art will understand the specific meaning of the term according to context.
It should be noted that the terms "first/second/third" in the embodiments of the present application merely distinguish similar objects and do not imply a specific order. Where permitted, "first/second/third" may be interchanged, so that the embodiments described herein can be practiced in sequences other than those illustrated or described.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Conventional LED light control schemes typically employ physical switches, remote controls, or simple touch panels to control the on/off state, brightness, and color of the light. Traditional LED lights are usually equipped with physical switches: the user turns the light on or off by operating the switch manually, a control mode that is simple and direct but lacks intelligent and remote control functions. Many LED fixtures are equipped with remote controls, and the user controls parameters such as on/off state, brightness, and color by pressing buttons on the remote; remote controls typically communicate with the fixture using radio-frequency or infrared signals. They provide a degree of convenience, but the user must hold the remote and point it at the fixture, and distance and angle limit the control range. Some LED fixtures are equipped with a touch panel through which the user controls the on/off state, brightness, and color via buttons or sliders; touch panels generally offer functions such as sensitivity adjustment and color selection, but the operation mode is relatively limited and cannot realize more complex light effects. Other LED fixtures are connected to a control device (e.g., a dimmer or control panel) by wire, with parameters controlled via wired signal transmission; this is common in professional lighting systems that require more advanced control functions, such as stage lighting.
Advantages of conventional LED light control schemes include simplicity, ease of use, lower cost, and high reliability. However, they also have drawbacks. The user must physically contact a switch, remote control, or touch panel, so operation is neither intelligent nor convenient enough, and remote control and automation are impossible. In addition, the control mode is usually one-way: the user can only issue commands and cannot interact with the lighting system in real time. Control options are also relatively limited, so demands for complex and personalized light effects cannot be met, restricting interaction between the user and the light as well as the user experience.
Further, the conventional LED light control scheme has several drawbacks. It requires the user to manually operate a physical switch, remote control, or touch panel, so the user must physically touch the device or carry the remote control; the degree of intelligence and convenience is low, the user must stay near the fixture or keep the remote at hand, and control flexibility is limited. The control mode is mainly one-way: the user can only send commands to the lighting system through a switch, remote control, or touch panel, real-time interaction is impossible, and the user receives no feedback about the light state or execution status, limiting the ability to confirm control. Conventional schemes generally provide only limited options such as on/off, brightness adjustment, and color selection; users cannot achieve more complex, personalized lighting effects, which limits interaction and reduces the user experience. Finally, conventional schemes lack intelligence and automation: the user must adjust light parameters manually, the light effect cannot adapt automatically to environmental changes or user needs, and the user therefore has to operate the system continuously without enjoying the convenience and comfort that intelligence and automation bring.
Therefore, in the application, an optimized LED lamplight interaction control system is provided.
In one embodiment of the present application, fig. 1 is a block diagram of an LED light interactive control system according to an embodiment of the present application. As shown in fig. 1, an LED light interactive control system 100 according to an embodiment of the present application includes: the voice control signal receiving module 110 is configured to receive a light effect voice control signal input by a user; the voice recognition module 120 is configured to perform voice recognition on the light effect voice control signal to obtain a light effect voice control text; the semantic optimization and understanding module 130 is configured to perform semantic optimization and semantic understanding on the light effect voice control text to obtain a semantic understanding feature of the light effect voice control text; and the light control instruction generating module 140 is used for generating an LED light control instruction based on the light effect voice control text semantic understanding feature.
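For illustration only, the four modules of Fig. 1 can be sketched as a simple pipeline. The function names and the rule-based recognition and understanding logic below are hypothetical stand-ins, not the patent's implementation, which would use a real ASR engine in module 120 and an AIGC-based semantic model in module 130:

```python
# Illustrative, rule-based stand-ins for the four modules of Fig. 1.
# All names and rules here are hypothetical.

def receive_voice_signal(raw_audio: bytes) -> bytes:
    """Module 110: accept the user's light effect voice control signal."""
    return raw_audio

def recognize_speech(signal: bytes) -> str:
    """Module 120: convert the voice signal into control text (stubbed)."""
    return signal.decode("utf-8")  # placeholder for a real ASR engine

def understand_semantics(text: str) -> dict:
    """Module 130: extract intent features from the control text."""
    intent = {}
    if "dim" in text or "darker" in text:
        intent["brightness"] = "decrease"
    if "red" in text:
        intent["color"] = "red"
    return intent

def generate_control_instruction(features: dict) -> dict:
    """Module 140: map semantic features to an LED control instruction."""
    return {"device": "led_strip", "action": features}

# End-to-end run; the bytes stand in for captured audio.
signal = receive_voice_signal(b"dim the lights and make them red")
instruction = generate_control_instruction(
    understand_semantics(recognize_speech(signal)))
```

Running the chain on the sample utterance yields an instruction that dims the light and sets its color to red, showing how each module's output feeds the next.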
The voice control signal receiving module 110 ensures that the voice control signal input by the user is received accurately, accounting for factors such as sound clarity and noise interference. It provides a convenient control mode: the user can control the light through voice commands without manual operation, improving flexibility and convenience.
The voice recognition module 120 ensures that the voice signal is accurately converted into text and that the user's voice control instruction is recognized. It realizes the conversion between voice and text, providing the basis for subsequent semantic understanding and control instruction generation and enabling the system to understand the user's voice input.
The semantic optimization and understanding module 130 semantically optimizes and understands the resulting voice control text, ensuring accurate understanding of the user's intent and needs. Through semantic optimization and understanding, the system can better interpret the user's voice input, identify key information and operation instructions, and improve its accuracy and degree of intelligence.
The light control instruction generation module 140 generates appropriate LED light control instructions based on the semantic understanding features, covering brightness adjustment, color conversion, dynamic effects, and so on. According to the user's voice input and its semantic understanding, specific light control instructions are generated to achieve the light effect the user expects, improving the control experience and enabling personalized light effects.
The voice control signal receiving module, voice recognition module, semantic optimization and understanding module, and light control instruction generation module each play an important role in the optimized LED light interactive control system. By accurately receiving, recognizing, and understanding voice input and generating light control instructions, the system enables light operation through voice control, improving the user's control experience and the system's degree of intelligence.
In view of the above technical problems, the technical concept of the present application is to receive the light effect voice control signal input by the user and introduce data processing and analysis algorithms at the back end to optimize and analyze the signal, thereby generating LED light control instructions and achieving intelligent control of the LED light effect. In this way, an intelligent interactive experience of controlling the LED light effect through voice input can be achieved, improving the user's convenience of use and the personalization of the light effect, and thus the user's overall experience.
Specifically, in the technical scheme of the present application, a light effect voice control signal input by the user is first received. This signal plays a vital role in finally generating the LED light control instruction. The voice control signal receiving module receives the voice signal input by the user and converts it into a processable digital signal or text, which is the basis of subsequent processing and enables the system to understand the user's voice input. After the voice recognition module converts the voice signal into text, the semantic optimization and understanding module performs semantic optimization and understanding on the text to identify the user's intention and needs; by analyzing the key information and operation instructions in the text, the system can determine the specific lighting effect the user wants. Based on the user's intent and needs, the light control instruction generation module generates appropriate LED light control instructions, which may include adjusting the brightness, color, and fade effects of the light to achieve the effect the user desires.
Therefore, receiving the light effect voice control signal input by the user is a key link in the whole system: it provides the interaction channel between the user and the system, allowing the user to control the light by voice command. Through conversion of the voice signal, recognition of user intent, and generation of control instructions, the system can accurately understand the user's needs and generate corresponding LED light control instructions to achieve the desired light effect.
The LED light control instruction is an instruction for controlling the LED light effect by processing and analyzing voice input. These instructions may include control of:
brightness adjustment: the brightness level of the LED lamp light is controlled, and dimming or brightening of the lamp light can be realized through voice instructions.
Color transformation: the color of the LED light is controlled, and the color switching or gradual change effect can be realized through voice instructions. For example, the LED lights may be designated as red, blue, green, etc.
Dynamic effects: the dynamic effect of the LED light is controlled, such as flickering, breathing, running water, etc. The switching and control of different dynamic effects can be realized through voice instructions.
Scene mode: the scene mode of the LED light is set, such as night mode, reading mode, gathering mode and the like. Different scene modes can be switched through voice instructions, and different light effects and atmospheres are achieved.
Time planning: a time schedule for the LED lights is set, such as turning the lights on or off at regular intervals. The light can be set to be automatically turned on or turned off in a specific time period through the voice command.
The LED light control instructions can be generated according to voice input and semantic understanding of a user and are transmitted to the LED light equipment through the control system, so that the light effect expected by the user is achieved. Through voice control, a user can conveniently adjust and customize the LED light, and individuation and user experience of the light are improved.
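A minimal sketch of how such an instruction covering the five categories above might be represented in software; the schema and field names are assumptions made for illustration and are not defined by the patent:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical instruction schema: brightness, color, dynamic effect,
# scene mode, and time planning, matching the five categories above.
@dataclass
class LedControlInstruction:
    brightness: Optional[int] = None      # 0-100 percent
    color: Optional[str] = None           # e.g. "red" or "#00ff00"
    dynamic_effect: Optional[str] = None  # e.g. "blink", "breathe", "flow"
    scene_mode: Optional[str] = None      # e.g. "night", "reading", "party"
    schedule: Optional[dict] = None       # e.g. {"off_at": "23:00"}

    def to_payload(self) -> dict:
        """Serialize only the fields the utterance actually set."""
        return {k: v for k, v in asdict(self).items() if v is not None}

# "Switch to night mode at 30 percent brightness."
cmd = LedControlInstruction(brightness=30, scene_mode="night")
payload = cmd.to_payload()
```

Only the fields the user's command mentions appear in the payload, so the device can apply a partial update without disturbing the other settings.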
It should be appreciated that speech is a common way of communication for humans, but that computer systems cannot directly understand and process speech signals to accomplish semantic understanding of speech. Therefore, in the technical scheme of the application, the light effect voice control signal is required to be subjected to voice recognition to obtain the light effect voice control text, so that the light effect voice control signal is converted into a text form, and subsequent semantic processing and analysis are facilitated.
In one embodiment of the present application, the semantic optimization and understanding module 130 includes: the voice signal semantic optimization and perfection unit, which passes the light effect voice control text through an instruction semantic optimizer based on an AIGC model to obtain a semantically perfected light effect voice control text; and the voice signal semantic understanding unit, which performs word segmentation on the semantically perfected light effect voice control text and then passes it through a context encoder comprising a word embedding layer to obtain a light effect voice control text semantic understanding feature vector as the light effect voice control text semantic understanding features.
The instruction semantic optimizer, based on the AIGC model, performs semantic optimization on the light effect voice control text. Through the AIGC model, the voice control text can be deeply understood and optimized, so that the generated instruction is more accurate and clear and better matches the user's intention, improving the accuracy of voice control and the user experience.
The semantically optimized light effect voice control text is then processed in two steps: word segmentation and application of a context encoder. First, the voice control text is segmented into meaningful words or phrases to better capture its meaning. Then the segmented text is converted into a semantic understanding feature vector by a context encoder comprising a word embedding layer; this feature vector encodes a deep understanding of the light effect voice control text and captures its semantic information and contextual relationships.
It should be understood that the semantic optimization and perfection unit optimizes the voice control text through the AIGC model so that the generated instruction is more accurate and clear and better matches the user's intention, reducing control errors caused by speech recognition mistakes or semantic ambiguity. The semantic understanding unit, through word segmentation and the context encoder, better captures the meaning of the light effect voice control text, improving the system's understanding of the user's intent and enabling it to generate the corresponding light control instructions more accurately. Through semantic optimization and understanding, the system can better grasp the user's needs and produce corresponding light control instructions, offering a more intelligent, convenient, and personalized user experience in which the desired light effect is easily achieved by voice.
Next, because each user has different speaking habits and speech recognition technology may introduce misrecognitions or inaccuracies, the converted text may contain semantic ambiguities or errors. Therefore, in the technical scheme of the present application, the light effect voice control text is further processed by an instruction semantic optimizer based on an AIGC model to obtain a semantically perfected light effect voice control text. The optimizer further analyzes and understands the semantic content of the text, performing semantic analysis, grammar correction, and instruction parsing, thereby completing the optimization of the instruction text and improving its accuracy and semantic integrity.
The instruction semantic optimizer based on the AIGC (AI-Generated Content) model is an artificial-intelligence system used to semantically optimize the voice control text and obtain a semantically perfected light effect voice control text.
The AIGC model is a powerful dialogue-system model that combines natural language processing, deep learning, and semantic understanding. After large-scale training and learning, it can deeply understand and optimize input text and has strong capabilities for semantic analysis and generation.
In the instruction semantic optimizer, the AIGC model processes the light effect voice control text, and semantic optimization can be achieved through the following steps. The AIGC model first performs semantic parsing of the voice control text, extracting and analyzing its key information and semantic structure, which helps reveal the user's intent and requirements. Based on the parsed semantic information, the model then optimizes the text: eliminating ambiguity, correcting errors, and supplementing missing information, so that the generated text is more accurate, clear, and semantically consistent. The model can also refine its understanding using context, taking into account factors such as previous dialogue history, user preferences, and environmental conditions, to better understand the user's needs and generate a corresponding light effect voice control text.
Through the steps, the instruction semantic optimizer based on the AIGC model can perform deep understanding and optimization on the lamplight effect voice control text, so that the generated text is more in line with the intention of a user, and the accuracy and the user experience of voice control are improved.
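The parsing, correction, and completion steps above can be mimicked by a tiny rule-based stand-in. The real optimizer is a trained generative model, so the correction table, default room, and context handling below are purely illustrative assumptions:

```python
# Rule-based stand-in for the AIGC instruction semantic optimizer.
# The tables and context keys are invented for this example.

ASR_CORRECTIONS = {"reed": "red", "blew": "blue"}  # likely ASR confusions
DEFAULT_ROOM = "living room"                       # fallback completion

def optimize_instruction(text: str, context: dict) -> str:
    words = text.lower().split()
    # Error correction: replace likely misrecognized words.
    words = [ASR_CORRECTIONS.get(w, w) for w in words]
    # Completion: if no room is named, supplement one from context.
    if not any(w.endswith("room") for w in words):
        words += ["in", "the", context.get("room", DEFAULT_ROOM)]
    return " ".join(words)

# "reed" is corrected to "red" and the missing room is filled in
# from the (hypothetical) dialogue context.
optimized = optimize_instruction("turn the lights reed", {"room": "bedroom"})
```

This shows the shape of the optimization, error correction followed by context-based completion, even though the real model performs these steps with learned representations rather than lookup tables.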
In one embodiment of the present application, the speech signal semantic understanding unit includes: a word segmentation subunit for segmenting the semantically perfected light effect voice control text into a word sequence consisting of a plurality of words; a mapping subunit for mapping each word in the word sequence to a word vector using the word embedding layer of the context encoder to obtain a sequence of word vectors; and a coding subunit for performing global context semantic coding on the sequence of word vectors using the context encoder to obtain the light effect voice control text semantic understanding feature vector.
The coding subunit is configured to: arrange the sequence of word vectors one-dimensionally to obtain a global word vector; calculate the product of the global word vector and the transpose of each word vector in the sequence to obtain a plurality of self-attention association matrices; standardize each self-attention association matrix to obtain a plurality of standardized self-attention association matrices; pass each standardized self-attention association matrix through a Softmax classification function to obtain a plurality of probability values; and weight each word vector in the sequence by the corresponding probability value to obtain the light effect voice control text semantic understanding feature vector.
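The coding subunit's steps can be sketched in NumPy roughly as follows. The patent does not specify exact tensor shapes, so this follows a conventional dot-product attention reading of the description, with invented dimensions (5 words, 8-dimensional vectors):

```python
import numpy as np

# Toy sequence of word vectors standing in for the embedded text.
rng = np.random.default_rng(0)
word_vectors = rng.standard_normal((5, 8))  # 5 words, dimension 8

# 1. One-dimensional arrangement: stack the sequence into one matrix.
global_matrix = word_vectors                           # shape (5, 8)

# 2. Products with each word vector's transpose -> association scores.
scores = global_matrix @ word_vectors.T                # shape (5, 5)

# 3. Standardize the association scores row by row.
scores = (scores - scores.mean(axis=-1, keepdims=True)) / (
    scores.std(axis=-1, keepdims=True) + 1e-8)

# 4. Softmax -> probability weights per word.
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# 5. Probability-weighted sum of the word vectors, then pooling,
#    gives the semantic understanding feature vector.
features = weights @ word_vectors                      # shape (5, 8)
semantic_feature_vector = features.mean(axis=0)        # shape (8,)
```

Each row of `weights` sums to one, so every output vector is a convex combination of all word vectors, which is how each word picks up global context.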
Through word segmentation and the application of a context encoder, the contextual semantic associations between words in the light effect voice control text can be captured, meaning the system can understand the meaning and relationships of words in a particular context and thus grasp the user's intent more accurately. By extracting the semantic understanding feature vector of the light effect voice control text, the system can understand the semantic information of the text more deeply, improving its grasp of the user's intention and enabling it to generate corresponding light control instructions more accurately. Based on the global context semantic association features, the system can understand and encode the whole light effect voice control text as a unit, maintaining semantic consistency and avoiding cases where the understanding of a single word conflicts with the overall meaning. With more accurate semantic understanding feature vectors, the system can better understand the user's needs and generate corresponding light control instructions, improving control accuracy and avoiding incorrect operations caused by misunderstanding the user's intent.
Word segmentation processing is performed on the semantically perfected light effect voice control text, and the text is then encoded by a context encoder comprising a word embedding layer, so that global context semantic association feature information can be extracted and the semantic understanding feature vector of the light effect voice control text obtained. This processing can enhance semantic understanding capability, improve control accuracy, and improve the user experience.
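As a minimal stand-in for the word segmentation step, a naive tokenizer is sketched below. Real Chinese word segmentation would use a dedicated segmenter; this whitespace/punctuation split is only to make the later encoding steps concrete.

```python
import re

def segment(text):
    """Naive word segmentation stand-in: split on non-word characters
    and lowercase. A production system would use a proper segmenter."""
    return [tok for tok in re.split(r"[^\w]+", text.lower()) if tok]
```

For example, an English light-control utterance is split into a word sequence that the word embedding layer can then map to vectors.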
Further, after the light effect voice control signal input by the user is converted into the semantically complete light effect voice control text, the text is further subjected to semantic understanding so that the corresponding control of the LED light can be performed. Based on this, in the technical scheme of the application, word segmentation is further performed on the semantically perfected light effect voice control text, and the text is then encoded by a context encoder comprising a word embedding layer, so that the global context semantic association feature information of all words in the semantically perfected light effect voice control text is extracted, thereby obtaining the semantic understanding feature vector of the light effect voice control text.
A context encoder that includes a word embedding layer is a neural network model that converts a sequence of text (e.g., voice control text) into a semantically understood feature vector representation. The context encoder is typically composed of multiple recurrent neural networks (e.g., long short-term memory networks, LSTM) or self-attention mechanisms (e.g., Transformers). The word embedding layer converts the input discrete words into continuous vector representations and captures the semantic information of the words.
Each word is mapped to a continuous low-dimensional vector representation that captures the semantic information of the word, and distances in the vector space may reflect semantic similarity between words. The word embedding vectors are input into a recurrent neural network or self-attention mechanism to model the entire text sequence; these models take into account the context information of each word and encode it into a hidden state. Through the iterative process of the recurrent neural network or the self-attention mechanism, the context information of the whole text sequence is gradually encoded into the hidden state, so that each word obtains the global context semantic association feature. Finally, a feature vector for semantic understanding can be extracted from the last hidden state or the whole hidden state sequence, and this feature vector can be used for subsequent tasks such as semantic analysis and instruction generation.
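The embedding-then-recurrence pipeline described above can be illustrated with a toy encoder: a lookup table plays the role of the word embedding layer, and a simple tanh recurrence stands in for the LSTM or self-attention mechanism. All weights are random, and the vocabulary size and dimensions are arbitrary choices; the point is only the data flow from discrete tokens to a final hidden-state feature vector.

```python
import numpy as np

def context_encode(token_ids, vocab_size=1000, embed_dim=16, hidden_dim=8, seed=0):
    """Toy context encoder: a word embedding layer followed by a plain
    recurrent pass. The final hidden state serves as the semantic
    understanding feature vector (weights are random, for illustration)."""
    rng = np.random.default_rng(seed)
    E = rng.normal(scale=0.1, size=(vocab_size, embed_dim))     # word embedding layer
    W_xh = rng.normal(scale=0.1, size=(embed_dim, hidden_dim))  # input-to-hidden
    W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim)) # hidden-to-hidden
    h = np.zeros(hidden_dim)
    for t in token_ids:                     # iterate over the text sequence
        x = E[t]                            # discrete word -> continuous vector
        h = np.tanh(x @ W_xh + h @ W_hh)    # fold context into the hidden state
    return h                                # global-context feature vector
```

Each step mixes the current word's embedding with the accumulated hidden state, which is how the context information of earlier words reaches the final feature vector.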
The light effect speech control text can be converted into a semantically understood feature vector representation by a context encoder comprising a word embedding layer. Such feature vectors can capture semantic information of terms and global contextual semantic association features, providing useful semantic representations for subsequent processing and analysis.
In one embodiment of the present application, the light control instruction generating module 140 includes: a feature gain unit, used for performing a distribution gain based on a probability density feature imitation paradigm on the semantic understanding feature vector of the light effect voice control text to obtain a gained light effect voice control text semantic understanding feature vector; a light mode detection unit, used for passing the gained light effect voice control text semantic understanding feature vector through a classifier to obtain a classification result, the classification result being used for representing the LED light mode label; and a light control unit, used for generating LED light control instructions based on the classification result.
In particular, in the technical scheme of the application, in the process of obtaining the semantic understanding feature vector of the light effect voice control text through a context encoder comprising a word embedding layer after word segmentation processing of the semantically perfected light effect voice control text, the context encoder performs context semantic encoding on the sequence of word embedding vectors of the semantically perfected light effect voice control text using a transformer mechanism, so as to obtain the sequence of context semantic association feature vectors of the words of the semantically perfected light effect voice control text. This sequence of context semantic association feature vectors is then fused in a cascading manner to obtain the semantic understanding feature vector of the light effect voice control text. It is considered that, although the semantically perfected light effect voice control text has been subjected to semantic expression optimization by the instruction semantic optimizer based on the AIGC model, semantic ambiguity or text noise may still exist in it. If the sequence of context semantic association feature vectors is regarded as a foreground object feature, the feature fusion mode of feature cascading also introduces background fusion noise related to the feature distribution interference of that sequence. Therefore, it is desirable to enhance the expression effect based on the distribution characteristics of the semantic understanding feature vector of the light effect voice control text.
Therefore, the applicant of the application performs the distribution gain based on the probability density feature imitation paradigm on the semantic understanding feature vector of the light effect voice control text, which is specifically expressed as follows: the distribution gain based on the probability density feature imitation paradigm is performed on the semantic understanding feature vector of the light effect voice control text using an optimization formula (reproduced as an image in the original publication) to obtain the gained light effect voice control text semantic understanding feature vector;
wherein V is the semantic understanding feature vector of the light effect voice control text, v_i is the feature value of the i-th position of that vector, L is the length of the vector, ‖V‖₂² represents the square of its two-norm, α is a weighted hyperparameter, exp(·) represents the exponential operation, and v_i′ is the feature value of the i-th position of the gained light effect voice control text semantic understanding feature vector.
Here, based on the feature imitation paradigm of the standard Cauchy distribution with respect to the natural Gaussian distribution on probability density, the distribution gain based on the probability density feature imitation paradigm can use the feature scale as an imitation mask to distinguish foreground object features from background distribution noise in the high-dimensional feature space. In this way, semantic cognition distribution soft matching of the feature space mapping is performed on the high-dimensional space based on the hierarchical semantics of the high-dimensional features, an unconstrained distribution gain of the high-dimensional feature distribution is obtained, the expression effect of the light effect voice control text semantic understanding feature vector based on its feature distribution characteristics is improved, and the accuracy of the classification result obtained by passing the vector through the classifier is improved. In this way, the control instruction of the LED light can be automatically generated based on the user's light effect voice control signal, realizing intelligent control of the LED light effect. An intelligent interactive experience of controlling the LED light effect through voice input is thus achieved, improving the user's convenience of use and the individuation of the light effect, thereby improving the user's experience.
In one embodiment of the present application, the light pattern detection unit includes: a full-connection coding subunit, used for performing full-connection coding on the gained light effect voice control text semantic understanding feature vector using a plurality of fully connected layers of the classifier to obtain a classification feature vector; and a classification subunit, used for passing the classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
Then, the gained light effect voice control text semantic understanding feature vector is passed through a classifier to obtain a classification result, and the classification result is used for representing the LED light mode label. Specifically, the classification label of the classifier is the LED light mode label, so after the classification result is obtained, the LED light mode expressed by the user's voice signal can be identified based on the classification result and an LED light control instruction generated, thereby realizing intelligent control of the LED light effect.
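The classifier head (fully connected layers followed by a Softmax) can be sketched as follows. The layer shapes, the ReLU activation, and the mode labels ("warm", "cool", "party") are hypothetical placeholders, not taken from the patent.

```python
import numpy as np

def classify_light_mode(feature, weights, biases, labels):
    """Minimal classifier head: a stack of fully connected layers
    followed by a Softmax, returning an LED light mode label.
    `weights`/`biases` define the layers; `labels` names the modes."""
    h = feature
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)       # hidden fully connected layer + ReLU
    logits = h @ weights[-1] + biases[-1]    # final fully connected layer
    p = np.exp(logits - logits.max())
    p /= p.sum()                             # Softmax classification function
    return labels[int(p.argmax())], p        # label + class probabilities
```

The classification result (the arg-max label) would then be handed to the light control unit to emit the actual LED control instruction.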
In summary, the LED light interactive control system 100 according to the embodiment of the present application has been illustrated. By receiving a light effect voice control signal input by the user and introducing a data processing and analysis algorithm at the back end to optimize and analyze the voice control signal, the system generates control instructions for the LED light and realizes intelligent control of the LED light effect. In this way, an intelligent interactive experience of controlling the LED light effect through voice input can be achieved, improving the user's convenience of use and the individuation of the light effect, thereby improving the user's experience.
As described above, the LED light interactive control system 100 according to the embodiment of the present application may be implemented in various terminal devices, for example, a server for LED light interactive control, etc. In one example, the LED light interactive control system 100 according to an embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the LED light interactive control system 100 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the LED light interactive control system 100 can also be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the LED light interactive control system 100 and the terminal device may be separate devices, and the LED light interactive control system 100 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information according to an agreed data format.
In one embodiment of the present application, fig. 2 is a flowchart of an LED light interactive control method according to an embodiment of the present application. Fig. 3 is a schematic diagram of an LED light interactive control method according to an embodiment of the present application. As shown in fig. 2 and 3, the LED light interaction control method includes: 210, receiving a light effect voice control signal input by a user; 220, performing voice recognition on the light effect voice control signal to obtain a light effect voice control text; 230, carrying out semantic optimization and semantic understanding on the light effect voice control text to obtain light effect voice control text semantic understanding characteristics; and 240, generating LED light control instructions based on the light effect voice control text semantic understanding characteristics.
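The four method steps (210-240) can be sketched as a pipeline of pluggable stages. The callables below are stand-ins for the modules described in the text (speech recognizer, AIGC-based semantic optimizer, context encoder, instruction generator), not a real API.

```python
def led_light_interactive_control(audio_signal, recognizer, optimizer, encoder, generator):
    """Sketch of the LED light interactive control method, steps 210-240.
    Each callable is a hypothetical stand-in for one module of the system."""
    text = recognizer(audio_signal)   # 220: speech recognition -> control text
    text = optimizer(text)            # 230a: semantic optimization of the text
    features = encoder(text)          # 230b: semantic understanding features
    return generator(features)        # 240: LED light control instruction
```

For example, wiring in trivial lambdas shows the data flow from raw audio to a control instruction without committing to any particular model.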
It will be appreciated by those skilled in the art that the specific operation of each step in the above-described LED light interactive control method has been described in detail in the above description of the LED light interactive control system with reference to fig. 1, and thus, repetitive description thereof will be omitted.
Fig. 4 is an application scenario diagram of an LED light interactive control system according to an embodiment of the present application. As shown in fig. 4, in the application scenario, first, a light effect voice control signal (e.g., C as illustrated in fig. 4) input by a user is received; the obtained light effect voice control signal is then input to a server (e.g., S as illustrated in fig. 4) deployed with an LED light interactive control algorithm, wherein the server is capable of processing the light effect voice control signal based on the LED light interactive control algorithm to generate an LED light control instruction.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be considered as essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (6)

1. An LED light interactive control system, comprising:
the voice control signal receiving module is used for receiving a light effect voice control signal input by a user;
the voice recognition module is used for carrying out voice recognition on the light effect voice control signal to obtain a light effect voice control text;
the semantic optimization and understanding module is used for carrying out semantic optimization and semantic understanding on the light effect voice control text so as to obtain the semantic understanding characteristics of the light effect voice control text;
the light control instruction generation module is used for controlling text semantic understanding characteristics based on the light effect voice and generating an LED light control instruction;
the semantic optimization and understanding module comprises:
the voice signal semantic optimization and perfection unit is used for enabling the lamplight effect voice control text to pass through an instruction semantic optimizer based on an AIGC model to obtain a semantic perfected lamplight effect voice control text;
and the voice signal semantic understanding unit is used for carrying out word segmentation processing on the semantic perfect lamplight effect voice control text and then obtaining lamplight effect voice control text semantic understanding feature vectors serving as lamplight effect voice control text semantic understanding features through a context encoder comprising a word embedding layer.
2. The LED light interactive control system according to claim 1, wherein the speech signal semantic understanding unit comprises:
the word segmentation subunit is used for carrying out word segmentation processing on the semantic perfect light effect voice control text so as to convert the semantic perfect light effect voice control text into a word sequence consisting of a plurality of words;
a mapping subunit, configured to map each word in the word sequence to a word vector using a word embedding layer of the context encoder that includes the word embedding layer to obtain a sequence of word vectors;
and the coding subunit is used for carrying out global-based context semantic coding on the sequence of the word vectors by using the context encoder comprising the word embedding layer so as to obtain the lamplight effect voice control text semantic understanding feature vector.
3. The LED light interactive control system of claim 2, wherein said encoding subunit is configured to:
one-dimensional arrangement is carried out on the sequence of the word vectors to obtain global word vectors;
calculating the product between the global word vector and the transpose vector of each word vector in the sequence of word vectors to obtain a plurality of self-attention association matrices;
respectively carrying out standardization processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of standardized self-attention correlation matrices;
each normalized self-attention correlation matrix in the normalized self-attention correlation matrices is subjected to a Softmax classification function to obtain a plurality of probability values;
and weighting each word vector in the sequence of word vectors by taking each probability value in the plurality of probability values as a weight to obtain the semantic understanding feature vector of the light effect voice control text.
4. The LED light interactive control system of claim 3, wherein said light control command generation module comprises:
the feature gain unit is used for carrying out distribution gain based on a probability density feature imitation paradigm on the semantic understanding feature vector of the lamplight effect voice control text so as to obtain the semantic understanding feature vector of the lamplight effect voice control text after gain;
the light mode detection unit is used for enabling the gained light effect voice control text semantic understanding feature vector to pass through the classifier to obtain a classification result, and the classification result is used for representing the LED light mode label;
and the light control unit is used for generating LED light control instructions based on the classification result.
5. The LED light interactive control system according to claim 4, wherein the feature gain unit is configured to: perform the distribution gain based on the probability density feature imitation paradigm on the semantic understanding feature vector of the light effect voice control text using the following optimization formula to obtain the gained light effect voice control text semantic understanding feature vector;
wherein, in the optimization formula (reproduced as an image in the original publication), V is the semantic understanding feature vector of the light effect voice control text, v_i is the feature value of the i-th position of that vector, L is the length of the vector, ‖V‖₂² represents the square of its two-norm, α is a weighted hyperparameter, exp(·) represents the exponential operation, and v_i′ is the feature value of the i-th position of the gained light effect voice control text semantic understanding feature vector.
6. The LED light interactive control system according to claim 5, wherein the light pattern detection unit comprises:
the full-connection coding subunit is used for performing full-connection coding on the gained light effect voice control text semantic understanding feature vector using a plurality of fully connected layers of the classifier to obtain a classification feature vector;
and the classification subunit is used for passing the classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
CN202311330750.0A 2023-10-16 2023-10-16 LED lamplight interaction control system Withdrawn CN117082700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311330750.0A CN117082700A (en) 2023-10-16 2023-10-16 LED lamplight interaction control system

Publications (1)

Publication Number Publication Date
CN117082700A true CN117082700A (en) 2023-11-17

Family

ID=88706414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311330750.0A Withdrawn CN117082700A (en) 2023-10-16 2023-10-16 LED lamplight interaction control system

Country Status (1)

Country Link
CN (1) CN117082700A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117310591A (en) * 2023-11-28 2023-12-29 广州思林杰科技股份有限公司 Small-size equipment for testing equipment calibration accuracy detection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108804667A (en) * 2018-06-08 2018-11-13 百度在线网络技术(北京)有限公司 The method and apparatus of information for rendering
US20220300711A1 (en) * 2021-03-18 2022-09-22 Augmented Intelligence Technologies, Inc. System and method for natural language processing for document sequences
CN115580967A (en) * 2022-10-12 2023-01-06 湖北文理学院 Sound control integrated control system and method for vehicle light
CN116797417A (en) * 2023-05-15 2023-09-22 贵州大学 Intelligent auxiliary system based on large language model
CN116842964A (en) * 2023-07-18 2023-10-03 杭州鑫策科技有限公司 Business process generation method and system based on semantic analysis
CN116844217A (en) * 2023-08-30 2023-10-03 成都睿瞳科技有限责任公司 Image processing system and method for generating face data

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117310591A (en) * 2023-11-28 2023-12-29 广州思林杰科技股份有限公司 Small-size equipment for testing equipment calibration accuracy detection
CN117310591B (en) * 2023-11-28 2024-03-19 广州思林杰科技股份有限公司 Small-size equipment for testing equipment calibration accuracy detection

Similar Documents

Publication Publication Date Title
CN117082700A (en) LED lamplight interaction control system
WO2016112634A1 (en) Voice recognition system and method of robot system
CN109359293A (en) Mongolian name entity recognition method neural network based and its identifying system
CN112599124A (en) Voice scheduling method and system for power grid scheduling
CN110265012A (en) It can interactive intelligence voice home control device and control method based on open source hardware
CN111666381B (en) Task type question-answer interaction system oriented to intelligent control
WO2021147041A1 (en) Semantic analysis method and apparatus, device, and storage medium
CN101794126A (en) Wireless intelligent home appliance voice control system
CN106023995A (en) Voice recognition method and wearable voice control device using the method
CN109542233A (en) A kind of lamp control system based on dynamic gesture and recognition of face
CN117234341B (en) Virtual reality man-machine interaction method and system based on artificial intelligence
CN112331183A (en) Non-parallel corpus voice conversion method and system based on autoregressive network
CN112634918B (en) System and method for converting voice of any speaker based on acoustic posterior probability
CN113611306A (en) Intelligent household voice control method and system based on user habits and storage medium
CN111640435A (en) Method and device for controlling infrared household appliances based on intelligent sound box
CN105700359A (en) Method and system for controlling smart home through speech recognition
CN109949803B (en) Building service facility control method and system based on semantic instruction intelligent identification
CN109767767A (en) A kind of voice interactive method, system, electronic equipment and storage medium
CN113239166B (en) Automatic man-machine interaction method based on semantic knowledge enhancement
WO2023035397A1 (en) Speech recognition method, apparatus and device, and storage medium
CN118019187A (en) Remote control system and method for LED projection lamp
CN112420053A (en) Intelligent interactive man-machine conversation system
CN114627859A (en) Method and system for recognizing electronic photo frame in offline semantic manner
TW201516756A (en) Intelligent voice control system and method therefor
CN111933139A (en) Off-line voice recognition method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20231117