CN113963696B - Voice control method and system for curtain motor - Google Patents

Voice control method and system for curtain motor

Info

Publication number
CN113963696B
CN113963696B (application CN202111208398.4A)
Authority
CN
China
Prior art keywords
curtain
mac address
keywords
output result
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111208398.4A
Other languages
Chinese (zh)
Other versions
CN113963696A (en)
Inventor
李乔娜
郭方斌
于洪志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai Fangrui Technology Co ltd
Original Assignee
Shenzhen Qianhai Fangrui Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai Fangrui Technology Co ltd filed Critical Shenzhen Qianhai Fangrui Technology Co ltd
Priority to CN202111208398.4A priority Critical patent/CN113963696B/en
Publication of CN113963696A publication Critical patent/CN113963696A/en
Application granted granted Critical
Publication of CN113963696B publication Critical patent/CN113963696B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3343 Query execution using phonetics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/09 Mapping addresses
    • H04L61/25 Mapping addresses of the same type
    • H04L61/2503 Translation of Internet protocol [IP] addresses
    • H04L61/255 Maintenance or indexing of mapping tables
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Power-Operated Mechanisms For Wings (AREA)

Abstract

The application relates to a curtain motor voice control method and system. The method comprises the following steps: a curtain device receives an original voice message and extracts a source MAC address of the original voice message; the curtain device queries, according to a mapping relationship between MAC addresses and positions, a first curtain motor corresponding to the source MAC address; the curtain device recognizes the original voice message to determine text information, performs natural language recognition on the text information to determine a plurality of keywords of the text information, matches the plurality of keywords with keywords corresponding to control commands, and, upon determining that the plurality of keywords contain a keyword corresponding to a control command, sends the control command corresponding to the keyword to the first curtain motor. The technical solution provided by the application has the advantage of high accuracy.

Description

Voice control method and system for curtain motor
Technical Field
The application relates to the technical field of communication and electronics, in particular to a curtain motor voice control method and system.
Background
A voice controller is a type of controller that is driven by spoken language in a man-machine system. Its main component is a computer speech recognition system. The speech recognition method generally compares key features of the frequency spectrum or pronunciation of the speech signal with the speech data of each vocabulary item stored in the computer, recognizes the speech on that basis, and then performs different control functions according to a predefined program.
Existing curtain voice control mainly realizes opening and closing of a curtain through voice recognition of the user. However, because a home usually has several curtains, the wrong curtain may be opened or closed, which reduces the accuracy of voice control and degrades the user experience.
Disclosure of Invention
The embodiment of the application provides a curtain motor voice control method and system, which can identify the specific curtain to be controlled, thereby realizing accurate voice control of the curtain and improving user experience.
In a first aspect, an embodiment of the present application provides a curtain motor voice control method, including the following steps:
the curtain device receives an original voice message and extracts a source MAC address of the original voice message; the curtain device queries, according to a mapping relationship between MAC addresses and positions, a first curtain motor corresponding to the source MAC address; the curtain device recognizes the original voice message to determine text information, performs natural language recognition on the text information to determine a plurality of keywords of the text information, matches the plurality of keywords with keywords corresponding to control commands, and, upon determining that the plurality of keywords contain a keyword corresponding to a control command, sends the control command corresponding to the keyword to the first curtain motor.
Optionally, the source MAC address is the MAC address of a device that forwards the original voice message or the MAC address of the device that collected the original voice message.
Optionally, the mapping relationship between the MAC address and the location is configured by a user or automatically.
Optionally, the identifying, by the curtain device, the original voice message to determine the text information specifically includes:
forming the original voice into input data, inputting the input data into an RNN model or an LSTM model for recognition to obtain an output result, and determining the text information according to the output result.
Optionally, if the output result is obtained through RNN model recognition,
the input data is input into the RNN model to obtain an initial output result; when it is determined that the initial output result does not contain a keyword corresponding to a control command, it is determined whether the initial output result contains "curtain"; if it does, the time t corresponding to "curtain" is determined, the confidence rates of the output results at the n times before time t are acquired, and the time t-i corresponding to the minimum confidence rate is extracted from the n confidence rates; if i = n or n-i = 1, the hidden-layer input at time t-i is replaced, from the hidden-layer output at time t-i-1, with the average of the hidden-layer outputs between time t-i and time t, and the recognition is then recalculated to obtain an updated result; and if one of the 3 words corresponding to the top 3 confidence rates of the updated result is "open" or "close", that word replaces the output result at time t-i.
In a second aspect, a curtain motor voice control system is provided, the system comprising:
a receiving unit for receiving an original voice message;
a processing unit, configured to extract a source MAC address of the original voice message; query, according to a mapping relationship between MAC addresses and positions, a first curtain motor corresponding to the source MAC address; recognize the original voice message to determine text information, perform natural language recognition on the text information to determine a plurality of keywords of the text information, and, when the plurality of keywords are matched with keywords corresponding to control commands and it is determined that the plurality of keywords contain a keyword corresponding to a control command, send the control command corresponding to the keyword to the first curtain motor.
Optionally, the source MAC address is the MAC address of a device that forwards the original voice message or the MAC address of the device that collected the original voice message.
Optionally, the mapping relationship between the MAC address and the location is configured by a user or automatically.
Optionally,
the processing unit is specifically configured to: if the output result is obtained through RNN model recognition, input the input data into the RNN model to obtain an initial output result; when it is determined that the initial output result does not contain a keyword corresponding to a control command, determine whether the initial output result contains "curtain"; if it does, determine the time t corresponding to "curtain", acquire the confidence rates of the output results at the n times before time t, and extract from the n confidence rates the time t-i corresponding to the minimum confidence rate; if i = n or n-i = 1, replace the hidden-layer input at time t-i, from the hidden-layer output at time t-i-1, with the average of the hidden-layer outputs between time t-i and time t, then recalculate to obtain an updated result; and if one of the 3 words corresponding to the top 3 confidence rates of the updated result is "open" or "close", replace the output result at time t-i with that word.
In a third aspect, an embodiment of the present application provides a server, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that, in the technical solution of the application, the curtain device receives an original voice message and extracts the source MAC address of the original voice message; the curtain device queries, according to the mapping relationship between MAC addresses and positions, a first curtain motor corresponding to the source MAC address; the curtain device recognizes the original voice message to determine text information, performs natural language recognition on the text information to determine a plurality of keywords of the text information, matches the plurality of keywords with keywords corresponding to control commands, and, upon determining that the plurality of keywords contain a keyword corresponding to a control command, sends the control command corresponding to the keyword to the first curtain motor. Therefore, the specific curtain to be opened or closed can be determined, which improves the accuracy of curtain control and improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a curtain device according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a voice control method for a curtain motor according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a curtain motor voice control system according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a curtain device, which may include a curtain motor and may be connected to an intelligent device; of course, the curtain device may also be integrated with the intelligent device. As shown in fig. 1, the intelligent device may include: a processor, a memory, an audio component (such as a microphone), a camera, a communication component (such as a Bluetooth module, WiFi, or a mobile communication module), and a display screen. The processor, the memory, the audio component, the camera, and the display screen may be connected through a bus, and may of course be connected in other ways; the application does not limit the specific connection manner of the above components. The intelligent device may be a device used in a smart home, such as an intelligent sound box, a smartphone, a smart television, or a smart refrigerator.
Referring to fig. 2, fig. 2 provides a voice control method for a curtain motor, which may be implemented by the curtain device shown in fig. 1. As shown in fig. 2, the method includes the following steps:
step S201, curtain equipment receives an original voice message and extracts a source MAC address of the original voice message;
the source MAC address may be a device that forwards the original voice information, may also be a MAC address of the original voice information acquisition device, and may also be a MAC address of a device that forwards the original voice information.
Step S202, the curtain device queries, according to the mapping relationship between MAC addresses and positions, a first curtain motor corresponding to the source MAC address;
For example, the mapping relationship between the MAC address and the position may be configured by the user; for instance, the MAC address of Zhang San's mobile phone may be configured to correspond to the curtain motor of the bedroom, and the MAC address of Li Si's mobile phone may be configured to correspond to the curtain motor of the living room.
In practical applications, the mapping relationship between the MAC address and the position may also be generated automatically; for example, the MAC address of the living room router corresponds to the curtain motor in the living room, and the MAC address of the master bedroom router corresponds to the curtain motor in the master bedroom.
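As an illustration only (the MAC addresses and motor names below are hypothetical, not taken from the patent), such a mapping can be kept as a simple lookup table on the curtain device:

from typing import Optional

# Hypothetical mapping from source MAC address to a curtain motor identifier;
# in practice the table would be filled in by user configuration or generated
# automatically from the routers the intelligent devices connect through.
MAC_TO_MOTOR = {
    "aa:bb:cc:dd:ee:01": "living_room_curtain_motor",
    "aa:bb:cc:dd:ee:02": "master_bedroom_curtain_motor",
}

def find_curtain_motor(source_mac: str) -> Optional[str]:
    """Return the curtain motor mapped to `source_mac`, or None if it is unknown."""
    return MAC_TO_MOTOR.get(source_mac.lower())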
Step S203, the curtain device recognizes the original voice message to determine text information, performs natural language recognition on the text information to determine a plurality of keywords of the text information, matches the plurality of keywords with keywords corresponding to control commands, and, upon determining that the plurality of keywords contain a keyword corresponding to a control command, sends the control command corresponding to the keyword to the first curtain motor.
By way of example, the keywords corresponding to the control commands include, but are not limited to: "open", "close", "curtain", "window shade", and the like, and the corresponding control command may be open or close.
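A minimal sketch of the keyword-matching step described above, assuming a simple keyword-to-command table; the table contents and the helper name are illustrative, not taken from the patent:

from typing import Optional

# Hypothetical keyword-to-command table; a real curtain device would load this
# from its configuration.
KEYWORD_TO_COMMAND = {
    "open": "OPEN",
    "draw back": "OPEN",
    "close": "CLOSE",
    "shut": "CLOSE",
}

def match_control_command(keywords: list) -> Optional[str]:
    """Return the control command whose keyword appears among `keywords`, if any."""
    for word in keywords:
        command = KEYWORD_TO_COMMAND.get(word.lower())
        if command is not None:
            return command
    return None  # no control-command keyword was found

# Example: keywords extracted from "please open the living room curtain"
assert match_control_command(["please", "open", "living", "room", "curtain"]) == "OPEN"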
The manner of voice control is explained below with a practical scenario. Opening and closing curtains is a relatively private matter: for example, a person at home says "open the curtain" but the bedroom curtain is opened instead, which is not what the user intended. Therefore, a way of distinguishing which curtain the user wants to operate is needed. For the user, the voice command generally just means "open the curtain"; the user rarely says "open the bedroom curtain" or "open the living room curtain" explicitly, so another auxiliary means is needed to determine which specific curtain the user refers to. This application does so through the MAC address carried with the voice message, because when the user issues a curtain command through an APP or speaks directly to an intelligent device, that device is connected to a particular router at home, and the router can easily distinguish which room the intelligent device is in. Therefore, in the technical solution of this application, the curtain device receives the original voice message and extracts its source MAC address; the curtain device queries, according to the mapping relationship between MAC addresses and positions, the first curtain motor corresponding to the source MAC address; the curtain device recognizes the original voice message to determine text information, performs natural language recognition on the text information to determine a plurality of keywords, matches the plurality of keywords with keywords corresponding to control commands, and, upon determining that the plurality of keywords contain a keyword corresponding to a control command, sends the control command corresponding to the keyword to the first curtain motor. In this way, the specific curtain to be opened or closed can be determined, which improves the accuracy of curtain control and improves the user experience.
For example, the curtain device recognizing the original voice message to determine the text information may specifically include:
forming the original voice into input data, inputting the input data into an RNN model or an LSTM model for recognition to obtain an output result, and determining the text information according to the output result.
For example, if the output result is obtained through RNN model recognition, the curtain device recognizing the original voice message to determine the text information may specifically include:
the input data is input into the RNN model to obtain an initial output result; when it is determined that the initial output result does not contain a keyword corresponding to a control command, it is determined whether the initial output result contains "curtain"; if it does, the time t corresponding to "curtain" is determined, the confidence rates of the output results at the n (generally 3-5) times before time t are acquired, and the time t-i corresponding to the minimum confidence rate is extracted from the n confidence rates; if i = n or n-i = 1, the hidden-layer input at time t-i is replaced, from the hidden-layer output at time t-i-1, with the average of the hidden-layer outputs between time t-i and time t, and the recognition is then recalculated to obtain an updated result; and if one of the 3 words corresponding to the top 3 confidence rates of the updated result is "open" or "close", that word replaces the output result at time t-i.
For example, the method may further include:
if n-i is greater than 1, the hidden-layer input at time t-i is replaced with the hidden-layer output at time t-i-1, the recognition is then recalculated to obtain an updated result, and if one of the 3 words corresponding to the top 3 confidence rates of the updated result is "open" or "close", that word replaces the output result at time t-i.
The principle is as follows: if the initial result contains "curtain" but no control command, the speech recognition result very likely does not match what was actually said, so the recognition needs to be performed again with a relaxed matching criterion, namely checking whether a specific word appears among the words corresponding to the top 3 confidence rates. The time with the lowest confidence rate among the n preceding times is chosen for re-recognition because a higher confidence rate indicates higher recognition accuracy, so the lowest-confidence output is the one most likely to be wrong; re-computing it further improves recognition accuracy.
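The following rough sketch (not the patent's reference implementation) shows one way the confidence-based re-recognition heuristic described above could be organized in Python with NumPy. The TinyRNN class, its step() interface, the vocabulary handling, and the choice of "open"/"close" as the target words are assumptions made purely for illustration; a real system would use a trained acoustic and language model.

import numpy as np

class TinyRNN:
    """Toy single-layer RNN used only to illustrate the heuristic; the weights
    are random here, whereas a real system would use a trained model."""
    def __init__(self, vocab, hidden_size=16, seed=0):
        rng = np.random.default_rng(seed)
        self.vocab = vocab
        v, h = len(vocab), hidden_size
        self.Wxh = rng.normal(scale=0.1, size=(h, v))
        self.Whh = rng.normal(scale=0.1, size=(h, h))
        self.Why = rng.normal(scale=0.1, size=(v, h))

    def step(self, x_index, h_prev):
        """One RNN step: return the new hidden state and a softmax over the vocabulary."""
        x = np.zeros(len(self.vocab))
        x[x_index] = 1.0
        h = np.tanh(self.Wxh @ x + self.Whh @ h_prev)
        logits = self.Why @ h
        probs = np.exp(logits - logits.max())
        return h, probs / probs.sum()

def rerecognize(rnn, inputs, outputs, confidences, hiddens, t, n=3):
    """Re-run recognition around time t, where "curtain" was recognized but no
    control command was found.

    outputs[k], confidences[k] and hiddens[k] are the word, confidence rate and
    hidden state produced at time k during the initial pass; inputs[k] is the
    input index fed to the RNN at time k.
    """
    window = range(t - n, t)                          # the n times before t
    t_minus_i = min(window, key=lambda k: confidences[k])
    i = t - t_minus_i
    if i == n or n - i == 1:
        # Replace the hidden-layer input at t-i (normally the hidden output at
        # t-i-1) with the average hidden output between t-i and t.
        h_in = np.mean([hiddens[k] for k in range(t_minus_i, t)], axis=0)
    else:
        # n-i > 1: keep the hidden output at t-i-1 as the hidden-layer input.
        h_in = hiddens[t_minus_i - 1]
    _, probs = rnn.step(inputs[t_minus_i], h_in)
    top3 = [rnn.vocab[j] for j in np.argsort(probs)[-3:]]  # relaxed matching criterion
    for word in ("open", "close"):
        if word in top3:
            outputs[t_minus_i] = word                 # substitute the low-confidence word
    return outputs

The design point illustrated here is that only the single lowest-confidence time step before "curtain" is recomputed, so the extra cost of the second recognition pass stays small.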
Referring to fig. 3, fig. 3 provides a voice control system for a curtain motor, the system comprising:
a receiving unit 301, configured to receive an original voice message;
a processing unit 302, configured to extract a source MAC address of the original voice message; query, according to a mapping relationship between MAC addresses and positions, a first curtain motor corresponding to the source MAC address; recognize the original voice message to determine text information, perform natural language recognition on the text information to determine a plurality of keywords of the text information, and, when the plurality of keywords are matched with keywords corresponding to control commands and it is determined that the plurality of keywords contain a keyword corresponding to a control command, send the control command corresponding to the keyword to the first curtain motor.
Optionally, the source MAC address is the MAC address of a device that forwards the original voice message or the MAC address of the device that collected the original voice message.
Optionally, the mapping relationship between the MAC address and the location is configured by a user or automatically.
Optionally, the processing unit is specifically configured to: if the output result is obtained through RNN model recognition, input the input data into the RNN model to obtain an initial output result; when it is determined that the initial output result does not contain a keyword corresponding to a control command, determine whether the initial output result contains "curtain"; if it does, determine the time t corresponding to "curtain", acquire the confidence rates of the output results at the n times before time t, and extract from the n confidence rates the time t-i corresponding to the minimum confidence rate; if i = n or n-i = 1, replace the hidden-layer input at time t-i, from the hidden-layer output at time t-i-1, with the average of the hidden-layer outputs between time t-i and time t, then recalculate to obtain an updated result; and if one of the 3 words corresponding to the top 3 confidence rates of the updated result is "open" or "close", replace the output result at time t-i with that word.
The processing unit shown in fig. 3 may also be configured to perform the exemplary scheme or the refinement scheme of the embodiment shown in fig. 2, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the above division of the units is only one type of logical function division, and other divisions may be used in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (7)

1. A curtain motor voice control method is characterized by comprising the following steps:
the curtain device receives the original voice message and extracts the source MAC address of the original voice message;
the curtain device inquires a first curtain motor corresponding to the source MAC address according to the mapping relation between the MAC address and the position;
the curtain device recognizes the original voice message to determine text information, performs natural language recognition on the text information to determine a plurality of keywords of the text information, matches the plurality of keywords with keywords corresponding to control commands, and, upon determining that the plurality of keywords contain a keyword corresponding to a control command, sends the control command corresponding to the keyword to the first curtain motor;
wherein the curtain device recognizing the original voice message to determine the text information specifically comprises:
forming the original voice into input data, inputting the input data into an RNN model for recognition to obtain an output result, and determining the text information according to the output result;
wherein, if the output result is obtained through RNN model recognition, the input data is input into the RNN model to obtain an initial output result; when it is determined that the initial output result does not contain a keyword corresponding to a control command, it is determined whether the initial output result contains "curtain"; if it does, the time t corresponding to "curtain" is determined, the confidence rates of the output results at the n times before time t are acquired, and the time t-i corresponding to the minimum confidence rate is extracted from the n confidence rates; if i = n or n-i = 1, the hidden-layer input at time t-i is replaced, from the hidden-layer output at time t-i-1, with the average of the hidden-layer outputs between time t-i and time t, and the recognition is then recalculated to obtain an updated result; and if one of the 3 words corresponding to the top 3 confidence rates of the updated result is "open" or "close", that word replaces the output result at time t-i.
2. The method of claim 1,
the source MAC address is the MAC address of the equipment for forwarding the original voice information or the MAC address of the original voice information acquisition equipment.
3. The method of claim 2,
the mapping relation between the MAC address and the position is configured by a user or automatically.
4. A curtain motor voice control system, the system comprising:
a receiving unit for receiving an original voice message;
a processing unit, configured to extract a source MAC address of the original voice message; query, according to a mapping relationship between MAC addresses and positions, a first curtain motor corresponding to the source MAC address; recognize the original voice message to determine text information, perform natural language recognition on the text information to determine a plurality of keywords of the text information, match the plurality of keywords with keywords corresponding to control commands, and, upon determining that the plurality of keywords contain a keyword corresponding to a control command, send the control command corresponding to the keyword to the first curtain motor;
wherein the processing unit is specifically configured to: if the output result is obtained through RNN model recognition, input the input data into the RNN model to obtain an initial output result; when it is determined that the initial output result does not contain a keyword corresponding to a control command, determine whether the initial output result contains "curtain"; if it does, determine the time t corresponding to "curtain", acquire the confidence rates of the output results at the n times before time t, and extract from the n confidence rates the time t-i corresponding to the minimum confidence rate; if i = n or n-i = 1, replace the hidden-layer input at time t-i, from the hidden-layer output at time t-i-1, with the average of the hidden-layer outputs between time t-i and time t, then recalculate to obtain an updated result; and if one of the 3 words corresponding to the top 3 confidence rates of the updated result is "open" or "close", replace the output result at time t-i with that word.
5. The system of claim 4,
the source MAC address is the MAC address of the equipment for forwarding the original voice information or the MAC address of the original voice information acquisition equipment.
6. The system of claim 5,
the mapping relation between the MAC address and the position is configured by a user or automatically.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-4.
CN202111208398.4A 2021-10-18 2021-10-18 Voice control method and system for curtain motor Active CN113963696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111208398.4A CN113963696B (en) 2021-10-18 2021-10-18 Voice control method and system for curtain motor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111208398.4A CN113963696B (en) 2021-10-18 2021-10-18 Voice control method and system for curtain motor

Publications (2)

Publication Number Publication Date
CN113963696A CN113963696A (en) 2022-01-21
CN113963696B true CN113963696B (en) 2022-07-08

Family

ID=79464903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111208398.4A Active CN113963696B (en) 2021-10-18 2021-10-18 Voice control method and system for curtain motor

Country Status (1)

Country Link
CN (1) CN113963696B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10325599B1 (en) * 2016-12-28 2019-06-18 Amazon Technologies, Inc. Message response routing
CN107689904A (en) * 2017-10-23 2018-02-13 深圳市敢为软件技术有限公司 Sound control method, device, Internet of things system and readable storage medium storing program for executing
CN108615528B (en) * 2018-03-30 2021-08-17 联想(北京)有限公司 Information processing method and electronic equipment
CN110727206A (en) * 2019-11-27 2020-01-24 广东瑞克斯智能科技有限公司 Curtain motor control system, method and device based on mobile internet
CN111128156A (en) * 2019-12-10 2020-05-08 上海雷盎云智能技术有限公司 Intelligent household equipment voice control method and device based on model training
CN112185373A (en) * 2020-09-07 2021-01-05 珠海格力电器股份有限公司 Method and device for controlling intelligent household equipment and sound box
CN113270104B (en) * 2021-07-19 2021-10-15 深圳市思特克电子技术开发有限公司 Artificial intelligence processing method and system for voice

Also Published As

Publication number Publication date
CN113963696A (en) 2022-01-21

Similar Documents

Publication Publication Date Title
US11114099B2 (en) Method of providing voice command and electronic device supporting the same
CN108447480B (en) Intelligent household equipment control method, intelligent voice terminal and network equipment
US20210118463A1 (en) Interactive server, control method thereof, and interactive system
CN107135443B (en) Signal processing method and electronic equipment
CN106663430B (en) Keyword detection for speaker-independent keyword models using user-specified keywords
CN105118257B (en) Intelligent control system and method
US20180102125A1 (en) Electronic device and method for controlling the same
CN106601248A (en) Smart home system based on distributed voice control
CN108182944A (en) Control the method, apparatus and intelligent terminal of intelligent terminal
CN108470568B (en) Intelligent device control method and device, storage medium and electronic device
US9984563B2 (en) Method and device for controlling subordinate electronic device or supporting control of subordinate electronic device by learning IR signal
CN111462741B (en) Voice data processing method, device and storage medium
CN110992937B (en) Language off-line identification method, terminal and readable storage medium
CN105100672A (en) Display apparatus and method for performing videotelephony using the same
CN109712610A (en) The method and apparatus of voice for identification
US20200257254A1 (en) Progressive profiling in an automation system
CN104575509A (en) Voice enhancement processing method and device
CN107172258A (en) A kind of method, device, terminal and storage medium for preserving associated person information
KR101595090B1 (en) Information searching method and apparatus using voice recognition
CN113963696B (en) Voice control method and system for curtain motor
JP2019032387A (en) Controller, program and control method
CN112700770A (en) Voice control method, sound box device, computing device and storage medium
KR20050049977A (en) Ubiquitous home-network system and the control method
CN115576216B (en) Information filling method and device based on voice control intelligent household appliance
CN113270099B (en) Intelligent voice extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant