CN115542759A - Intelligent household control method, system and storage medium based on multi-mode recognition - Google Patents
- Publication number
- CN115542759A (application CN202211181832.9A)
- Authority
- CN
- China
- Prior art keywords
- family
- information
- home control
- neural network
- script
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2642—Domotique, domestic, home control, automation, smart house
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention discloses an intelligent home control method based on multi-modal recognition, which comprises the following steps: acquiring a daily family information set in a family scene and constructing a neural network model from it; using the neural network model to adaptively predict real-time family information, so that an optimal prediction result for the family scene is obtained; compiling an intelligent home control strategy into an inference script that reasons over the optimal prediction result and real-time environment information to obtain an inference result; and controlling the smart home devices according to the inference result. In this way, highly accurate control information for the smart home devices is obtained, the automatically controlled devices cooperate with one another to perform their functions, and the human-computer interaction experience between the user and the smart home devices is improved.
Description
Technical Field
The invention relates to the field of intelligent home control, in particular to an intelligent home control method, an intelligent home control system and a storage medium based on multi-mode recognition.
Background
With the advent of the 5G Internet of Things era, embedded devices have become widely used in people's daily home life. Economic development and technological progress have changed people's consumption concepts and lifestyles, raising the demand for a better quality of life. Smart-home human-computer interaction technology therefore offers users a higher-quality living experience.
In current family scenes, smart-home human-computer interaction mainly relies on infrared remote control through a remote controller or a mobile phone, operated by keys or touch, or on voice and visual control of the smart home devices, which enables contactless operation. However, these control methods have limitations: the control information comes from a single source and has low accuracy, so they cannot provide users with a more comfortable and convenient home experience.
Disclosure of Invention
To overcome the defects described in the Background, namely that the control information for smart home devices comes from a single source, has low accuracy, and thus cannot provide users with a more comfortable and convenient home experience, the invention aims to provide an intelligent home control method based on multi-modal recognition.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the invention provides an intelligent home control method based on multi-mode recognition, which comprises the following steps:
acquiring a daily family information set in a family scene, and constructing a neural network model according to the daily family information set;
acquiring real-time family information in a family scene, inputting the real-time family information into the neural network model for prediction, and acquiring a prediction result;
constructing an intelligent home control strategy according to user habits, and compiling the intelligent home control strategy into an inference script;
acquiring real-time environment information in a family scene, performing multi-mode fusion recognition on the prediction result and the real-time environment information through a reasoning script to obtain a reasoning result, and controlling the intelligent home equipment according to the reasoning result.
In some possible embodiments, the family information includes image information and audio information, and a YOLO target detection neural network model is trained from the set of image information; and training a DNN deep neural network acoustic model according to the audio information set.
In some possible embodiments, the environmental information includes temperature, coordinates, humidity, and illumination intensity.
In some possible embodiments, the inference script is a Prolog language script.
In a second aspect of the invention, a smart home control system based on multi-modal recognition is provided, which comprises
A model training module: acquiring a daily family information set in a family scene, and constructing a neural network model according to the daily family information set;
a model prediction module: acquiring real-time family information in a family scene, inputting the real-time family information into the neural network model for prediction, and acquiring a prediction result;
the script compiling module: constructing an intelligent home control strategy according to user habits, and compiling the intelligent home control strategy into an inference script;
the reasoning control module: acquiring real-time environment information in a family scene, performing multi-mode fusion recognition on the prediction result and the real-time environment information through a reasoning script to obtain a reasoning result, and controlling the intelligent home equipment according to the reasoning result.
In some possible embodiments, the family information includes image information and audio information, and a YOLO target detection neural network model is trained according to the image information set; and training a DNN deep neural network acoustic model according to the audio information set.
In some possible embodiments, the environmental information includes temperature, coordinates, humidity, and illumination intensity.
In some possible embodiments, the inference script is a Prolog language script.
In a third aspect of the present invention, a computer storage medium is provided, where a computer program is stored on the computer storage medium, and the computer program, when executed by a processor, implements the steps of the above-mentioned smart home control method based on multi-modal recognition.
The invention has the beneficial effects that:
according to the intelligent home control method based on multi-mode recognition, the neural network model is used for conducting adaptive prediction on real-time home information, the optimal prediction result in a home scene can be obtained, the intelligent home control strategy is compiled into a reasoning script to carry out reasoning on the optimal prediction result and the real-time environment information, so that the intelligent home equipment control information with high accuracy is obtained, the automatic control intelligent home equipment is mutually cooperated and matched to play a function, and the human-computer interaction experience of a user and the intelligent home equipment is improved.
Drawings
FIG. 1 is a flowchart illustrating the overall steps of an intelligent home control method based on multi-modal recognition according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a hardware configuration according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an intelligent home control system based on multi-modal recognition according to an embodiment of the present invention.
Detailed Description
The following detailed description of preferred embodiments of the present invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention easier for those skilled in the art to understand, and thereby define the scope of protection of the invention more clearly.
The technical problem is as follows: in current family scenes, smart-home human-computer interaction mainly relies on infrared remote control through a remote controller or a mobile phone, operated by keys or touch, or on voice and visual control of the smart home devices, which enables contactless operation. However, these control methods have limitations: the control information comes from a single source and has low accuracy, so they cannot provide users with a more comfortable and convenient home experience.
To solve these problems, automatic control of the smart home devices in a home is achieved through multi-modal information fusion, with AI voice, visual recognition, and similar capabilities. The invention provides Embodiment 1, an intelligent home control method based on multi-modal recognition, which specifically includes the following steps:
s1, acquiring a daily family information set in a family scene, and constructing a neural network model according to the daily family information set. The family information includes image information, audio information (including video or photo, audio recording, etc.), and the like. S1 is specifically as follows: the method comprises the steps that a daily image information set and a daily audio information set in a family scene are obtained through a robot, wherein the robot is used for achieving functions of image information acquisition, audio information acquisition and the like in the family scene.
A YOLO target detection neural network model is constructed from the daily image information set. The model is divided into three main modules: a backbone feature-extraction network (Backbone) module that extracts features from the input image; a feature-enhancement network (Neck) module that fuses feature maps of different scales; and a prediction network (Head) module that predicts the positions of target anchor boxes. The YOLO target detection model is built from convolution layers, pooling layers, activation functions, and so on, and is trained iteratively on the image data, finally yielding a YOLO target detection neural network model for the family.
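The center-coordinate comparisons used later in the description start from the pixel centers of detected boxes. As a minimal illustrative sketch (not the patent's implementation; the function name and box convention are assumptions), a normalized YOLO-style box can be converted to pixel coordinates like this:

```python
def yolo_center_to_pixels(box, img_w, img_h):
    """Convert a normalized YOLO-style box (cx, cy, w, h), with all
    values in [0, 1], into a pixel center point and a pixel
    (x1, y1, x2, y2) bounding box."""
    cx, cy, w, h = box
    px, py = cx * img_w, cy * img_h      # box center in pixels
    bw, bh = w * img_w, h * img_h        # box size in pixels
    return (px, py), (px - bw / 2, py - bh / 2, px + bw / 2, py + bh / 2)
```

The returned center is what a downstream distance check (such as the sofa example below) would consume.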
A DNN deep neural network acoustic model is trained on the daily audio information set. The model is a multi-layer perceptron with several hidden layers; structurally it consists of an input layer, hidden layers, and an output layer. The model is trained on the acquired audio information, with additional noise training so that its recognition performance holds up in complex environments, finally yielding a DNN acoustic model for the family.
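The input-layer/hidden-layer/output-layer structure described above reduces to a plain forward pass. The following is an illustrative toy, not the patent's trained acoustic model; all names, weights, and shapes are assumptions:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def identity(v):
    return v

def dense(x, weights, bias):
    # One fully connected layer: y[i] = sum_j weights[i][j] * x[j] + bias[i]
    return [sum(w * xj for w, xj in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def mlp_forward(x, layers):
    """Run a forward pass through an input -> hidden -> output stack,
    the structure the description gives for the DNN acoustic model."""
    for weights, bias, activation in layers:
        x = activation(dense(x, weights, bias))
    return x
```

A real acoustic model would take spectral features as `x` and emit phone or word posteriors; here the point is only the layered structure.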
The YOLO target detection neural network model and the DNN deep neural network acoustic model are deployed in a family information box, and the robot is communicatively connected to the family information box.
S2, acquire real-time family information in the family scene, input it into the neural network models for prediction, and obtain a prediction result. S2 is specifically as follows: the robot patrols the living room and sends the image information and audio information collected at time t to the YOLO target detection neural network model and the DNN deep neural network acoustic model, respectively, for prediction. The prediction process is as follows: for example, at 16:00 on a certain day, the user and the sofa are identified as targets. The user's center coordinates are (x1, y1); since they do not change, the user is in a stopped-moving state. The sofa's center coordinates are (x2, y2); since they do not change either, the sofa is in its fixed, stationary position. The distance L1 between the two center coordinates is calculated; when L1 falls within a set threshold interval, for example L1 < 20 cm, the user is judged to be sitting on the sofa.
and judging whether the sound source information is human sound through the transform language model identification, and if not, judging the type of the sound source, for example, judging that the sound source is television audio, wherein the position of the sofa and the position of the television are relatively fixed, namely, the prediction result is that the user sits on the sofa to watch television.
The DNN deep neural network acoustic model consists of multiple layers of perceptrons. The collected home audio data are labeled and fed into the network, and iterative training yields acoustic recognition information suited to the home. The trained acoustic model is deployed in the family information box to identify audio in the home.
The Transformer-based speech model introduces an attention mechanism, inspired by the way people attend only to important objects: it learns only the important information in the input sequence. Acoustic information, pronunciation information, and the language model are integrated into a single neural network to form an end-to-end speech recognition model. The Transformer speech model, trained on the acoustic and pronunciation information sets, is deployed in the family information box to recognize speech in the home.
S3, construct an intelligent home control strategy according to user habits and compile it into an inference script. The inference script requires the user to set, according to the script-writing rules and their own living habits, the temperature range, humidity range, illumination range, the user's position coordinates, the ranges of other environmental sensors, and so on. The Prolog language performs logical reasoning over facts, rules, queries, and results to obtain the final control result; the edited script is uploaded to the family information box and given a script name.
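The patent compiles the strategy into a Prolog script. As a rough Python analogue of the same fact/rule/query pattern (the specific rules and thresholds here are invented for illustration, not taken from the patent):

```python
# Facts play the role of Prolog facts; each rule pairs a condition
# over the facts with the device action to take when it holds.
def infer(facts, rules):
    """Return every action whose condition is satisfied by the facts
    (the analogue of querying a Prolog rule base)."""
    return [action for condition, action in rules if condition(facts)]

rules = [
    (lambda f: f["temperature"] > 25, "turn_on_air_conditioner"),
    (lambda f: f["illumination"] < 100, "turn_on_lights"),
    (lambda f: f["scene"] == "watching_tv", "close_curtains"),
]
```

A real Prolog script would express the same rules declaratively, e.g. `action(close_curtains) :- scene(watching_tv).`, and the information box would query it at runtime.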
S4, acquire real-time environment information in the family scene, perform multi-modal fusion recognition on the prediction result and the real-time environment information through the inference script to obtain an inference result, and control the smart home devices according to the inference result. The environment information includes temperature, humidity, illumination intensity, and the like. S4 is specifically as follows: for example, at 16:00 on a certain day, the temperature sensor reads a living-room temperature of a1, the humidity sensor reads a humidity of b1, and the illumination sensor reads an illumination intensity of c1. The prediction result (the user is sitting on the sofa watching television) and the 16:00 readings (temperature a1, humidity b1, illumination intensity c1) are fused through the inference script of the intelligent home control strategy to obtain an inference result. Home devices such as lighting, curtains, and air conditioners are then controlled according to the inference result, adjusting the indoor temperature, humidity, and illumination intensity to match the user's habits while satisfying the thresholds set in the inference script: a temperature interval of 20-25 °C, a humidity interval of 30-60%, and an illumination-intensity interval of 100-300 lx. The user can then sit on the sofa and watch television in a healthy and comfortable environment.
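The threshold-driven adjustment described above can be sketched as clamping each reading into its comfort interval. The intervals are the ones given in the description; the function itself is an illustrative assumption about how the inference result maps to device setpoints:

```python
# Comfort intervals from the description's example thresholds.
THRESHOLDS = {
    "temperature":  (20.0, 25.0),    # degrees Celsius
    "humidity":     (30.0, 60.0),    # percent relative humidity
    "illumination": (100.0, 300.0),  # lux
}

def control_targets(readings):
    """For each sensor reading, return the nearest value inside its
    comfort interval; in-range readings need no adjustment."""
    targets = {}
    for key, value in readings.items():
        low, high = THRESHOLDS[key]
        targets[key] = min(max(value, low), high)
    return targets
```

The air conditioner, curtains, and lights would then be driven toward these target values.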
Referring to fig. 2, the robot is in communication connection with a home information box, the home information box is connected with a router, the router is connected with a plurality of wireless APs (Access points), each wireless AP is connected with a sensor, and the sensors are deployed in a home scene.
According to the intelligent home control method based on multi-modal recognition, the neural network model adaptively predicts the real-time family information, so that an optimal prediction result for the family scene is obtained; the intelligent home control strategy is compiled into an inference script that reasons over the optimal prediction result and the real-time environment information. Highly accurate control information for the smart home devices is thus obtained, the automatically controlled devices cooperate with one another to perform their functions, and the human-computer interaction experience between the user and the smart home devices is improved.
Example 2
S2, acquire real-time family information in the family scene, input it into the neural network models for prediction, and obtain a prediction result. S2 is specifically as follows: the robot patrols the living room and sends the image information and audio information collected at time t to the YOLO target detection neural network model and the DNN deep neural network acoustic model, respectively, for prediction. The prediction process is as follows: for example, at 16:00 on a certain day, the recognition target is the user. If the user's coordinates do not change, the user has stopped moving in the living room; if the television's coordinates are fixed, the television is stationary. Whether the sound source is a human voice is judged through Transformer language model recognition; if no human voice is recognized, the type of the sound source is judged, for example television audio. The prediction result is then that the user is in the living room and the television is playing.
S4, acquire real-time environment information in the family scene, perform multi-modal fusion recognition on the prediction result and the real-time environment information through the inference script to obtain an inference result, and control the smart home devices according to the inference result. The environment information includes temperature, humidity, illumination intensity, and the like. S4 is specifically as follows: for example, at 16:00 on a certain day, the temperature sensor reads a living-room temperature of a2, the humidity sensor reads a humidity of b2, and the illumination sensor reads an illumination intensity of c2. The user's actual position is additionally located by an infrared sensor on the television. The infrared sensor has a transmitting end and a receiving end; its working principle is that when an object blocks the infrared light emitted by the transmitting tube, the receiving tube cannot receive it, which triggers an action signal. Whether the infrared light of the sensor beside the television is blocked therefore indicates the user's actual position. At this time, the user's center coordinates predicted by the YOLO target detection neural network model are (x3, y3) and the television's center coordinates are (x4, y4). The distance L2 between them is calculated; when L2 falls within a set threshold interval, for example L2 < 200 cm, the user is judged to be watching television beside the television.
The prediction result (the user is in the living room and the television is playing), the distance L2 between the user's center coordinates and the television's center coordinates, the blocked infrared light of the sensor beside the television, and the 16:00 readings (temperature a2, humidity b2, illumination intensity c2) are then fused through the inference script of the intelligent home control strategy to obtain an inference result. Home devices such as lighting, curtains, and air conditioners are controlled according to the inference result, adjusting the indoor temperature, humidity, and illumination intensity to match the user's habits while satisfying the thresholds set in the inference script: a temperature interval of 20-25 °C, a humidity interval of 30-60%, and an illumination-intensity interval of 100-300 lx. The user can then watch television beside the television in a healthy and comfortable environment.
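Embodiment 2's fusion of the infrared presence check with the YOLO distance L2 can be sketched as a two-stage boolean test (a simplified illustration; the function name and signature are assumptions):

```python
import math

def watching_tv_nearby(ir_blocked, user_xy, tv_xy, max_cm=200.0):
    """Embodiment 2's check: the infrared beam beside the television
    confirms someone is physically present, and the YOLO distance L2
    confirms they are within viewing range (L2 < 200 cm)."""
    l2 = math.hypot(user_xy[0] - tv_xy[0], user_xy[1] - tv_xy[1])
    return ir_blocked and l2 < max_cm
```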
The invention also provides an intelligent home control system based on multi-modal recognition; when executed, the system implements the above intelligent home control method based on multi-modal recognition. Referring to fig. 3, it specifically comprises
A model training module: acquiring a daily family information set in a family scene, and constructing a neural network model according to the daily family information set; the family information comprises image information and audio information (including videos, photos, sound recordings, and the like), and a YOLO target detection neural network model is trained according to the image information set; and training a DNN deep neural network acoustic model according to the audio information set.
A model prediction module: acquiring real-time family information in a family scene, inputting the real-time family information into the neural network model for prediction, and acquiring a prediction result; the environmental information includes temperature, coordinates, humidity, illumination intensity, and the like.
The script compiling module: constructing an intelligent home control strategy according to user habits, and compiling the intelligent home control strategy into an inference script; the reasoning script is a Prolog language script.
The reasoning control module: acquiring real-time environment information in a family scene, performing multi-mode fusion recognition on the prediction result and the real-time environment information through a reasoning script to obtain a reasoning result, and controlling the intelligent home equipment according to the reasoning result.
The invention also provides a computer storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the steps of the intelligent home control method based on multi-modal recognition are realized.
The storage medium stores program instructions capable of implementing all the methods described above. The program instructions may be stored in the storage medium in the form of a software product and include instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, as well as terminal devices such as a computer, a server, a mobile phone, or a tablet.
The processor may also be referred to as a CPU (Central Processing Unit). The processor may be an integrated circuit chip having signal processing capabilities. The processor may be:
A DSP (Digital Signal Processor) is a processor composed of large-scale or very-large-scale integrated circuit chips for performing specific signal-processing tasks; it developed gradually to meet the needs of high-speed real-time signal processing.
An ASIC (Application Specific Integrated Circuit) is an integrated circuit designed and manufactured to the requirements of a specific user and a specific electronic system.
An FPGA (Field Programmable Gate Array) is a further development of programmable devices such as PAL (Programmable Array Logic) and GAL (Generic Array Logic). It is a semi-custom circuit in the application-specific integrated circuit (ASIC) field that overcomes the shortcomings of fully custom circuits while avoiding the limited gate count of earlier programmable devices.
A general purpose processor, which may be a microprocessor or the processor may be any conventional processor or the like.
The above embodiments are merely illustrative of the technical concept and features of the present invention, and the present invention is not limited thereto, and any equivalent changes or modifications made according to the spirit of the present invention should be included in the scope of the present invention.
Claims (9)
1. The intelligent home control method based on multi-modal recognition is characterized by comprising the following steps:
acquiring a daily family information set in a family scene, and constructing a neural network model according to the daily family information set;
acquiring real-time family information in a family scene, inputting the real-time family information into the neural network model for prediction, and acquiring a prediction result;
constructing an intelligent home control strategy according to user habits, and compiling the intelligent home control strategy into an inference script;
acquiring real-time environment information in a family scene, performing multi-mode fusion recognition on the prediction result and the real-time environment information through a reasoning script to obtain a reasoning result, and controlling the intelligent home equipment according to the reasoning result.
2. The smart home control method based on multi-modal recognition as claimed in claim 1, wherein the family information comprises image information and audio information, and a YOLO target detection neural network model is trained according to the image information set; and training a DNN deep neural network acoustic model according to the audio information set.
3. The smart home control method based on multi-modal recognition as recited in claim 1, wherein the environmental information comprises temperature, coordinates, humidity, and illumination intensity.
4. The smart home control method based on multi-modal recognition as recited in claim 1, wherein the inference script is a Prolog language script.
5. The intelligent home control system based on multi-mode recognition is characterized by comprising
a model training module, for acquiring a daily family information set in a family scene and constructing a neural network model from the daily family information set;
a model prediction module, for acquiring real-time family information in the family scene, inputting the real-time family information into the neural network model for prediction, and obtaining a prediction result;
a script compiling module, for constructing a smart home control strategy according to user habits and compiling the smart home control strategy into an inference script;
an inference control module, for acquiring real-time environment information in the family scene, performing multi-modal fusion recognition on the prediction result and the real-time environment information through the inference script to obtain an inference result, and controlling smart home devices according to the inference result.
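The four claimed modules can be wired together schematically as below. The class, method, and parameter names are illustrative only, and the trained model and rule base are replaced by trivial placeholders.

```python
# Schematic wiring of the four claimed system modules; everything here is
# a hypothetical stand-in, not an implementation from the patent.

class SmartHomeController:
    def train_models(self, daily_info: dict) -> None:
        # model training module: placeholder "model" instead of YOLO/DNN training
        self.model = lambda info: {"activity": info.get("activity", "unknown")}

    def predict(self, realtime_info: dict) -> dict:
        # model prediction module
        return self.model(realtime_info)

    def compile_strategy(self, habits: dict) -> None:
        # script compiling module: user habits compiled into rules
        self.rules = [
            (lambda p, e: e["illumination"] < habits["light_threshold"], "lights_on"),
        ]

    def control(self, prediction: dict, environment: dict) -> list:
        # inference control module: fuse prediction with environment data
        return [a for cond, a in self.rules if cond(prediction, environment)]

ctrl = SmartHomeController()
ctrl.train_models({"activity": "reading"})
ctrl.compile_strategy({"light_threshold": 60})
pred = ctrl.predict({"activity": "reading"})
print(ctrl.control(pred, {"illumination": 40, "temperature": 24}))  # ['lights_on']
```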
6. The smart home control system based on multi-modal recognition as recited in claim 5, wherein the family information comprises image information and audio information; a YOLO target detection neural network model is trained on the image information set, and a DNN deep neural network acoustic model is trained on the audio information set.
7. The smart home control system based on multi-modal recognition as recited in claim 5, wherein the environmental information comprises temperature, coordinates, humidity, and illumination intensity.
8. The smart home control system based on multi-modal recognition as recited in claim 5, wherein the inference script is a Prolog language script.
9. A computer-readable storage medium, characterized in that a computer program is stored thereon, and when executed by a processor, the computer program implements the steps of the smart home control method based on multi-modal recognition according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211181832.9A CN115542759A (en) | 2022-09-27 | 2022-09-27 | Intelligent household control method, system and storage medium based on multi-mode recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115542759A true CN115542759A (en) | 2022-12-30 |
Family
ID=84730394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211181832.9A Pending CN115542759A (en) | 2022-09-27 | 2022-09-27 | Intelligent household control method, system and storage medium based on multi-mode recognition |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116400601A (en) * | 2023-05-22 | 2023-07-07 | 深圳普菲特信息科技股份有限公司 | Scene self-adaptive control method, system and storage medium for environment change equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||