CN112558778A - User action recognition control method and device under multi-terminal environment and user terminal

User action recognition control method and device under multi-terminal environment and user terminal

Info

Publication number
CN112558778A
Authority
CN
China
Prior art keywords
user
terminal
information
current control
imaging
Prior art date
Legal status
Pending
Application number
CN202011509725.5A
Other languages
Chinese (zh)
Inventor
刘旭
李辉光
唐政清
李涛
李治成
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai
Priority to CN202011509725.5A
Publication of CN112558778A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 15/00 Systems controlled by a computer
    • G05B 15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/20 Pc systems
    • G05B 2219/26 Pc applications
    • G05B 2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application relates to a user action recognition control method and device in a multi-terminal environment, and to a user terminal, and belongs to the technical field of user action recognition control in multi-terminal environments. The method includes: when a terminal detects a user, determining the terminal closest to the user to serve as the current control terminal, the current control terminal acquiring the action recognition control right over other devices; and the current control terminal parsing and recognizing the user's action information and performing control according to it. With this method and device, any terminal can serve as a control terminal for other devices, making user action recognition control more convenient.

Description

User action recognition control method and device under multi-terminal environment and user terminal
Technical Field
The application belongs to the technical field of user action recognition control in a multi-terminal environment, and particularly relates to a user action recognition control method and device in a multi-terminal environment and a user terminal.
Background
In the related art, motion recognition control techniques for controlling devices are gradually emerging. Taking smart homes as an example, in the related art, motion recognition control of a smart home appliance must be performed within the appliance's detection range; as the user moves and leaves that range, the user can no longer control the appliance by motion recognition. For example, after moving to room A, the user cannot perform motion recognition control on a device in room B.
Disclosure of Invention
To overcome the problems in the related art at least to some extent, the application provides a user action recognition control method and device in a multi-terminal environment, and a user terminal, with the aim of allowing any terminal to serve as a control terminal for other devices, so that user action recognition control becomes more convenient.
To achieve this purpose, the application adopts the following technical solutions:
In a first aspect,
the application provides an interactive recognition control method in a multi-terminal environment, which includes the following steps:
when a terminal detects a user, determining a terminal closest to the user to serve as a current control terminal, wherein the current control terminal obtains the action recognition control right of other equipment;
and the current control terminal analyzes and identifies the action information of the user and controls according to the action information.
Further, the determining a terminal closest to the user includes:
and determining the terminal closest to the user according to ultrasonic or infrared detection.
Further, the method further comprises:
and when the current control terminal is determined, keeping the other terminals in silent detection.
Further, the method further comprises:
and the current control terminal also carries out biological feature verification on the user, analyzes and identifies the action information of the user after the verification is passed, and controls according to the action information.
Further, the biometric features include: fingerprint features or facial features.
Further, the action information includes: gesture motion information.
Further, the current control terminal performing control according to the action information includes:
the current control terminal performs instruction matching according to the parsed action information, and when a matching result exists, determines the controlled device according to the matched instruction and sends the matched instruction to the controlled device.
Further, the method further comprises:
and the current control terminal receives the information returned by the controlled equipment and displays the information.
Further, the parsing and recognizing, by the current control terminal, of the action information of the user includes:
respectively obtaining imaging information of the user through ultrasonic detection and infrared detection;
and analyzing and identifying according to the imaging information of the user respectively obtained by ultrasonic detection and infrared detection to obtain the action information.
Further, the analyzing and recognizing according to the imaging information of the user respectively obtained by ultrasonic and infrared detection to obtain the action information includes:
analyzing the imaging of the relevant user part separately from the imaging information of the user obtained by ultrasonic detection and by infrared detection, then synthesizing the two analysis results to obtain a synthesized image, and parsing the action information of the user from the synthesized image.
Further, the synthesizing of the two analysis results includes:
performing image repair and synthesis processing based on the action contour information of the user part in the infrared imaging, combined with the enhancement information of the user part in the ultrasonic imaging.
Further, the enhancement information includes: color depth, brightness, and sharpness.
In a second aspect,
the application provides an interactive recognition control apparatus in a multi-terminal environment, including:
a determining module, configured to determine, when a terminal detects a user, the terminal closest to the user as the current control terminal, the current control terminal obtaining the action recognition control right of other equipment;
and a parsing control module, configured to parse and recognize, at the current control terminal, the action information of the user and to perform control according to the action information.
In a third aspect,
the application provides a user terminal, including:
one or more memories having executable programs stored thereon;
one or more processors configured to execute the executable program in the memory to implement the steps of any of the methods described above.
By adopting the above technical solutions, the application has at least the following beneficial effects:
The application discloses interactive recognition control in a multi-terminal environment in which every terminal has a human body detection function. When a terminal detects a user, the terminal closest to that user is determined and serves as the current control terminal; the current control terminal then parses and recognizes the user's action information and performs control according to it. With this scheme, any terminal can serve as a control terminal for other devices: as the user moves, whichever terminal is currently closest to the user obtains the action recognition control right over the other devices, and the user does not need to return to the original action recognition terminal. User action recognition control therefore becomes more convenient.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating a method for interactive recognition control in a multi-terminal environment, according to an exemplary embodiment;
FIG. 2 is a block diagram illustrating an interactive recognition control apparatus in a multi-terminal environment according to an exemplary embodiment;
fig. 3 is a block diagram illustrating a user terminal according to an example embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions of the present application are described in detail below. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating an interactive recognition control method in a multi-terminal environment according to an exemplary embodiment, where as shown in fig. 1, the method includes the following steps:
step S101, when a terminal detects a user, determining a terminal closest to the user to serve as a current control terminal, and acquiring the action recognition control right of other equipment by the current control terminal;
and S102, the current control terminal analyzes and identifies the action information of the user and controls according to the action information.
Specifically, in practical applications of the method, each terminal has a human body detection function, can perform action interaction recognition control, and can communicate with the other terminals. Taking a smart home as an example, each terminal may be a smart home appliance (a television, air conditioner, refrigerator, etc.) distributed around the home or one of the user's smart electronic products (e.g., a mobile phone or a PAD), and the smart home appliances and smart electronic products may form a local area communication network. Each terminal performs silent detection while in operation, and as the user moves, any terminal may detect the user at its own location. In the related art, the user must stay within a smart home appliance's detection range to control it; for example, action recognition control of the living room television must be performed within that television's detection range, and after entering the bedroom the user can no longer control it. To make action recognition control more convenient, when a terminal detects the user, the method determines the terminal closest to the user as the current control terminal. For example, when the user moves from the living room to the bedroom, the living room terminals no longer detect the user while the bedroom terminals do, which triggers determination of the closest terminal; a terminal in the bedroom is thus determined to be closest to the user and becomes the current control terminal, obtaining the action recognition control right over other devices. Suppose the bedroom air conditioner is determined to be the current control terminal and obtains the action recognition control right over the living room television: the user can make television-related interactive actions toward the bedroom air conditioner, which parses and recognizes the user's action information and controls the living room television accordingly. For instance, when the user makes an interactive action toward the bedroom air conditioner meaning 'turn up the television volume', the air conditioner parses the action and controls the living room television to turn its volume up, so the user can hear the television from the bedroom without returning to the living room television to make that action.
The current control terminal obtains the action recognition control right over other devices; in practical applications, the other devices may be other terminals or devices controlled by other terminals.
With this scheme, any terminal can serve as a control terminal for other devices. As the user moves, whichever terminal is closest to the user obtains the action recognition control right over the other devices, so the user does not need to return to the original action recognition terminal, and action recognition control becomes more convenient.
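To make the handoff concrete, here is a minimal sketch of the closest-terminal selection in Python; the Terminal class, its simulated ranging field, and the function name are illustrative assumptions, not part of the patent's disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Terminal:
    name: str
    # Simulated ranging result: distance to the user in meters, or None
    # when the user is outside this terminal's detection range.
    distance_to_user: Optional[float] = None

def select_current_control_terminal(terminals: List[Terminal]) -> Optional[Terminal]:
    """Return the terminal closest to the detected user; in the scheme above,
    this terminal acquires the action recognition control right, while all
    other terminals simply remain in silent detection."""
    in_range = [t for t in terminals if t.distance_to_user is not None]
    if not in_range:
        return None  # no terminal currently detects the user
    return min(in_range, key=lambda t: t.distance_to_user)

# Example: the user has walked from the living room into the bedroom.
terminals = [
    Terminal("living-room TV"),                  # user out of range
    Terminal("bedroom air conditioner", 1.2),
    Terminal("bedroom phone", 2.5),
]
current = select_current_control_terminal(terminals)
print(current.name)  # -> bedroom air conditioner
```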
In one embodiment, the action information includes gesture motion information, and may also include other limb motions, facial motions, and the like.
In one embodiment, determining the closest terminal to the user comprises:
and determining the terminal closest to the user according to ultrasonic or infrared detection.
Specifically, a terminal may measure its distance to the user using ultrasonic waves or infrared light; once the distance from the user to each terminal is determined, the terminal closest to the user can be identified.
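As an illustration of the ranging step, an ultrasonic distance measurement reduces to a time-of-flight calculation; the speed-of-sound constant and the sample echo time below are assumptions for the sketch.

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 degrees Celsius

def ultrasonic_distance_m(round_trip_s: float) -> float:
    """Convert an ultrasonic echo's round-trip time into a distance:
    the pulse travels to the user and back, hence the division by two."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_s / 2.0

# An echo returning after 7 ms puts the user about 1.2 m away.
print(round(ultrasonic_distance_m(0.007), 2))
```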
In one embodiment, the method further comprises:
and when the current control terminal is determined, keeping the silence detection of other terminals.
Specifically, when the current control terminal is determined, the other terminals remain in silent detection and wait for the user to move, after which the terminal closest to the user is re-determined to serve as the current control terminal and obtain the action recognition control right over other devices.
In one embodiment, the method further comprises:
the current control terminal also carries out biological feature verification on the user, analyzes and identifies the action information of the user after the verification is passed, and controls according to the action information.
Specifically, in some application scenarios the controlled device may be operated only after permission has been verified; for example, the information in the controlled device may be important and obtainable only with a certain permission. Accordingly, in the present application, after the current control terminal obtains the action recognition control right over other devices, it first verifies during recognition control whether the user is an authorized user; only after the verification passes does it parse and recognize the user's action information and perform control according to it.
Further, the biometric features include: fingerprint features, or, facial features.
Specifically, the biometric features include fingerprint features, facial features, and the like, and they may be verified by ultrasonic or infrared detection, for example ultrasonic fingerprint verification or infrared fingerprint verification.
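The gating logic can be sketched as follows; the enrolled-template set, hash strings, and function names are hypothetical stand-ins for whatever fingerprint or face matcher the terminal actually uses.

```python
# Hypothetical set of enrolled biometric templates (e.g., fingerprint hashes).
AUTHORIZED_TEMPLATES = {"a1b2c3"}

def verify_biometric(template: str) -> bool:
    """Stand-in for ultrasonic/infrared fingerprint or face verification."""
    return template in AUTHORIZED_TEMPLATES

def recognize_and_control(template: str, gesture: str) -> str:
    # Action recognition runs only after biometric verification passes.
    if not verify_biometric(template):
        return "ignored: biometric verification failed"
    return f"executing command matched to gesture '{gesture}'"

print(recognize_and_control("a1b2c3", "volume-up"))  # authorized: executes
print(recognize_and_control("zzzzzz", "volume-up"))  # unauthorized: ignored
```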
In one embodiment, the current control terminal performing control according to the action information includes:
the current control terminal performs instruction matching according to the parsed action information, and when a matching result exists, determines the controlled device according to the matched instruction and sends the matched instruction to the controlled device.
Specifically, after the user makes an interactive action toward the current control terminal, the terminal performs instruction matching according to the parsed action information. For example, being networked with the other terminals, the current control terminal may traverse each terminal's interactive action database to match the action information against instructions; when there is a matching result, it can determine the corresponding controlled device and send the matched instruction to that device, which then executes it.
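A minimal sketch of this matching step, assuming hypothetical per-terminal gesture databases reachable over the local network:

```python
# Hypothetical interactive-action databases, one per networked terminal,
# mapping recognized gestures to device instructions.
ACTION_DATABASES = {
    "living-room TV": {"volume-up": "TV_VOLUME_UP", "volume-down": "TV_VOLUME_DOWN"},
    "bedroom air conditioner": {"swipe-up": "AC_TEMP_UP", "swipe-down": "AC_TEMP_DOWN"},
}

def match_instruction(gesture: str):
    """Traverse every terminal's database; return (controlled_device,
    instruction) on the first match, or None when nothing matches."""
    for device, database in ACTION_DATABASES.items():
        if gesture in database:
            return device, database[gesture]
    return None

match = match_instruction("volume-up")
if match is not None:
    device, instruction = match
    print(f"send {instruction} to {device}")  # send TV_VOLUME_UP to living-room TV
```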
In one embodiment, the method further comprises:
and the current control terminal receives the information returned by the controlled equipment and displays the information.
Specifically, when the current control terminal has a display function, information from the controlled device can be shown on its display interface. For example, after the user makes an interactive action toward the current control terminal, the terminal performs instruction matching on the parsed action information, determines the controlled device according to the matched instruction, and sends the instruction to it; the controlled device executes the instruction and returns related information to the current control terminal for the user's confirmation, for example displaying the television's currently regulated volume on the air conditioner's display screen.
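The confirmation step might render the controlled device's reply like this; the reply fields are an assumed message format, since the patent does not specify one.

```python
def format_device_reply(reply: dict) -> str:
    """Render the controlled device's confirmation on the current control
    terminal's display (assumed message fields)."""
    return f"{reply['device']}: {reply['status']} = {reply['value']}"

# e.g. the television confirms its new volume on the air conditioner's screen
print(format_device_reply({"device": "living-room TV", "status": "volume", "value": 18}))
```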
In the related art, the user's action information may be recognized by ultrasonic or infrared detection. However, using ultrasonic or infrared imaging alone to recognize user action information yields a low recognition success rate. Specifically, for recognizing user action information from infrared imaging, the definition is not high and long-range recognition is poor; although imaging is clearer in weak light, the imaging colors are not distinct enough. For recognizing user action information from ultrasonic imaging, the ultrasonic signal carries considerable noise, so the recognition effect is poor.
To improve the recognition success rate for action information, particularly in conditions where weak light lowers the success rate, the application provides the following scheme:
the current control terminal analyzes and identifies the action information of the user, and the method comprises the following steps:
respectively obtaining imaging information of a user through ultrasonic detection and infrared detection;
and analyzing and identifying according to the imaging information of the user respectively obtained by ultrasonic detection and infrared detection to obtain action information.
Specifically, the method combines ultrasonic and infrared imaging to derive the user's action information, which improves the success rate of user action information recognition, particularly when the light intensity is weak. At the same time, visible-light imaging by an ordinary camera easily leaks user information and offers low information security, whereas combining ultrasonic and infrared imaging requires certain additional algorithmic computation in hardware before the image can be displayed normally; even if the stored data leak, they cannot be parsed and imaged without the corresponding special decoding hardware and decoding algorithm. In terms of information security, this approach is therefore far superior to conventional visible-light camera image recognition.
In one embodiment, the analyzing and recognizing according to the imaging information of the user obtained respectively by ultrasonic and infrared detection to obtain the action information includes:
analyzing the imaging of the relevant user part separately from the imaging information of the user obtained by ultrasonic detection and by infrared detection, then synthesizing the two analysis results to obtain a synthesized image, and parsing the action information of the user from the synthesized image.
That is, images of the relevant user part are first analyzed separately from the ultrasonic and infrared imaging information; the two analysis results are then synthesized into one image, and the user's action information is parsed from that synthesized image.
Further, the synthesizing of the two analysis results includes:
performing image repair and synthesis processing based on the action contour information of the user part in the infrared imaging, combined with the enhancement information of the user part in the ultrasonic imaging.
Further, the enhancement information includes: color depth, brightness, and sharpness.
Specifically, infrared imaging has distinct contours and is clearer particularly in weak light; its shortcoming is that the imaging colors are not distinct enough, which can be compensated by the enhancement information (color depth, brightness, sharpness, and so on) provided by ultrasonic imaging. Conversely, ultrasonic imaging is easily disturbed by ambient noise at long range, causing image distortion, which can be compensated by the distinct contours of infrared imaging. Therefore, on the basis of the infrared image, the enhancement information from ultrasonic imaging is combined in: abnormal data with obvious deviation can be eliminated by computing the mean, median, and the like of adjacent pixel points, while color restoration and image synthesis are performed by cross-analyzing and comparing the imaging frame information according to the relation between the variance and the mean and median of adjacent pixel points, yielding a synthesized image from which the action information of the user is then parsed.
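As a rough numerical illustration of this fusion step, and not the patent's exact algorithm (whose formulas are not disclosed), the sketch below takes the infrared frame as the structural base, treats ultrasonic pixels that deviate strongly from their 3x3 neighborhood median as the "abnormal data" to be eliminated, and blends the cleaned ultrasonic intensity in as enhancement; the array shapes, threshold, and blending weight are all assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def fuse_ir_ultrasound(ir: np.ndarray, us: np.ndarray,
                       outlier_thresh: float = 3.0,
                       enhance_weight: float = 0.4) -> np.ndarray:
    """Fuse an infrared frame (distinct contours) with an ultrasonic frame
    (extra intensity detail, but noisy); both are float arrays in [0, 1]."""
    # Eliminate abnormal ultrasonic data: pixels far from their 3x3
    # neighborhood median (relative to the mean deviation) are treated
    # as noise and replaced by that local median.
    local_median = median_filter(us, size=3)
    deviation = np.abs(us - local_median)
    scale = deviation.mean() + 1e-8  # avoid division by zero on flat frames
    cleaned = np.where(deviation / scale > outlier_thresh, local_median, us)
    # Blend: infrared supplies the action contours, ultrasound supplies
    # the enhancement (a simple weighted combination for illustration).
    return np.clip((1.0 - enhance_weight) * ir + enhance_weight * cleaned, 0.0, 1.0)

# Tiny demo on random frames standing in for real captures.
rng = np.random.default_rng(0)
ir_frame = rng.random((64, 64))
us_frame = rng.random((64, 64))
fused = fuse_ir_ultrasound(ir_frame, us_frame)
print(fused.shape, float(fused.min()), float(fused.max()))
```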
Referring to fig. 2, fig. 2 is a block diagram illustrating a schematic structure of an interactive recognition control apparatus in a multi-terminal environment according to an exemplary embodiment, and as shown in fig. 2, the interactive recognition control apparatus 2 in the multi-terminal environment includes:
a determining module 201, configured to determine, when a terminal detects a user, a terminal closest to the user to serve as a current control terminal;
and the analysis control module 202 is used for analyzing and identifying the action information of the user by the current control terminal and controlling according to the action information.
Further, in the determining module 201, determining a terminal closest to the user includes:
and determining the terminal closest to the user according to ultrasonic or infrared detection.
Further, the determining module 201 is further configured to:
and when the current control terminal is determined, keeping the silence detection of other terminals.
Further, the parsing control module 202 is further configured to:
the current control terminal also carries out biological feature verification on the user, analyzes and identifies the action information of the user after the verification is passed, and controls according to the action information.
Further, the biometric features include: fingerprint features, or, facial features.
Further, the action information includes: gesture motion information.
Further, in the parsing control module 202, the current control terminal performing control according to the action information includes:
the current control terminal performs instruction matching according to the parsed action information, and when a matching result exists, determines the controlled device according to the matched instruction and sends the matched instruction to the controlled device.
Further, the parsing control module 202 is further configured to:
and the current control terminal receives the information returned by the controlled equipment and displays the information.
Further, in the parsing control module 202, the current control terminal parsing and recognizing the action information of the user includes:
respectively obtaining imaging information of a user through ultrasonic detection and infrared detection;
and analyzing and identifying according to the imaging information of the user respectively obtained by ultrasonic detection and infrared detection to obtain action information.
Further, the analyzing and recognizing according to the imaging information of the user respectively obtained by ultrasonic and infrared detection to obtain the action information includes:
analyzing the imaging of the relevant user part separately from the imaging information of the user obtained by ultrasonic detection and by infrared detection, then synthesizing the two analysis results to obtain a synthesized image, and parsing the action information of the user from the synthesized image.
Further, the synthesizing of the two analysis results includes:
performing image repair and synthesis processing based on the action contour information of the user part in the infrared imaging, combined with the enhancement information of the user part in the ultrasonic imaging.
Further, the enhancement information includes: color depth, brightness, and sharpness.
With regard to the interactive recognition control device 2 in the multi-terminal environment in the above embodiment, the specific manner in which each module performs operations has been described in detail in the above embodiment of the method, and will not be described in detail here.
Referring to fig. 3, fig. 3 is a block diagram illustrating a user terminal according to an exemplary embodiment, and as shown in fig. 3, the user terminal 3 includes:
one or more memories 301 having executable programs stored thereon;
one or more processors 302 for executing the executable programs in the memory 301 to implement the steps of any of the methods described above.
Specifically, the user terminal 3 has a human body interaction detection function and a networking function. Taking a smart home as an example, the user terminal 3 may be any of various smart appliances (such as a television, an air conditioner, or a refrigerator) or any of the user's smart electronic products (such as a mobile phone or a PAD).
With respect to the user terminal 3 in the above embodiment, the specific manner of executing the program in the memory 301 by the processor 302 thereof has been described in detail in the above embodiment related to the method, and will not be elaborated herein.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, the meaning of "plurality" means at least two unless otherwise specified.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or intervening elements may also be present; when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present, and further, as used herein, connected may include wirelessly connected; the term "and/or" is used to include any and all combinations of one or more of the associated listed items.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware that is related to instructions of a program, and the program may be stored in a computer-readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (14)

1. An interactive identification control method under a multi-terminal environment is characterized by comprising the following steps:
when a terminal detects a user, determining a terminal closest to the user to serve as a current control terminal, wherein the current control terminal obtains the action recognition control right of other equipment;
and the current control terminal analyzes and identifies the action information of the user and controls according to the action information.
2. The method of claim 1, wherein the determining the terminal closest to the user comprises:
and determining the terminal closest to the user according to ultrasonic or infrared detection.
3. The method of claim 1, further comprising:
and when the current control terminal is determined, keeping the other terminals in silent detection.
4. The method of claim 1, further comprising:
and the current control terminal also carries out biological feature verification on the user, analyzes and identifies the action information of the user after the verification is passed, and controls according to the action information.
5. The method of claim 4, wherein the biometric features comprise: fingerprint features or facial features.
6. The method of claim 1, wherein the action information comprises: gesture motion information.
7. The method of claim 1, wherein the controlling of the current control terminal according to the action information comprises:
and the current control terminal performs instruction matching according to the analyzed action information, and determines the controlled equipment according to the matched instruction and sends the matched instruction to the controlled equipment when a matching result exists.
8. The method of claim 7, further comprising:
and the current control terminal receives the information returned by the controlled equipment and displays the information.
9. The method according to any one of claims 1-8, wherein the parsing and recognizing, by the current control terminal, of the action information of the user comprises:
respectively obtaining imaging information of the user through ultrasonic detection and infrared detection;
and analyzing and identifying according to the imaging information of the user respectively obtained by ultrasonic detection and infrared detection to obtain the action information.
10. The method of claim 9, wherein the analyzing and identifying the user's imaging information obtained from the ultrasonic and infrared detection respectively to obtain the motion information comprises:
and respectively analyzing the imaging related to the user part according to the imaging information of the user respectively obtained by ultrasonic detection and infrared detection, then synthesizing the two analysis results to obtain a synthesized imaging, and analyzing the action information of the user according to the synthesized imaging.
11. The method according to claim 10, wherein the synthesizing the two analysis results comprises:
and performing imaging repair synthesis processing based on the action contour information of the user part in the infrared imaging and by combining the enhancement information of the user part in the ultrasonic imaging.
12. The method of claim 11, wherein the enhancement information comprises: color depth, brightness, and sharpness.
13. An interactive recognition control device under a multi-terminal environment, comprising:
the device comprises a determining module, a judging module and a judging module, wherein the determining module is used for determining a terminal closest to a user as a current control terminal when the terminal detects the user, and the current control terminal obtains the action identification control right of other equipment;
and the analysis control module is used for analyzing and identifying the action information of the user by the current control terminal and controlling according to the action information.
14. A user terminal, comprising:
one or more memories having executable programs stored thereon;
one or more processors configured to execute the executable program in the memory to implement the steps of the method of any one of claims 1-12.
Application CN202011509725.5A, filed 2020-12-18: User action recognition control method and device under multi-terminal environment and user terminal (status: Pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011509725.5A 2020-12-18 2020-12-18 User action recognition control method and device under multi-terminal environment and user terminal

Publications (1)

Publication Number Publication Date
CN112558778A 2021-03-26

Family

ID=75031875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011509725.5A 2020-12-18 2020-12-18 User action recognition control method and device under multi-terminal environment and user terminal (Pending)

Country Status (1)

Country Link
CN (1) CN112558778A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
WO2003027942A1 * 2001-09-28 2003-04-03 Bellsouth Intellectual Property Corporation Gesture activated home appliance
US20170192513A1 * 2015-12-31 2017-07-06 Microsoft Technology Licensing, Llc Electrical device for hand gestures detection
CN105510787A * 2016-01-26 2016-04-20 国网上海市电力公司 Portable ultrasonic, infrared and ultraviolet detector based on image synthesis technology
CN105738779A * 2016-01-26 2016-07-06 国网上海市电力公司 Partial discharge detection method based on multi-source image fusion
US20200110928A1 * 2018-10-09 2020-04-09 Midea Group Co., Ltd. System and method for controlling appliances using motion gestures
CN110989423A * 2019-11-13 2020-04-10 珠海格力电器股份有限公司 Method, device, terminal and computer readable medium for controlling multiple intelligent devices


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination