CN112180849A - Sign language control method, computer equipment and readable storage medium

Sign language control method, computer equipment and readable storage medium

Info

Publication number
CN112180849A
Authority
CN
China
Prior art keywords
control
control strategy
sign language
target object
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010885965.9A
Other languages
Chinese (zh)
Inventor
葛友杰 (Ge Youjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingluo Intelligent Technology Co Ltd
Original Assignee
Xingluo Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xingluo Intelligent Technology Co Ltd filed Critical Xingluo Intelligent Technology Co Ltd
Priority to CN202010885965.9A
Publication of CN112180849A
Legal status: Pending

Classifications

    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B15/02 Systems controlled by a computer, electric
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house

Abstract

The invention discloses a sign language control method, comprising the following steps: analyzing a video image acquired in real time and, when a trigger condition is met, detecting and recording the sign language input; parsing the content of the input gestures, and determining a target object and a control strategy; acquiring current environmental parameters; and judging the reasonableness of the control strategy in combination with the acquired environmental parameters and historical control records, and executing the control strategy on the target object when the strategy is judged reasonable. Compared with the prior art: first, the added trigger-condition check largely avoids false detection of gesture information; second, parsing the user's gesture input into both a target object and a control strategy makes the user's operating intention explicit; third, the parsed control strategy is checked against environmental parameters acquired in real time and historical control records, ensuring that it meets the reasonableness requirement.

Description

Sign language control method, computer equipment and readable storage medium
Technical Field
The invention relates to the field of smart home control, and in particular to a sign language control method, a computer device, and a readable storage medium.
Background
With the continuous improvement of living standards, the ideas of intelligent, technology-driven living have taken hold. In recent years, major manufacturers have rushed into the smart home field, aiming to provide users with home solutions that match modern aesthetics and are convenient to control. A smart home connects the networked devices in a household through Internet of Things technology and enables linked control across those devices. In addition, existing smart homes support instructions in voice form, offering users a highly convenient control channel and accurate, efficient service.
However, voice input is not suitable for people with speech impairments (such as deaf-mute users or people who cannot speak for physical reasons), and enabling this group to enjoy the convenience of the smart home is an urgent problem for smart home manufacturers. Some smart home products on the market already offer gesture recognition, but they process only a single gesture action and generate a simple home control instruction from it. They cannot effectively recognize sign language input, and the recognition and control process lacks any intelligent judgment of whether the resulting control scheme is reasonable. Existing gesture recognition schemes also suffer from false detection and low accuracy.
Disclosure of Invention
To solve the above technical problems, the present invention provides a sign language control method, a computer device, and a readable storage medium.
According to an aspect of the present invention, there is provided a sign language control method, including:
analyzing a video image acquired in real time and, when a trigger condition is met, detecting and recording the sign language input;
parsing the content of the input gestures, and determining a target object and a control strategy;
acquiring current environmental parameters;
and judging the reasonableness of the control strategy in combination with the acquired environmental parameters and historical control records, and executing the control strategy on the target object when the strategy is judged reasonable.
Further, analyzing the video image acquired in real time and, when the trigger condition is met, detecting and recording the sign language input comprises:
acquiring the video image in real time through a video acquisition device;
when a user appears in the video image, analyzing the video image frame by frame;
judging whether a set feature exists;
counting the number of consecutive frames;
judging whether the number of consecutive frames containing the set feature reaches a set number;
and if yes, caching and recording the video.
Further, parsing the content of the input gestures and determining the target object and the control strategy comprises:
acquiring gesture information, and recording the movement trajectory corresponding to each gesture;
comparing the sign language information with pre-stored sign language information;
judging whether the target object is explicit;
if so, continuing to judge whether a complete control scheme is formed within a set time period;
and if yes, generating the control strategy from the control scheme.
Further, parsing the content of the input gestures and determining the target object and the control strategy comprises:
acquiring gesture information, and recording the movement trajectory corresponding to each gesture;
comparing the sign language information with pre-stored sign language information;
judging whether the target object is explicit;
if so, continuing to judge whether a complete first control scheme is formed within a set time period;
performing error-correction analysis on the first control scheme in combination with the cached video to generate a second control scheme;
and generating the control strategy from the second control scheme.
Further, the set feature is at least one of a posture feature, an expression feature, and a gesture feature of a person.
Further, judging the reasonableness of the control strategy in combination with the acquired environmental parameters and the historical control records, and executing the control strategy on the target object when the strategy is judged reasonable, comprises:
retrieving the historical control records of the target object under the current environmental parameters;
calculating a control mode and a control parameter interval according to the historical control records;
judging whether the control strategy matches the control mode and the control parameter interval;
and if it matches, controlling the target object to execute the control strategy.
Further, when the control strategy does not match the control mode and the control parameter interval, reference feedback is provided to the user, and after the user's confirmation is obtained, the target object is controlled to execute the control strategy.
Further, the information of the reference feedback is displayed to the user in text form on a display screen.
According to another aspect of the present invention, there is provided a computer device comprising a processor and a memory, the processor being coupled to the memory and, in operation, executing instructions to implement the sign language control method described above.
According to another aspect of the present invention, there is provided a readable storage medium having stored thereon a computer program to be executed by a processor to implement the sign language control method described above.
Compared with the prior art, the sign language control method, computer device, and readable storage medium provided by the invention offer three advantages. First, the added trigger-condition check largely avoids false detection of gesture information. Second, parsing the user's gesture input into both a target object and a control strategy makes the user's operating intention explicit. Third, the parsed control strategy is further checked against environmental parameters acquired in real time and historical control records, so that the control strategy entered by the user meets the reasonableness requirement, achieving a high degree of intelligence.
Drawings
FIG. 1 is a flowchart of a sign language control method provided by an embodiment of the present invention;
FIG. 2 is a detailed flowchart of step S100 in FIG. 1;
FIG. 3 is a detailed flowchart of step S200 in FIG. 1;
FIG. 4 is a detailed flowchart of step S200' provided in the second embodiment of the present invention;
FIG. 5 is a detailed flowchart of step S400 in FIG. 1;
FIG. 6 is a schematic block diagram of a computer apparatus provided by an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a readable storage medium according to an embodiment of the present invention.
Detailed Description
Various embodiments of the present invention will be described more fully hereinafter. The invention is capable of various embodiments and of modifications and variations therein. However, it should be understood that: there is no intention to limit various embodiments of the invention to the specific embodiments disclosed herein, but on the contrary, the intention is to cover all modifications, equivalents, and/or alternatives falling within the spirit and scope of various embodiments of the invention.
Hereinafter, the terms "includes" or "may include" as used in various embodiments of the present invention indicate the existence of the disclosed functions, operations, or elements and do not limit the addition of one or more further functions, operations, or elements. Furthermore, as used in various embodiments of the present invention, the terms "comprises," "comprising," "includes," "including," "has," "having," and their derivatives are intended to indicate the presence of the stated features, numbers, steps, operations, elements, components, or combinations thereof, and do not exclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
In various embodiments of the invention, the expression "A or/and B" includes any or all combinations of the words listed together; for example, it may include A, may include B, or may include both A and B.
Expressions such as "first" and "second" used in various embodiments of the present invention may modify various constituent elements but do not limit those elements. For example, they do not limit the order and/or importance of the elements described; they serve only to distinguish one element from another. For example, a first user device and a second user device are both user devices, but indicate different user devices. Similarly, a first element could be termed a second element, and a second element could be termed a first element, without departing from the scope of various embodiments of the present invention.
It should be noted that in the present invention, unless otherwise explicitly stated or defined, the terms "mounted," "connected," and "fixed" are to be construed broadly: for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediary; or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, it should be understood by those skilled in the art that the terms indicating an orientation or a positional relationship herein are based on the orientations and the positional relationships shown in the drawings and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or the element referred to must have a specific orientation, be constructed in a specific orientation and be operated, and thus should not be construed as limiting the present invention.
The terminology used in the various embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
As shown in fig. 1, a flowchart of the sign language control method provided by an embodiment of the present invention comprises:
S100, analyzing the video image acquired in real time and, when the trigger condition is met, detecting and recording the sign language input;
The aim of this step is to make explicit the condition for starting sign language detection, so as to increase recognition accuracy and avoid false detection and false response. In the embodiment of the invention, since the object to be detected and recognized is sign language, the corresponding smart device is equipped with a video acquisition device.
Please refer to fig. 2, which is a detailed flowchart of step S100 in fig. 1; step S100 comprises:
S110, acquiring the video image in real time through a video acquisition device;
It will be appreciated that the video acquisition device should remain operational whenever the smart device is running. In the embodiment of the present invention, the video acquisition device is specifically a camera.
S120, when a user appears in the video image, analyzing the video image frame by frame;
In this step, the presence of a user is the first-stage condition for sign language detection: if no user is present in the video image, no subsequent detection is performed. User detection can be implemented, for example, by infrared body-temperature sensing. In other embodiments of the present invention, the appearance of the user's full face in the video image may also serve as the first-stage condition.
S130, judging whether a set feature exists; if the set feature exists, go to step S140; if not, the flow ends directly.
The second-stage condition for sign language detection is a further analysis implemented by extracting features from the video image with image processing techniques and comparing them with pre-stored features. In the embodiment of the present invention, the set feature may be at least one of a posture feature, an expression feature, and a gesture feature of a person.
S140, counting the number of consecutive frames;
When the set feature is detected in step S130, the number of consecutive frames of the video image containing the set feature is counted.
S150, judging whether the number of consecutive frames containing the set feature reaches a set number; if the set number is reached, go to step S160; if not, the flow ends directly.
Since a user may inadvertently exhibit a set feature, this step uses the accumulated number of consecutive frames to determine whether the exhibited feature reflects the user's real intention; this is the third-stage condition for sign language detection in the embodiment of the present invention.
S160, caching and recording the video.
In the embodiment of the invention, the video is cached and recorded starting from the frame at which the set number is reached. It should be noted that the cached video is automatically deleted after the control process is completed.
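The trigger logic of steps S110-S160 can be summarized in code. The following is a minimal sketch under assumed interfaces: frames is any iterable of video frames, while detect_user, extract_features, and stored_features are hypothetical stand-ins for the infrared/face check, the image-processing feature extractor, and the pre-stored feature set; the threshold value is likewise illustrative, not taken from the patent.

```python
def watch_stream(frames, detect_user, extract_features, stored_features,
                 threshold=30):
    """Yield the frames to cache once the three-stage trigger condition holds."""
    consecutive = 0
    triggered = False
    for frame in frames:                                 # S110: real-time feed
        if triggered:
            yield frame                                  # S160: cache recording
            continue
        if not detect_user(frame):                       # S120: first stage
            consecutive = 0
            continue
        if extract_features(frame) & stored_features:    # S130: second stage
            consecutive += 1                             # S140: count frames
        else:
            consecutive = 0
        if consecutive >= threshold:                     # S150: third stage
            triggered = True
            yield frame                                  # recording starts here
```

The generator yields nothing until all three stage conditions are satisfied, mirroring how the flow in fig. 2 ends directly whenever a condition fails; per the text, the cached frames would be deleted once the control process completes.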
S200, parsing the content of the input gestures, and determining a target object and a control strategy;
Please refer to fig. 3, which is a detailed flowchart of step S200 in fig. 1; step S200 comprises:
S210, acquiring gesture information, and recording the movement trajectory corresponding to each gesture;
In the embodiment of the present invention, this step may be performed simultaneously with step S160; that is, once the third-stage condition for sign language detection is satisfied, the gesture information appearing in subsequent video images and the trajectory corresponding to each gesture are recorded.
S220, comparing the sign language information with pre-stored sign language information;
Through comparison with the pre-stored sign language information, the target object the user intends to control and the control scheme can be determined.
S230, judging whether there is an explicit target object; if yes, go to step S240; if not, the flow ends directly.
S240, judging whether a complete control scheme is formed within a set time period; if yes, go to step S250; if not, return to step S210.
Since sign language input requires both of the user's hands, situations in which the user has to attend to something else mid-input are unavoidable; this step therefore requires that the control scheme be formed within a set time period. If no complete control scheme is formed within that period, the user must perform the sign language input again.
S250, generating the control strategy from the control scheme.
When step S240 determines that a complete control scheme exists, the control strategy is generated from it together with the target object determined in step S230.
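A minimal sketch of steps S210-S250 follows, assuming sign_dictionary maps (gesture, trajectory) pairs to tokens tagged as either an "object" or an "action", and that is_complete decides when the accumulated actions form a complete control scheme; all of these names and the dictionary layout are illustrative assumptions, not part of the patent.

```python
import time

def parse_sign_language(gesture_stream, sign_dictionary, is_complete,
                        timeout_s=10.0):
    """Decode a gesture stream into (target_object, control_scheme)."""
    target_object = None
    scheme = []
    deadline = time.monotonic() + timeout_s        # S240: the set time period
    for gesture, trajectory in gesture_stream:     # S210: gesture + trajectory
        if time.monotonic() > deadline:
            return None, []                        # incomplete: re-input needed
        token = sign_dictionary.get((gesture, trajectory))   # S220: compare
        if token is None:
            continue                               # unrecognized sign, skip
        if token["kind"] == "object":
            target_object = token["value"]         # S230: explicit target object
        elif token["kind"] == "action":
            scheme.append(token["value"])
        if target_object is not None and is_complete(scheme):
            return target_object, scheme           # S250: becomes the strategy
    return None, []
```

Returning (None, []) corresponds to the flow ending or looping back to S210 when no explicit target object or complete scheme emerges in time.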
Referring to fig. 4, a detailed flowchart of step S200' is provided for the second embodiment of the present invention; step S200' comprises:
S210', acquiring gesture information, and recording the movement trajectory corresponding to each gesture;
S220', comparing with the pre-stored sign language information;
S230', judging whether there is an explicit target object; if yes, go to step S240'; if not, the flow ends directly.
Steps S210'-S230' are the same as steps S210-S230 in the first embodiment and are not repeated here.
S240', judging whether a complete first control scheme is formed within a set time period; if yes, go to step S250'; if not, return to step S210'.
Since some gestures and movement trajectories in sign language input resemble one another, the first control scheme in this step is the content decoded from the real-time reading.
S250', performing error-correction analysis on the first control scheme in combination with the cached video to generate a second control scheme;
In this step, the sign language input by the user is decoded a second time using the video cached in step S160, so as to correct errors in the first control scheme obtained from the real-time decode and further improve how accurately the user's needs are understood. The resulting second control scheme is therefore an error-corrected control scheme. In the embodiment of the present invention, the second control scheme may also turn out to be identical to the first control scheme.
S260', generating the control strategy from the second control scheme.
After the second control scheme is generated in step S250', the control strategy is generated from it together with the target object determined in step S230'.
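The second embodiment differs from the first only in the error-correction pass of step S250'. A sketch, where decode_from_video is a hypothetical offline decoder applied to the cached video:

```python
def error_correct(first_scheme, cached_video, decode_from_video):
    """S250': re-decode the cached video to correct the real-time decode."""
    second_scheme = decode_from_video(cached_video)   # slower, full-context pass
    if second_scheme is None:
        return first_scheme        # nothing better found: keep the live decode
    return second_scheme           # may equal first_scheme, as the text notes
```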
S300, acquiring current environmental parameters;
In this step, current environmental parameters are introduced as a reference in order to ensure the reasonableness of the generated control strategy. For example, if the generated control strategy is "raise the temperature of the air conditioner by 5 degrees Celsius" but the acquired current ambient temperature is already 34 degrees Celsius, the strategy is questionable in terms of reasonableness.
S400, judging the reasonableness of the control strategy in combination with the acquired environmental parameters and the historical control records, and executing the control strategy on the target object when the strategy is judged reasonable.
Please refer to fig. 5, which is a detailed flowchart of step S400 in fig. 1; step S400 comprises:
S410, retrieving the historical control records of the target object under the current environmental parameters;
In combination with the environmental parameters acquired in step S300, the control records of the target object determined in step S230 or step S230' are retrieved from the historical data. In the embodiment of the present invention, the environmental parameter may be represented as a parameter interval.
S420, calculating a control mode and a control parameter interval according to the historical control records;
From the historical control records retrieved in step S410, the control mode and control parameter interval corresponding to the current environmental parameter interval can be calculated. The control parameter interval is the interval spanned by the mean control parameter values obtained by repeatedly drawing a set number (for example, 10) of records at random from all matching historical records.
This is illustrated by the following example. Take an air conditioner as the target object, with temperature as the environmental parameter; a current reading of 32 degrees Celsius falls into the parameter interval of 30 to 34 degrees Celsius. Querying the history shows 200 control records for the air conditioner at an ambient temperature of 30 to 34 degrees Celsius: 190 cooling controls, each recording the adjusted temperature, and 10 ventilation controls. The control modes are therefore cooling control and ventilation control, and the control parameter interval is the interval spanned by the mean adjusted temperature of 10 randomly selected historical records, computed over 10 such samplings. No interval is computed for ventilation control, since it has no parameter.
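The interval calculation of steps S410-S420 can be sketched as follows, with the record layout (env_value, mode, param) and the sampling sizes (10 draws of 10 records, as in the example above) treated as assumptions:

```python
import random

def parameter_interval(history, env_low, env_high, mode,
                       draws=10, per_draw=10):
    """Estimate the control parameter interval for one control mode."""
    records = [r for r in history                       # S410: matching records
               if env_low <= r["env_value"] <= env_high and r["mode"] == mode]
    if len(records) < per_draw:
        return None                                     # too little history
    means = []
    for _ in range(draws):                              # S420: repeated sampling
        sample = random.sample(records, per_draw)
        means.append(sum(r["param"] for r in sample) / per_draw)
    return min(means), max(means)                       # e.g. (2.0, 4.0) deg C

# With the example above: parameter_interval(history, 30, 34, "cooling")
# might return a 2-4 degree interval; for "ventilation" there is no
# parameter, so no interval would be computed.
```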
S430, judging whether the control strategy matches the control mode and the control parameter interval; if yes, go to step S440; if not, go to step S450.
Referring to the above example: if the control strategy determined in step S200 is to raise the temperature of the air conditioner, it obviously does not match the cooling control and ventilation control obtained in step S420. If the control strategy determined in step S200 is to lower the air conditioner temperature by 5 degrees Celsius while the control parameter interval calculated in step S420 is 2 to 4 degrees Celsius, then the control mode matches but the control parameter interval does not.
S440, controlling the target object to execute the control strategy.
Reaching this step means that the control strategy generated in step S200 satisfies the reasonableness requirement, so the target object can be controlled to execute the strategy.
S450, providing reference feedback to the user, and controlling the target object to execute the control strategy after the user confirms it.
In this step, after step S430 determines that the control strategy does not match the control mode and control parameter interval, the user is asked to confirm the strategy. Since the scheme is aimed at people with speech impairments, the reference feedback is displayed to the user as text on the display screen of the smart device, and the user can confirm the displayed feedback through a specific action (such as nodding) or body gesture. Once the smart device receives this confirmation, it concludes that the control strategy, although inconsistent with the historical control records, reflects the user's actual current need and thus satisfies the reasonableness requirement, and the target object can be controlled to execute the strategy.
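Steps S430-S450 then reduce to a match-or-confirm decision. In the sketch below, strategy is a dict with "mode" and "param" keys, and execute, show_feedback, and await_confirmation are hypothetical hooks into the device control bus, the display screen, and the camera-based confirmation (e.g. nod detection); none of these interfaces are specified by the patent.

```python
def apply_strategy(strategy, allowed_modes, param_interval,
                   execute, show_feedback, await_confirmation):
    """Execute the strategy if it matches history; otherwise ask the user."""
    mode_ok = strategy["mode"] in allowed_modes
    if param_interval is None or strategy.get("param") is None:
        param_ok = True                    # parameterless modes, per S420
    else:
        low, high = param_interval
        param_ok = low <= strategy["param"] <= high
    if mode_ok and param_ok:               # S430 matched
        execute(strategy)                  # S440: reasonable, run directly
        return True
    # S450: reference feedback as on-screen text, confirmed by a gesture
    show_feedback(f"Confirm: {strategy['mode']} {strategy.get('param', '')}")
    if await_confirmation():
        execute(strategy)                  # user overrides the history check
        return True
    return False
```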
Referring to FIG. 6, a schematic block diagram of a computer device is provided according to an embodiment of the present invention. The computer device in this embodiment comprises a processor 610 and a memory 620; the processor 610 is coupled to the memory 620 and, in operation, executes instructions to implement the sign language control method of any of the above embodiments.
The processor 610 may also be referred to as a central processing unit (CPU). The processor 610 may be an integrated circuit chip having signal processing capabilities. The processor 610 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, but is not limited thereto.
In the embodiment of the present invention, the computer device may specifically be a smart panel. In addition, the computer device is provided with an image capture device for acquiring video images and a display screen for displaying them.
Referring to fig. 7, a schematic block diagram of a readable storage medium according to an embodiment of the invention is shown. The readable storage medium in this embodiment stores a computer program 710, and the computer program 710 can be executed by a processor to implement the sign language control method in any of the above embodiments.
Alternatively, the readable storage medium may be any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or it may be a terminal device such as a computer, a server, a mobile phone, or a tablet.
Compared with the prior art, the sign language control method, computer device, and readable storage medium provided by the invention offer three advantages. First, the added trigger-condition check largely avoids false detection of gesture information. Second, parsing the user's gesture input into both a target object and a control strategy makes the user's operating intention explicit. Third, the parsed control strategy is further checked against environmental parameters acquired in real time and historical control records, so that the control strategy entered by the user meets the reasonableness requirement, achieving a high degree of intelligence.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims. Any modification, equivalent replacement, or improvement made within the technical concept of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A sign language control method, characterized by comprising the following steps:
analyzing a video image acquired in real time and, when a trigger condition is met, detecting and recording the sign language input;
parsing the content of the input gestures, and determining a target object and a control strategy;
acquiring current environmental parameters;
and judging the reasonableness of the control strategy in combination with the acquired environmental parameters and historical control records, and executing the control strategy on the target object when the strategy is judged reasonable.
2. The sign language control method according to claim 1, wherein analyzing the video image acquired in real time and, when the trigger condition is met, detecting and recording the sign language input comprises:
acquiring the video image in real time through a video acquisition device;
when a user appears in the video image, analyzing the video image frame by frame;
judging whether a set feature exists;
counting the number of consecutive frames;
judging whether the number of consecutive frames containing the set feature reaches a set number;
and if yes, caching and recording the video.
3. The sign language control method according to claim 2, wherein parsing the content of the input gestures and determining the target object and the control strategy comprises:
acquiring gesture information, and recording the movement trajectory corresponding to each gesture;
comparing the sign language information with pre-stored sign language information;
judging whether the target object is explicit;
if so, continuing to judge whether a complete control scheme is formed within a set time period;
and if yes, generating the control strategy from the control scheme.
4. The sign language control method according to claim 2, wherein parsing the content of the input gestures and determining the target object and the control strategy comprises:
acquiring gesture information, and recording the movement trajectory corresponding to each gesture;
comparing the sign language information with pre-stored sign language information;
judging whether the target object is explicit;
if so, continuing to judge whether a complete first control scheme is formed within a set time period;
performing error-correction analysis on the first control scheme in combination with the cached video to generate a second control scheme;
and generating the control strategy from the second control scheme.
5. The sign language control method according to any one of claims 2 to 4, wherein the set feature is at least one of a posture feature, an expression feature, and a gesture feature of a person.
6. The sign language control method according to claim 1, wherein judging the reasonableness of the control strategy in combination with the acquired environmental parameters and the historical control records, and executing the control strategy on the target object when the strategy is judged reasonable, comprises:
retrieving the historical control records of the target object under the current environmental parameters;
calculating a control mode and a control parameter interval according to the historical control records;
judging whether the control strategy matches the control mode and the control parameter interval;
and if it matches, controlling the target object to execute the control strategy.
7. The sign language control method according to claim 6, wherein when the control strategy does not match the control mode and the control parameter interval, reference feedback is provided to the user, and after the user's confirmation is obtained, the target object is controlled to execute the control strategy.
8. The sign language control method according to claim 7, wherein the information of the reference feedback is displayed to the user in text form on a display screen.
9. A computer device comprising a processor and a memory, the processor being coupled to the memory, wherein the processor, in operation, executes instructions to implement the sign language control method according to any one of claims 1-8.
10. A readable storage medium having stored thereon a computer program for execution by a processor to implement a sign language control method according to any one of claims 1 to 8.
CN202010885965.9A 2020-08-28 2020-08-28 Sign language control method, computer equipment and readable storage medium Pending CN112180849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010885965.9A CN112180849A (en) 2020-08-28 2020-08-28 Sign language control method, computer equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN112180849A 2021-01-05

Family

ID=73924494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010885965.9A Pending CN112180849A (en) 2020-08-28 2020-08-28 Sign language control method, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112180849A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576848A (en) * 2012-08-09 2014-02-12 腾讯科技(深圳)有限公司 Gesture operation method and gesture operation device
US20150153836A1 (en) * 2012-08-09 2015-06-04 Tencent Technology (Shenzhen) Company Limited Method for operating terminal device with gesture and device
US20170090582A1 (en) * 2015-09-24 2017-03-30 Intel Corporation Facilitating dynamic and intelligent geographical interpretation of human expressions and gestures
CN105867158A (en) * 2016-05-30 2016-08-17 北京百度网讯科技有限公司 Smart-home control method and device based on artificial intelligence and system
CN109991859A (en) * 2017-12-29 2019-07-09 青岛有屋科技有限公司 A kind of gesture instruction control method and intelligent home control system
CN109032356A (en) * 2018-07-27 2018-12-18 深圳绿米联创科技有限公司 Sign language control method, apparatus and system
CN109269041A (en) * 2018-09-05 2019-01-25 广东美的制冷设备有限公司 Air-conditioning, air conditioning control method and computer readable storage medium
CN109974237A (en) * 2019-04-01 2019-07-05 珠海格力电器股份有限公司 Air conditioner, the method for adjustment of air conditioner operation reserve and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210105