
Equipment control method and device, equipment and storage medium

Info

Publication number
CN112256135A
CN112256135A (application number CN202011195754.9A)
Authority
CN
China
Prior art keywords
gesture
recognition model
information
target
gesture recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011195754.9A
Other languages
Chinese (zh)
Other versions
CN112256135B (en)
Inventor
邵帅 (Shao Shuai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011195754.9A
Publication of CN112256135A
Application granted
Publication of CN112256135B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 17/00 Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C 17/02 Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a device control method and apparatus, a device, and a storage medium, where the method includes: receiving a gesture detection instruction sent by a second device; obtaining first gesture information by detecting a first gesture action; and sending the first gesture information to the second device, where the first gesture information is used to obtain a target gesture recognition model, and the target gesture recognition model can recognize the first gesture action as a first control instruction.

Description

Equipment control method and device, equipment and storage medium
Technical Field
The present application relates to human-computer interaction technologies, and in particular, to a device control method and apparatus, a device, and a storage medium.
Background
Human-machine interactions include contact interactions and non-contact interactions. Contact interactions include keyboard input, mouse input, touch-screen input, force input, switch touches, and other interactions based on contact with an electronic device. Non-contact interactions control an electronic device without touching it, by means of visible-light camera technology, infrared sensors, laser-based sensors, and the like.
Gesture recognition is a non-contact interaction: the user only needs to perform a gesture action in a limited free space, and the electronic device can capture the gesture action and convert it into an instruction the device can execute. Gesture information differs between individuals, for example through physiological differences in palm and finger size and habitual differences in gesture position and speed. When a small electronic device such as an earphone performs gesture recognition through a gesture recognition model, a gesture recognition model with fixed parameters therefore cannot accommodate the individual differences of different users.
Disclosure of Invention
The embodiments of the present application provide a device control method and apparatus, a device, and a storage medium, which can meet the gesture recognition requirements of users with individual differences.
The technical scheme of the embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a device control method applied to a first device, the method including:
receiving a gesture detection instruction sent by a second device;
obtaining first gesture information by detecting a first gesture action;
sending the first gesture information to the second device;
where the first gesture information is used to obtain a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
In a second aspect, an embodiment of the present application provides a device control method applied to a second device, the method including:
generating a gesture detection instruction, and sending the gesture detection instruction to a first device;
receiving first gesture information sent by the first device;
obtaining a target gesture recognition model through the first gesture information, where the target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
In a third aspect, an embodiment of the present application provides a device control apparatus applied to a first device, the apparatus including:
a first receiving module, configured to receive a gesture detection instruction sent by a second device;
a detection module, configured to obtain first gesture information by detecting a first gesture action;
a first sending module, configured to send the first gesture information to the second device;
where the first gesture information is used to obtain a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
In a fourth aspect, an embodiment of the present application provides a device control apparatus applied to a second device, the apparatus including:
a second sending module, configured to generate a gesture detection instruction and send the gesture detection instruction to a first device;
a second receiving module, configured to receive first gesture information sent by the first device;
an obtaining module, configured to obtain a target gesture recognition model through the first gesture information, where the target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
In a fifth aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the device control method when executing the computer program.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the device control method.
The device control method provided by the embodiment of the application includes the following steps: the second device generates a gesture detection instruction and sends it to the first device; the first device starts detecting the first gesture action based on the received gesture detection instruction to obtain first gesture information and sends the obtained first gesture information to the second device; and the second device obtains a target gesture recognition model through the first gesture information, so that the target gesture recognition model can recognize the first gesture action as a first control instruction. In this way, the gesture operation performed by the user is detected by the first device to obtain the first gesture information, and the second device obtains, from the received first gesture information, a target gesture recognition model capable of recognizing the current gesture action, so that the gesture recognition model can accommodate the individual differences of different users and the accuracy of gesture recognition is improved.
Drawings
FIG. 1 is a block diagram of an alternative architecture of an information processing system provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative circuit configuration of an information processing system according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an alternative device control method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of an alternative device control method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of an alternative device control method according to an embodiment of the present application;
FIG. 6 is an alternative interface diagram of a gesture guidance interface provided by an embodiment of the present application;
FIG. 7 is an alternative flow chart of a device control method provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of an alternative device control method according to an embodiment of the present application;
FIG. 9 is an alternative scenario diagram of a device control method provided by an embodiment of the present application;
FIG. 10 is an alternative structural schematic diagram of a first device provided by an embodiment of the present application;
FIG. 11 is an alternative page view of a gesture guidance interface provided by an embodiment of the present application;
FIG. 12 is a schematic flow chart of an alternative device control method according to an embodiment of the present application;
FIG. 13 is a schematic diagram of an alternative structure of a device control apparatus provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of an alternative structure of a device control apparatus provided by an embodiment of the present application;
FIG. 15 is an alternative structural schematic diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The embodiments of the present application provide a device control method and apparatus, a device, and a storage medium. In practical applications, the device control method may be implemented by a device control apparatus, and each functional entity in the device control apparatus may be cooperatively implemented by the hardware resources of a computer device (e.g., an earphone, a wearable device, or a terminal device), such as the computing resources of a processor and communication resources (e.g., supporting communication over optical cable, cellular, and other links).
Of course, the embodiments of the present application are not limited to being provided as a method and hardware; various implementations are possible, for example a storage medium storing instructions for executing the device control method provided by the embodiments of the present application.
The device control method provided in the embodiment of the present application may be applied to the information processing system shown in fig. 1 or fig. 2. As shown in fig. 1 or fig. 2, the information processing system includes a first device 10 and a second device 20, where the first device 10 may be a small device with low computing power, such as an earphone, a car key, or a wearable device, and the second device 20 may be an electronic device with a display screen and strong computing power, such as a mobile terminal, an AR device, or a notebook computer. In the embodiment of the application, the first device may lack a display screen, while the second device has one.
A connection can be established between the first device and the second device via Bluetooth, WIFI, mobile data, or similar means. The second device generates a gesture detection instruction and sends it to the first device; triggered by the received gesture detection instruction, the first device detects the user's first gesture action, obtains first gesture information from the detected first gesture action, and sends the first gesture information to the second device; the second device obtains, from the received first gesture information, a target gesture recognition model capable of recognizing the first gesture action as a first control instruction, so that the first device can execute the operation corresponding to the first control instruction when it subsequently detects the first gesture action.
In an example, the first device sends the first gesture information to the second device, and the second device obtains the target gesture recognition model by using the first gesture information and sends the target gesture recognition model to the first device.
In one example, as shown in fig. 2, the information processing system further includes a third device 30, where the third device may be a server or a server cluster composed of a plurality of servers; the second device communicates with the third device over a network 40.
In the information processing system shown in fig. 2, a first device sends first gesture information to a second device, the second device sends the first gesture information to a third device, the third device obtains a target gesture recognition model by using the first gesture information and sends the target gesture recognition model to the second device, and the second device forwards the received target gesture recognition model to the first device.
In combination with the information processing system, this embodiment provides a device control method that can meet the gesture recognition requirements of different users with individual differences.
Embodiments of a device control method, an apparatus, a device, and a storage medium according to embodiments of the present application are described below with reference to schematic diagrams of information processing systems shown in fig. 1 or fig. 2.
The present embodiment provides a device control method applied to a first device. Fig. 3 is a schematic flow chart of an implementation of the device control method according to an embodiment of the present application; as shown in fig. 3, the method may include the following steps:
s301, receiving a gesture detection instruction sent by the second device.
The gesture detection instruction is sent by the second device while gesture guidance information is being presented. In one example, the gesture guidance information is a gesture guidance interface whose display content includes: an image of the first gesture action and a control identifier of the first control instruction.
A connection is established between the first device and the second device, and data communication is enabled based on the established connection.
When the second device displays the gesture guidance information through the display screen and generates a gesture detection instruction, the second device sends the gesture detection instruction to the first device through the connection with the first device, and the first device receives the gesture detection instruction. The gesture guidance information may include one of: a gesture guidance interface, gesture guidance voice, and the like.
When the first device receives the gesture detection instruction, it enters a gesture detection state.
In the embodiment of the application, a gesture detection sensor is disposed in the first device; when the gesture detection instruction is received, the first device turns on the gesture detection sensor based on the instruction, so that the user's gesture operation is detected through the gesture detection sensor. The gesture detection sensor may include modules capable of detecting gestures, such as a radar, a laser detection module, or an ultrasonic detection module.
In this embodiment, the first device may be an earphone unit, and the headset includes two earphone units. In one example, one of the two earphone units is a first earphone unit and the other is a second earphone unit; a gesture detection sensor may be disposed only in the first earphone unit, or gesture detection sensors may be disposed in both the first earphone unit and the second earphone unit. The first earphone unit receives the gesture detection instruction and turns on a gesture detection sensor as instructed. If the gesture detection sensor is disposed only in the first earphone unit, the sensor in the first earphone unit is turned on. If gesture detection sensors are disposed in both earphone units, the sensor in the first earphone unit only, the sensor in the second earphone unit only, or both sensors may be turned on.
In practical applications, when both the first earphone unit and the second earphone unit are provided with gesture detection sensors, the gesture guidance information may include multiple pieces of guidance information, including first guidance information for the first earphone unit and second guidance information for the second earphone unit.
When the gesture guidance information currently output by the second device is the first guidance information, the gesture detection sensor in the first earphone unit is turned on; when it is the second guidance information, the gesture detection sensor in the second earphone unit is turned on.
In practical applications, when the second device outputs multiple pieces of gesture guidance information, the gesture detection instruction is sent to the first device only once, and it carries the identifier of the earphone unit whose gesture detection sensor is to be turned on, indicating that one or both of the first earphone unit and the second earphone unit should be turned on.
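As a non-limiting illustration of this addressing scheme, the following Python sketch shows one possible encoding of a gesture detection instruction carrying the identifiers of the earphone units whose sensors should be turned on; the class and field names here are hypothetical, since the embodiment only requires that the instruction indicate the target unit(s).
```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical unit identifiers; the embodiment only requires that the
# instruction carry the identifier of the earphone unit(s) to turn on.
FIRST_UNIT = "first_earphone_unit"
SECOND_UNIT = "second_earphone_unit"

@dataclass
class GestureDetectionInstruction:
    # Sent once by the second device, even when several pieces of
    # gesture guidance information are output.
    target_units: List[str] = field(default_factory=lambda: [FIRST_UNIT])

def should_turn_on_sensor(instruction: GestureDetectionInstruction,
                          local_unit: str) -> bool:
    # Runs on an earphone unit: turn on the local gesture detection
    # sensor only if this unit is addressed by the instruction.
    return local_unit in instruction.target_units

# Example: a single instruction addresses both earphone units at once.
instruction = GestureDetectionInstruction(target_units=[FIRST_UNIT, SECOND_UNIT])
assert should_turn_on_sensor(instruction, FIRST_UNIT)
assert should_turn_on_sensor(instruction, SECOND_UNIT)
```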
S302, obtaining first gesture information by detecting the first gesture action.
The first device detects the gesture operation performed by the user through the gesture detection sensor, detects the action of the gesture operation, and determines whether the detected action is the first gesture action. If it is the first gesture action, the detected data is used as the first gesture information; if it is not, the detected gesture information is treated as invalid gesture information.
In the embodiment of the present application, the implementation of S302 includes: detecting the first gesture action in one of the following detection forms: millimeter wave, laser, or ultrasonic; and converting the first gesture action into the first gesture information according to the detection form.
Different gesture detection sensors in the first device use different detection forms. When the gesture detection sensor is a radar, the detection form is millimeter wave; when it is a laser module, the detection form is laser; when it is an ultrasonic module, the detection form is ultrasonic.
After the first device detects the first gesture action through the gesture detection sensor, the data of the gesture detection sensor is converted into gesture information that the first device can recognize and process, i.e., the first gesture information.
In the embodiment of the present application, the specific algorithm for determining whether the detected gesture operation is the first gesture action is not limited.
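A minimal sketch of the S302 flow under these assumptions: raw samples are read in whatever detection form the sensor uses, checked against the expected first gesture action, and either kept as first gesture information or discarded as invalid. Both callables are placeholders; the embodiment deliberately leaves the matching algorithm open.
```python
from typing import Callable, Optional, Sequence

def detect_first_gesture(read_samples: Callable[[], Sequence[float]],
                         matches_first_gesture: Callable[[Sequence[float]], bool]
                         ) -> Optional[Sequence[float]]:
    # read_samples returns raw sensor data in whatever detection form the
    # sensor uses (millimeter-wave, laser, or ultrasonic samples).
    samples = read_samples()
    if matches_first_gesture(samples):
        # The detected action is the first gesture action, so the detected
        # data becomes the first gesture information.
        return samples
    # Otherwise the detection is treated as invalid gesture information.
    return None
```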
S303, sending the first gesture information to the second device.
The first gesture information is used to obtain a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
After the first device obtains the first gesture information, it sends the first gesture information to the second device, so that the second device obtains the target gesture recognition model through the first gesture information.
The first device sends the first gesture information to the second device through the connection between them, and the second device either updates the parameters of a reference gesture recognition model through the first gesture information or directly forwards the first gesture information to a third device. After the second device obtains the target gesture recognition model by updating the parameters of the reference gesture recognition model, or receives the target gesture recognition model sent by the third device, it updates the target gesture recognition model to the first device.
In the embodiment of the application, the reference gesture recognition model may be a model that cannot recognize the first gesture action, or a model that can recognize the first gesture action as the first control instruction but whose probability of doing so is lower than a set probability threshold.
In the embodiment of the present application, as shown in fig. 4, after S303, the following steps are further performed:
s304, receiving the target gesture recognition model sent by the second device.
In this embodiment of the application, the first device may receive the complete target gesture recognition model sent by the second device, or, if the first device already has the reference gesture recognition model, it may receive model update parameters sent by the second device, where the model update parameters are the parameters of the target gesture recognition model that are updated relative to the reference gesture recognition model.
In the embodiment of the application, the target gesture recognition model can recognize at least one gesture action. In one example, each gesture recognition model recognizes one gesture action, and different gesture models recognize different gesture actions. For example: gesture recognition model A recognizes gesture action 1 as control command 1, gesture recognition model B recognizes gesture action 2 as control command 2, and gesture recognition model C recognizes gesture action 3 as control command 3. In another example, one gesture recognition model can recognize multiple gesture actions. For example: the gesture recognition model recognizes gesture action 1 as control command 1, gesture action 2 as control command 2, and gesture action 3 as control command 3.
In the case where one gesture recognition model recognizes one gesture action, after the first device receives a target gesture recognition model, it can detect another gesture action to obtain a target gesture recognition model capable of recognizing that gesture action. For example: having obtained target gesture recognition model A based on the gesture information of gesture action 1, the first device detects gesture action 2 to obtain target gesture recognition model B.
In the case where one gesture recognition model recognizes multiple gesture actions, after the first device detects the gesture information of one gesture action, it continues to detect the other gesture actions, so that a target gesture recognition model capable of recognizing the multiple gesture actions is obtained. For example: the first device detects gesture action 1 and gesture action 2, and the target gesture recognition model is obtained based on the gesture information of gesture action 1 and the gesture information of gesture action 2.
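The two configurations above can be pictured with the following hedged sketch, which contrasts one model per gesture action with a single model that recognizes several gesture actions; all names and command strings are hypothetical illustrations, not part of the disclosure.
```python
# Configuration 1: one gesture recognition model per gesture action.
per_gesture_models = {
    "gesture_action_1": "model_A",  # model A outputs control command 1
    "gesture_action_2": "model_B",  # model B outputs control command 2
    "gesture_action_3": "model_C",  # model C outputs control command 3
}

# Configuration 2: a single model recognizes several gesture actions.
class MultiGestureModel:
    COMMAND_TABLE = {
        "gesture_action_1": "control_command_1",
        "gesture_action_2": "control_command_2",
        "gesture_action_3": "control_command_3",
    }

    def recognize(self, gesture_action: str) -> str:
        return self.COMMAND_TABLE[gesture_action]

assert MultiGestureModel().recognize("gesture_action_2") == "control_command_2"
```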
The device control method provided by the embodiment of the application includes: receiving a gesture detection instruction sent by the second device; obtaining first gesture information by detecting the first gesture action; and sending the first gesture information to the second device, where the first gesture information is used to obtain a target gesture recognition model that can recognize the first gesture action as a first control instruction. In this way, the first device, under the control of the second device, starts detecting the gesture operation performed by the user to obtain the first gesture information and sends it to the second device, so that the gesture information of the user's own gesture operation serves as learning data for obtaining the target gesture recognition model. The gesture recognition model can therefore accommodate the individual differences of different users, and the accuracy of gesture recognition is improved.
In some embodiments, the first gesture information is used to obtain the target gesture recognition model if the first gesture information satisfies a correction condition.
That is, if the first gesture information satisfies the correction condition, the first device sends the first gesture information to the second device so that the target gesture recognition model is obtained through the first gesture information.
The first device sends the first gesture information to the second device when it determines that the first gesture information satisfies the correction condition, and continues to detect the first gesture action when it determines that the correction condition is not satisfied.
In the embodiment of the application, when the first gesture information satisfies the correction condition, the first device sends the first gesture information to the second device together with a first gesture action detection completion instruction, indicating to the second device that the first device currently satisfies the correction condition; the second device may then output a gesture guidance interface whose display content is another gesture action.
In an embodiment of the present application, the correction condition includes at least one of the following conditions:
Condition one: the number of times the first gesture action corresponding to the first gesture information has been executed is greater than a count threshold;
Condition two: the target gesture recognition model is not stored in the first device.
The correction condition may include condition one, condition two, or a combination of both.
The count threshold in condition one may be set according to actual requirements, for example, 3 or 5.
For condition two, the first device does not contain a target gesture recognition model capable of accurately recognizing the current user's first gesture action as the first control instruction; that is, the reference gesture recognition model in the first device either cannot recognize the first gesture action at all or cannot accurately recognize the current user's first gesture action as the first control instruction. In the embodiment of the application, the cases in which the reference gesture recognition model cannot accurately recognize the current user's first gesture action as the first control instruction include the following:
Case one: the reference gesture recognition model can recognize other users' first gesture actions as the first control instruction, but cannot recognize the current user's first gesture action as the first control instruction;
Case two: the reference gesture recognition model can recognize the current user's first gesture action as the first control instruction, but the probability of doing so is smaller than the probability threshold.
In the embodiment of the application, the reference gesture recognition model may even recognize the current user's first gesture action as a second control instruction, where the second control instruction is different from the first control instruction; for example, the first control instruction is turning up the volume and the second control instruction is powering on.
In the embodiment of the application, when the correction condition includes condition one, the first device sends the second device the first gesture information obtained from the user executing the same gesture multiple times, and the second device learns the target gesture recognition model based on the data of the gesture actions input multiple times. Increasing the amount of learning data reduces gesture recognition errors caused by nuances in the user's specific gestures, improves the optimization of the gesture recognition model, and improves the accuracy of gesture recognition.
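The correction condition can be checked on the first device with logic along these lines; a sketch under the assumption that the device tracks a per-gesture execution count, with all names hypothetical.
```python
COUNT_THRESHOLD = 3  # settable per actual requirements, e.g., 3 or 5

def satisfies_correction_condition(execution_count: int,
                                   target_model_stored: bool) -> bool:
    # Condition one: the first gesture action has been executed more
    # times than the count threshold.
    condition_one = execution_count > COUNT_THRESHOLD
    # Condition two: no target gesture recognition model is stored yet.
    condition_two = not target_model_stored
    # The correction condition may be condition one, condition two, or
    # their combination; this variant requires both.
    return condition_one and condition_two
```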
In some embodiments, after S304, the method further includes: obtaining second gesture information by detecting the first gesture action; and using the second gesture information as the input of the target gesture recognition model to obtain the first control instruction output by the target gesture recognition model, so as to execute the operation corresponding to the first control instruction.
When the target gesture recognition model has been obtained and the first device detects the first gesture action again, it obtains second gesture information and inputs the second gesture information into the target gesture recognition model, so that the first gesture action is recognized by the target gesture recognition model; at this point, the output of the target gesture recognition model is the first control instruction.
After the first device obtains the first control instruction output by the target gesture recognition model, it executes the operation corresponding to the first control instruction, so that the first device is accurately controlled by the contactless gesture operation.
In an example in which the first device is an earphone: when the first control instruction is a power-off instruction, the first device is controlled to power off; when the first control instruction is a volume adjustment instruction, the volume adjustment instruction is sent to the target device whose volume is to be adjusted.
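Once the target gesture recognition model is on the first device, the recognize-and-execute path of this section might look like the following sketch; the predict call, the command names, and the dispatch are assumptions for illustration, not the disclosed interface.
```python
def on_second_gesture_info(target_model, second_gesture_info, earphone, target_device):
    # The second gesture information is the model input; the model
    # output is the first control instruction.
    instruction = target_model.predict(second_gesture_info)
    if instruction == "power_off":
        earphone.power_off()  # the earphone executes the operation itself
    elif instruction == "adjust_volume":
        # Volume adjustment is forwarded to the device whose volume is
        # to be adjusted.
        target_device.send("adjust_volume")
```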
The present embodiment provides a device control method applied to a second device. Fig. 5 is a schematic flow chart of an implementation of the device control method according to an embodiment of the present application; as shown in fig. 5, the method may include the following steps:
s501, generating a gesture detection instruction, and sending the gesture detection instruction to first equipment.
The second equipment generates a gesture detection instruction, sends the generated gesture detection instruction to the first equipment, and instructs the first equipment to start a gesture detection sensor so as to detect a first gesture action of the user. Here, the gesture detection instruction may carry an identifier indicating the target device unit, instructing the first device to turn on the gesture detection sensor in the target device unit. In an example, the gesture detection instruction carries an identifier indicating the first device unit, and indicates to turn on a gesture detection sensor in the first device unit. In an example, the gesture detection instruction carries an identifier indicating a first device unit and an identifier indicating a second device unit, and indicates to turn on a gesture detection sensor in the first device unit and a gesture detection sensor in the second device unit.
The first equipment obtains first gesture information by detecting first gesture actions in the state that the gesture detection sensor is started, and sends the first gesture information to the second equipment.
S502, receiving the first gesture information sent by the first device.
The second device receives the first gesture information sent by the first device, and updates the parameters of a reference gesture recognition model through the first gesture information to obtain a target gesture recognition model that can recognize the first gesture action as the first control instruction.
S503, obtaining a target gesture recognition model through the first gesture information.
The target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
In this embodiment of the application, the manner in which the second device obtains the target gesture recognition model through the first gesture information includes one of the following obtaining modes:
Obtaining mode one: the second device updates the parameters of the reference gesture recognition model through the first gesture information to obtain the target gesture recognition model.
Obtaining mode two: the second device sends the received first gesture information to the third device, the third device obtains the target gesture recognition model through the first gesture information, and the second device receives the target gesture recognition model sent by the third device.
Taking obtaining mode one as an example, the implementation of S503 includes the following step:
updating the parameters of the reference gesture recognition model through the first gesture information and the first control instruction to obtain the target gesture recognition model.
Taking obtaining mode two as an example, the implementation of S503 includes the following steps:
sending the first gesture information and the first control instruction to the third device; and receiving the target gesture recognition model sent by the third device.
Here, the second device either updates the parameters of the reference gesture recognition model itself through the first gesture information or directly forwards the first gesture information to the third device. After the second device obtains the target gesture recognition model through learning, or receives the target gesture recognition model sent by the third device, it updates the target gesture recognition model to the first device.
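A sketch of the two obtaining modes of S503, under the assumption that the reference model exposes a fine-tuning step; the finetune call and the third-device send/receive methods are placeholders standing in for the actual training procedure and transport, which the embodiment does not specify.
```python
def obtain_target_model(reference_model, first_gesture_info,
                        first_control_instruction, third_device=None):
    if third_device is None:
        # Obtaining mode one: update the reference model's parameters
        # locally on the second device.
        return reference_model.finetune(first_gesture_info,
                                        first_control_instruction)
    # Obtaining mode two: forward the data to the third device (a server
    # or server cluster) and receive the target model it produces.
    third_device.send(first_gesture_info, first_control_instruction)
    return third_device.receive_model()
```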
In the embodiment of the application, the second device sends the generated gesture detection instruction to the first device to instruct the first device to start detecting the current user's gesture action; the second device then receives the first gesture information sent by the first device and obtains, through the first gesture information, a target gesture recognition model that can recognize the first gesture action as the first control instruction. The first gesture information obtained by the first device's detection of the user's gesture operation thus serves as learning data for obtaining the target gesture recognition model, so that the target gesture recognition model can accommodate the individual differences of different users and the accuracy of gesture recognition is improved.
In some embodiments, the first gesture information satisfies a correction condition.
In one example, the first device determines whether the first gesture information satisfies the correction condition before sending the first gesture information, and sends the first gesture information to the second device only if the correction condition is satisfied.
In this embodiment of the application, when the first device determines that the first gesture information satisfies the correction condition, it may send a first gesture action detection completion instruction to the second device, and the second device determines, based on this instruction, that the current first gesture information satisfies the correction condition.
In another example, after receiving the first gesture information, the second device determines whether the first gesture information satisfies the correction condition; correspondingly, the target gesture recognition model is obtained through the first gesture information only if the correction condition is satisfied.
In practical applications, both the first device and the second device may check whether the first gesture information satisfies the correction condition, or only one of them may do so.
In an embodiment of the present application, the correction condition includes at least one of the following conditions:
Condition one: the number of times the first gesture action corresponding to the first gesture information has been executed is greater than a count threshold;
Condition two: the target gesture recognition model is absent from the first device.
The correction condition may include condition one, condition two, or a combination of both.
In the embodiment of the application, when the correction condition includes condition one, the gesture guidance interface guides the user to input the same gesture action multiple times, and the target gesture recognition model is learned from the data of these multiple inputs. Increasing the amount of learning data reduces gesture recognition errors caused by nuances in the user's specific gestures, improves the optimization of the gesture recognition model, and improves the accuracy of gesture recognition.
In this embodiment of the present application, after S503, the method further includes: sending the target gesture recognition model to the first device.
The second device may send the complete target gesture recognition model to the first device, or, if the first device already has the reference gesture recognition model, it may send model update parameters to the first device, where the model update parameters are the parameters of the target gesture recognition model that are updated relative to the reference gesture recognition model.
In some embodiments, before S501, further comprising: and displaying the gesture guide interface.
The display content of the gesture guidance interface includes: an image of the first gesture action and a control identifier of the first control instruction.
The second device has a display screen through which it can output the gesture guidance interface. The display screen in the second device may be, for example, a light-emitting diode (LED) display or a liquid crystal display (LCD).
An application or browser capable of presenting the gesture guidance interface may be installed in the second device, and the gesture guidance interface is displayed by the installed application or browser.
The gesture guidance interface may be a dynamic page or a static page. When the gesture guidance interface is a dynamic page, the image of the first gesture action is a dynamic image; when it is a static page, the image of the first gesture action is a static image.
In an example, as shown in fig. 6, the display content of the gesture guidance interface 601 includes: a hand 602 and a hand movement trajectory 603, where the movement trajectory 603 indicates a gesture action. The gesture guidance interface 601 also includes a first device identifier 604 to indicate the spatial relationship between the gesture action of the hand 602 and the first device. The control identifier of the first control instruction is not shown in the gesture guidance interface 601; in practical applications, it may be presented as text, an icon, or the like.
Where the first device includes multiple device units, the gesture guidance interface displaying the first gesture action may include a page for each device unit. In an example, the first device is a headset comprising two earphone units, a first earphone unit and a second earphone unit; the gesture guidance page then includes a first page for the first earphone unit and a second page for the second earphone unit. The second device prompts the user, through the prompt content in the gesture guidance interface, to perform the gesture operation on the corresponding earphone unit.
In this embodiment of the application, the display content of the gesture guidance interface further includes an input count and a valid input count, where the input count is the number of times the user needs to perform the first gesture action, and the valid input count represents the number of valid gesture actions the user has already performed. When the first device detects a valid first gesture action, it sends a notification instruction to the second device instructing it to update the valid input count in the gesture guidance interface.
When the valid input count reaches the input count, the second device determines that guidance for the first gesture action is finished and may switch the displayed page from the gesture guidance interface corresponding to the first gesture action to another page. When the second device needs to continue outputting a gesture guidance interface, it outputs an interface whose display content is an image of a second gesture action, where the second gesture action is different from the first gesture action.
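The input-count bookkeeping above reduces to a small state machine on the second device; a sketch with hypothetical names, where each notification instruction from the first device advances the valid input count and exhaustion of the count lets guidance move on to the next gesture.
```python
class GestureGuidancePage:
    def __init__(self, required_inputs: int):
        self.required_inputs = required_inputs  # times the user must perform the gesture
        self.valid_inputs = 0                   # valid gesture actions performed so far

    def on_valid_input_notification(self) -> bool:
        # Called when the first device reports a valid first gesture action.
        # Returns True when guidance for this gesture action is finished,
        # i.e., the displayed page may switch to the next gesture.
        self.valid_inputs += 1
        return self.valid_inputs >= self.required_inputs

page = GestureGuidancePage(required_inputs=2)  # e.g., two inputs per page in fig. 11
page.on_valid_input_notification()             # first valid input: not finished yet
assert page.on_valid_input_notification()      # second valid input: switch pages
```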
In some embodiments, the second device further performs the following steps: displaying a first control instruction receiving interface; and determining the first control instruction based on a selection operation or an input operation on the first control instruction receiving interface.
In this embodiment of the application, the first control instruction may be preset by the second device, or may be set based on a user's input operation or selection operation. In the latter case, the second device may output a first control instruction receiving interface through the display screen, and the interface may include at least one candidate control instruction. Here, a selection operation received by the second device indicates that the at least one candidate control instruction includes the first control instruction the user expects to be recognized, while an input operation indicates that it does not.
When the at least one candidate control instruction includes the first control instruction the user expects to be recognized, the user may perform a selection operation on the first control instruction receiving interface, and the second device takes the candidate control instruction selected by the user as the first control instruction. When it does not, the user may perform an input operation on the interface, and the second device takes the control instruction input by the user as the first control instruction.
An embodiment of the present application provides a device control method applied to an information processing system, where the information processing system includes a first device and a second device. Fig. 7 is a schematic flow chart of an implementation of the device control method according to an embodiment of the present application; as shown in fig. 7, the method may include the following steps:
s701, the second device generates a gesture detection instruction and sends the gesture detection instruction to the first device.
S702, the first device receives a gesture detection instruction sent by the second device.
S703, the first device obtains first gesture information by detecting the first gesture action.
S704, the first device sends the first gesture information to the second device.
S705, the second device obtains a target gesture recognition model through the first gesture information.
The target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
The manner of obtaining the target gesture recognition model in S705 includes:
Obtaining mode one: the second device updates the parameters of the reference gesture recognition model through the first gesture information to obtain the target gesture recognition model.
Obtaining mode two: the first device sends the first gesture information to the third device through the second device, the third device obtains the target gesture recognition model through the first gesture information, and the second device receives the target gesture recognition model sent by the third device.
As shown in fig. 8, before S701, the method further includes: S706, the second device displays a gesture guidance interface.
The display content of the gesture guidance interface includes: an image of the first gesture action and a control identifier of the first control instruction.
In the device control method shown in fig. 7, the implementation on the first device may refer to the description of the device control method shown in fig. 3, and the implementation on the second device may refer to the description of the device control method shown in fig. 5, which are not repeated here.
The device control method provided by the embodiment of the application includes the following steps: the second device generates a gesture detection instruction and sends it to the first device; the first device detects the first gesture action based on the received gesture detection instruction to obtain first gesture information and sends the first gesture information to the second device; and the second device obtains, through the first gesture information, a target gesture recognition model capable of recognizing the first gesture action as a first control instruction. In this way, the gesture operation performed by the user is detected by the first device to obtain the first gesture information, and the gesture information of the user's own gesture operation serves as learning data for obtaining a target gesture recognition model that can recognize the current user's first gesture action as the first control instruction, so that the gesture recognition model can accommodate the individual differences of different users and the accuracy of gesture recognition is improved.
The device control method provided in the embodiment of the present application is further described below, taking as an example that the first device is a wireless headset.
The device control method provided by the embodiment of the application can be used in the scenario shown in fig. 9: a user performs a gesture 902 in the space around a wireless headset 901 worn by the user, where the trajectory of the gesture 902 is a trajectory 903 or a trajectory 904; the wireless headset 901 recognizes the user's gesture as a corresponding instruction, is controlled based on the recognized instruction, and executes the corresponding operation. The operations of the wireless headset that can be gesture-controlled may include: powering on and off, volume control, and adjusting the play order.
In practical applications, the volume is controlled through moving-away or approaching gestures, the play order is controlled through forward and backward flicking gestures, and the headset is powered on and off through a special gesture (such as a finger snap).
In some embodiments, the categories of wireless headsets include, in addition to the in-ear headset shown in fig. 9, ear-muff headsets.
In some embodiments, for True Wireless Stereo (TWS) headphones, the spatial gesture recognition device may be present on one of the earphone units or on both earphone units.
The hardware structure of an earphone unit of the wireless headset is shown in fig. 10 and includes: an antenna 1001, a processor 1002, a storage unit 1003, a wireless signal transmitter/receiver 1004, a gesture detection sensor 1005, a speaker and microphone 1006, and a power control device 1007. Wherein,
the number of antennas 1001 may be one or more. A single antenna or multiple antennas may be used for a particular communication module (e.g., Bluetooth). Across the various communication modules, a shared antenna may be used, or a dedicated antenna per communication module, or a combination of both.
The processor 1002 may be a microcontroller unit (MCU) or another processor chip with computing capability. The processor chip may integrate a machine learning accelerator for accelerating machine learning model computation; the machine learning accelerator may be an application-specific integrated circuit (ASIC) such as a neural network processor (NPU).
The storage unit 1003 may be one of, or a combination of, random access memory (RAM), read-only memory (ROM), and electrically erasable programmable read-only memory (EEPROM).
The storage unit 1003 stores audio decoding software and a machine learning model for gesture recognition and instruction conversion, i.e., a gesture control model. The gesture control model is a machine learning inference model used to convert sampled gesture electronic signals into commands executable by the computer. The storage unit 1003 may further store software for transmitting and receiving wireless signals, earphone control software, and other software programs for realizing the earphone functions.
The wireless signal transmitter/receiver 1004 may be a communication module based on the Bluetooth standard, or a communication module using backscattering technology. In practical applications, the wireless signal transmitter/receiver 1004 may integrate special function modules in addition to the basic communication module. A special function module may be an ultra wide band (UWB) module for positioning of the electronic device, a near field communication (NFC) module for identification of the electronic device, or a wireless fidelity (WIFI) module for bulk data exchange.
The gesture detection sensor 1005 is used to convert gesture signals into gesture electronic signals. The gesture detection sensor 1005 may be a sensor module based on electromagnetic waves, such as a radar system operating at 10-100 GHz, or a module based on ultrasonic waves.
In some implementations, the gesture detection sensor may employ a module based on laser technology.
In the device control method, gesture entry is performed in a device-guided manner.
As shown in fig. 11, with the wireless headset connected to the mobile phone, the mobile phone displays gesture entry guidance information to the user through an embedded application program (App) to guide the user's gesture entry. The gesture guidance interface in fig. 11 includes pages 1101, 1102, 1103, 1104, and 1105, each being a page for a different gesture action. The gesture action of page 1101 is: the right hand flicks a finger to the right, with two inputs required; the gesture action of page 1102 is: the right hand flicks a finger forward, with two inputs required; the gesture action of page 1103 is: the right hand approaches the headset, with two inputs required; the gesture action of page 1104 is: the right hand moves away from the headset, with two inputs required; and the gesture action of page 1105 is: a finger snap.
In the embodiment of the application, the gesture entry guidance information displayed on the gesture guidance interface may be a static or dynamic picture.
When multiple gesture actions are entered, the mobile phone may guide continuous entry of the gesture actions in a specific order. For example, when the entry of the gesture action on page 1101 is complete and the entered gesture signals meet the requirement, the mobile phone proceeds to page 1102 for the entry of the next gesture action.
In order to increase the amount of sampled data, i.e., learning data, reduce errors due to nuances in the user's specific gestures, and improve the optimization of the machine learning model, the user may be required to input the same gesture multiple times. As with the entry of gesture actions based on page 1101 in fig. 11, the App may guide the user to make more than one input of the same gesture action.
The memory of the wireless headset contains a gesture recognition model for gesture recognition, and the gesture recognition model is an inference model, i.e., a non-training model. To allow the gesture recognition model to be customized and upgraded for a specific user, the gesture entry process implemented by the mobile phone 121 and the wireless headset 122 is shown in fig. 12.
S1201, the mobile phone 121 sends a gesture recognition detection instruction to the wireless headset 122.
With the mobile phone 121 and the wireless headset 122 wirelessly connected, the mobile phone 121 enters the guidance interface and sends a gesture recognition detection instruction to the wireless headset 122 to notify the wireless headset 122 to enter the gesture recognition state; at this point, the wireless headset 122 starts its gesture detection sensor.
S1202, the wireless headset 122 detects a gesture of the user.
The user performs the gesture actions according to the guidance interface of the mobile phone 121, and the gesture detection sensor in the wireless headset 122 collects the gesture signals and converts them into electric signals, that is, gesture information.
S1203, the wireless headset 122 sends the gesture information of the gesture motion to the mobile phone 121.
The wireless headset 122 transmits the gesture information of the detected user gesture actions to the mobile phone 121. The wireless headset 122 does not itself perform any data analysis on the gesture information.
S1204, the mobile phone 121 updates the original gesture recognition model with the gesture information.
The mobile phone 121 collects the electric signals, checks the gesture signals against the results given by the current gesture recognition model, and updates the original machine learning model with the gesture signals to obtain an updated gesture recognition model, increasing the probability that the updated gesture recognition model produces correct results for this user's gesture signals.
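In machine learning terms, S1204 is a fine-tuning step: the parameters of the original model are updated with the newly collected user samples. Below is a minimal sketch using PyTorch; it assumes, which the application does not state, that the gesture recognition model is a small neural classifier and that each collected gesture signal is paired with the label of the gesture the guide interface requested.

```python
import torch
import torch.nn as nn

NUM_GESTURES = 5     # pages 1101-1105
SAMPLE_WINDOW = 256  # assumed gesture signal length

# Stand-in for the original (reference) gesture recognition model.
model = nn.Sequential(
    nn.Linear(SAMPLE_WINDOW, 64), nn.ReLU(), nn.Linear(64, NUM_GESTURES)
)


def update_model(model, signals, labels, epochs=20, lr=1e-3):
    """Fine-tune the original model on this user's entered gestures.
    signals: (N, SAMPLE_WINDOW) float tensor; labels: (N,) long tensor."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(signals), labels)
        loss.backward()
        optimizer.step()
    return model  # the updated model returned to the headset in S1205


# Usage with the samples collected in S1202-S1203 (random stand-ins here).
signals = torch.randn(10, SAMPLE_WINDOW)
labels = torch.randint(0, NUM_GESTURES, (10,))
updated_model = update_model(model, signals, labels)
```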
S1205, the mobile phone 121 sends the updated gesture recognition model to the wireless headset 122.
The mobile phone 121 transmits the updated gesture recognition model to the wireless headset 122; the wireless headset 122 replaces the gesture recognition model in its memory with the updated one and thereafter recognizes gesture signals according to the updated gesture recognition model.
In some embodiments, in S1204 of fig. 12, after receiving the gesture information, the mobile phone 121 may send the gesture information to a cloud server with greater computing power; the cloud server updates the gesture recognition model using that computing power to obtain an updated gesture recognition model, and then transmits the updated gesture recognition model back to the mobile phone 121.
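A sketch of this offloading is given below. The endpoint URL and payload format are hypothetical; the application defines no cloud API.

```python
import requests

# Hypothetical endpoint of the cloud server.
CLOUD_URL = "https://example.com/gesture-model/update"


def update_on_cloud(gesture_samples, labels):
    """Offload S1204 to the cloud: upload the user's gesture information
    and receive the serialized updated gesture recognition model."""
    response = requests.post(
        CLOUD_URL,
        json={"samples": gesture_samples, "labels": labels},
        timeout=30,
    )
    response.raise_for_status()
    return response.content  # forwarded to the headset in S1205
```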
In some embodiments, the user may define the operation represented by a gesture signal according to his own habits; for example, the finger-snap action may represent turning the active noise reduction function on and off. In this case, the user may select from among recommended gestures, that is, gestures for which the gesture recognition model is already well developed. The user selects the gesture he prefers, and the gesture recognition model in the headset is updated according to the device control method of fig. 12.
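A minimal sketch of such user-defined bindings, with hypothetical gesture and operation names, might be:

```python
# Recommended gestures: those the recognition model already handles well.
RECOMMENDED_GESTURES = ["flick_right", "flick_forward", "approach",
                        "move_away", "finger_snap"]

user_bindings = {}


def bind_gesture(gesture: str, operation: str) -> None:
    """Attach an operation, e.g. toggling active noise reduction,
    to any recommended gesture the user prefers."""
    if gesture not in RECOMMENDED_GESTURES:
        raise ValueError(f"{gesture!r} is not a recommended gesture")
    user_bindings[gesture] = operation


bind_gesture("finger_snap", "toggle_active_noise_reduction")
print(user_bindings)
```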
In practical applications, the gesture recognition model in the headset is an inference model, the end product of the machine learning pipeline: it can convert gesture information into executable control instructions, but the headset cannot train the machine learning model, so the original machine learning model cannot be improved on the headset itself.
The device control method provided by the embodiment of the application makes it possible to optimize the machine learning model in a small device: the gesture information of the individual user is collected and checked against the original machine learning model, and the machine learning model is optimized accordingly, so that a machine learning model built for, and best suited to, the individual user is created, improving the user experience. Further, the contactless headset manipulation completes the control of the headset without interfering with the user's use of the headset, greatly improving the headset experience; it is particularly suitable for controlling the headset during exercise (such as running). In addition, by drawing on the computing power of the intelligent terminal or the cloud server, the accuracy of the machine learning model is improved and a personalized machine learning model is created for the user, while the complexity of the headset's hardware structure is reduced.
Fig. 13 is a schematic structural diagram of a device control apparatus according to an embodiment of the present application, applied to a first device. As shown in fig. 13, the apparatus 1300 includes:
the first receiving module 1301 is configured to receive a gesture detection instruction sent by a second device;
the detecting module 1302 is configured to obtain first gesture information by detecting a first gesture action;
a first sending module 1303, configured to send the first gesture information to the second device; the first gesture information is used for obtaining a target gesture recognition model, and the target gesture recognition model can recognize the first gesture as a first control instruction.
In some embodiments, the first receiving module 1301 is configured to receive the target gesture recognition model sent by the second device.
In some embodiments, if the first gesture information satisfies a correction condition, the first gesture information is used to obtain the target gesture recognition model.
In some embodiments, the correction condition includes at least one of the following (a minimal check is sketched after this list):
the number of executions of the first gesture action corresponding to the first gesture information is greater than a count threshold;
the target gesture recognition model is absent from the first device.
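The check itself is simple; a minimal sketch, with an assumed threshold value and hypothetical names, might be:

```python
COUNT_THRESHOLD = 2  # assumed threshold for the number of executions


def satisfies_correction_condition(execution_count: int,
                                   target_model_present: bool) -> bool:
    """The first gesture information is used to obtain the target model
    if the gesture was executed more than the threshold number of times,
    or if the first device has no target gesture recognition model yet."""
    return execution_count > COUNT_THRESHOLD or not target_model_present
```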
In some embodiments, the detecting module 1302 is further configured to obtain second gesture information by detecting the first gesture action;
and the recognition module is configured to use the second gesture information as the input of the target gesture recognition model to obtain the first control instruction output by the target gesture recognition model, so as to execute the operation corresponding to the first control instruction.
In some embodiments, the detection module 1302 is further configured to:
the first gesture is detected by the following detection form: millimeter wave, laser or ultrasonic;
and converting the first gesture action into the first gesture information according to the detection form.
Fig. 14 is a schematic structural diagram of a device control apparatus according to an embodiment of the present application, applied to a second device. As shown in fig. 14, the apparatus 1400 includes:
a second sending module 1401, configured to generate a gesture detection instruction, and send the gesture detection instruction to the first device;
a second receiving module 1402, configured to receive first gesture information sent by the first device;
an obtaining module 1403, configured to obtain a target gesture recognition model according to the first gesture information; the target gesture recognition model is capable of recognizing the first gesture motion as a first control instruction.
In some embodiments, the first gesture information satisfies a correction condition.
In some embodiments, the correction condition includes at least one of:
the number of executions of the first gesture action corresponding to the first gesture information is greater than a count threshold;
the target gesture recognition model is absent from the first device.
In some embodiments, the obtaining module 1403 is further configured to:
update the parameters of a reference gesture recognition model through the first gesture information and the first control instruction to obtain the target gesture recognition model.
In some embodiments, the obtaining module 1403 is further configured to:
send the first gesture information and the first control instruction to a third device;
and receive the target gesture recognition model sent by the third device.
In some embodiments, the second sending module 1401 is further configured to:
send the target gesture recognition model to the first device.
In some embodiments, the apparatus 1400 further comprises: a display module for:
displaying a gesture guidance interface; the display content of the gesture guidance interface comprises: the image of the first gesture action and the control identifier of the first control instruction.
It should be noted that the device control apparatus provided in the embodiment of the present application, including each of the units it comprises, may be implemented by a processor in an electronic device; of course, it may also be implemented by a specific logic circuit. In the implementation process, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the device control method is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the portions contributing to the related art, may be embodied in the form of a software product stored in a storage medium and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor executes the computer program to implement the steps in the device control method provided in the foregoing embodiment. The electronic device may be a first device or a second device.
Accordingly, embodiments of the present application provide a storage medium, that is, a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the device control method provided in the above embodiments.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that fig. 15 is a schematic diagram of the hardware entity of an electronic device according to an embodiment of the present application. As shown in fig. 15, the electronic device 1500 includes: a processor 1501, at least one communication bus 1502, at least one external communication interface 1504, and a memory 1505. The communication bus 1502 is configured to enable connection and communication between these components. In one example, the electronic device 1500 further includes a user interface 1503, where the user interface 1503 may comprise a display screen, and the external communication interface 1504 may comprise a standard wired interface and a wireless interface.
The memory 1505 is configured to store instructions and applications executable by the processor 1501, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 1501 and the modules in the electronic device; it may be implemented by a FLASH memory (FLASH) or a Random Access Memory (RAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the related art may be embodied in the form of a software product stored in a storage medium, and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A device control method, applied to a first device, the method comprising:
receiving a gesture detection instruction sent by a second device;
acquiring first gesture information by detecting a first gesture action;
sending the first gesture information to the second device;
wherein the first gesture information is used for obtaining a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
2. The method of claim 1, further comprising:
and receiving the target gesture recognition model sent by the second device.
3. The method of claim 1, wherein the first gesture information is used to obtain the target gesture recognition model if the first gesture information satisfies a correction condition.
4. The method of claim 3, wherein the correction condition comprises at least one of:
the number of executions of the first gesture action corresponding to the first gesture information is greater than a count threshold;
the target gesture recognition model is absent from the first device.
5. The method of claim 1, further comprising:
obtaining second gesture information by detecting the first gesture action;
and taking the second gesture information as the input of the target gesture recognition model to obtain the first control instruction output by the target gesture recognition model so as to execute the operation corresponding to the first control instruction.
6. The method of claim 1, wherein the detecting the first gesture action to obtain first gesture information comprises:
detecting the first gesture action in one of the following detection forms: millimeter wave, laser, or ultrasonic wave;
and converting the first gesture action into the first gesture information according to the detection form.
7. A device control method, applied to a second device, the method comprising:
generating a gesture detection instruction, and sending the gesture detection instruction to a first device;
receiving first gesture information sent by the first device;
obtaining a target gesture recognition model through the first gesture information; wherein the target gesture recognition model is capable of recognizing a first gesture action as a first control instruction.
8. The method of claim 7, wherein the first gesture information satisfies a correction condition.
9. The method of claim 8, wherein the correction condition comprises at least one of:
the number of executions of the first gesture action corresponding to the first gesture information is greater than a count threshold;
the target gesture recognition model is not present in the first device.
10. The method according to any one of claims 7 to 9, wherein the obtaining a target gesture recognition model through the first gesture information comprises:
and updating parameters of a reference gesture recognition model through the first gesture information and the first control instruction to obtain the target gesture recognition model.
11. The method according to any one of claims 7 to 9, wherein the obtaining a target gesture recognition model through the first gesture information comprises:
sending the first gesture information and the first control instruction to a third device;
receiving the target gesture recognition model sent by the third device.
12. The method of claim 7, further comprising:
and sending the target gesture recognition model to the first device.
13. The method of claim 7, further comprising:
displaying a gesture guidance interface; the display content of the gesture guidance interface comprises: the image of the first gesture action and the control identifier of the first control instruction.
14. An apparatus for controlling a device, applied to a first device, the apparatus comprising:
the first receiving module is used for receiving a gesture detection instruction sent by a second device;
the detection module is used for acquiring first gesture information by detecting a first gesture action;
the first sending module is used for sending the first gesture information to the second device;
wherein the first gesture information is used for obtaining a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture action as a first control instruction.
15. An apparatus for controlling a device, applied to a second device, the apparatus comprising:
the second sending module is used for generating a gesture detection instruction and sending the gesture detection instruction to a first device;
the second receiving module is used for receiving first gesture information sent by the first device;
the obtaining module is used for obtaining a target gesture recognition model through the first gesture information; wherein the target gesture recognition model is capable of recognizing a first gesture action as a first control instruction.
16. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the device control method according to any one of claims 1 to 6 or implements the steps of the device control method according to any one of claims 7 to 13 when executing the computer program.
17. A storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the device control method of any one of claims 1 to 6, or implements the device control method of any one of claims 7 to 13.
CN202011195754.9A 2020-10-30 2020-10-30 Equipment control method and device, equipment and storage medium Active CN112256135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011195754.9A CN112256135B (en) 2020-10-30 2020-10-30 Equipment control method and device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112256135A true CN112256135A (en) 2021-01-22
CN112256135B CN112256135B (en) 2024-08-06

Family

ID=74267346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011195754.9A Active CN112256135B (en) 2020-10-30 2020-10-30 Equipment control method and device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112256135B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595003A (en) * 2018-04-23 2018-09-28 Oppo广东移动通信有限公司 Function control method and relevant device
CN109862274A (en) * 2019-03-18 2019-06-07 北京字节跳动网络技术有限公司 Earphone with camera function, the method and apparatus for exporting control signal
CN110505549A (en) * 2019-08-21 2019-11-26 Oppo(重庆)智能科技有限公司 The control method and device of earphone

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112965639A (en) * 2021-03-17 2021-06-15 北京小米移动软件有限公司 Gesture recognition method and device, electronic equipment and storage medium
CN114415825A (en) * 2021-12-13 2022-04-29 珠海格力电器股份有限公司 Control method, control device, electronic equipment and storage medium
CN114415825B (en) * 2021-12-13 2024-05-31 珠海格力电器股份有限公司 Control method, control device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112256135B (en) 2024-08-06

Similar Documents

Publication Publication Date Title
KR102270394B1 (en) Method, terminal, and storage medium for recognizing an image
CN108519871B (en) Audio signal processing method and related product
CN110364145B (en) Voice recognition method, and method and device for sentence breaking by voice
CN107613131B (en) Application program disturbance-free method, mobile terminal and computer-readable storage medium
WO2019052293A1 (en) Machine translation method and apparatus, computer device and storage medium
CN108646971B (en) Screen sounding control method and device and electronic device
CN110890093A (en) Intelligent device awakening method and device based on artificial intelligence
CN108008858B (en) Terminal control method and mobile terminal
KR20150087023A (en) Mobile terminal and method for controlling the same
WO2019105376A1 (en) Gesture recognition method, terminal and storage medium
CN112751648B (en) Packet loss data recovery method, related device, equipment and storage medium
CN112256135B (en) Equipment control method and device, equipment and storage medium
KR20180096182A (en) Electronic device and method for controlling the same
CN111370018A (en) Audio data processing method, electronic device and medium
CN111522592A (en) Intelligent terminal awakening method and device based on artificial intelligence
CN109189360B (en) Screen sounding control method and device and electronic device
CN109067965A (en) Interpretation method, translating equipment, wearable device and storage medium
CN113096640A (en) Voice synthesis method and device, electronic equipment and storage medium
CN110399474B (en) Intelligent dialogue method, device, equipment and storage medium
CN109164908B (en) Interface control method and mobile terminal
CN111158487A (en) Man-machine interaction method for interacting with intelligent terminal by using wireless earphone
CN114065168A (en) Information processing method, intelligent terminal and storage medium
WO2016206646A1 (en) Method and system for urging machine device to generate action
CN111639209B (en) Book content searching method, terminal equipment and storage medium
CN113326018A (en) Processing method, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant