CN108089694A - A kind of intelligent control method and equipment - Google Patents

A kind of intelligent control method and equipment

Info

Publication number
CN108089694A
CN108089694A
Authority
CN
China
Prior art keywords
data
motion
control
angle
description data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611048815.2A
Other languages
Chinese (zh)
Inventor
孙晓路
董泽贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ninebot Beijing Technology Co Ltd
Original Assignee
Ninebot Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ninebot Beijing Technology Co Ltd filed Critical Ninebot Beijing Technology Co Ltd
Priority to CN201611048815.2A priority Critical patent/CN108089694A/en
Publication of CN108089694A publication Critical patent/CN108089694A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

The present invention relates to the field of intelligent control, and discloses an intelligent control method and device to solve the prior-art technical problem that controlling a robot is not convenient enough. The method includes: a first device in a first working mode receives first control data sent by a second device, the first control data being determined by the second device based on first motion posture description data of the second device; at least one controlled unit in the first working mode is determined based on the first control data; second motion posture description data for the at least one controlled unit is obtained, and a first control instruction for the corresponding controlled unit is generated based on the second motion posture description data, where the second motion posture description data is generated based on the first control data; and the first control instruction is responded to, so that the first device controls the at least one controlled unit. The technical effect that the first device can be controlled through the second device is thereby achieved.

Description

Intelligent control method and device
Technical Field
The invention relates to the field of intelligent control, in particular to an intelligent control method and intelligent control equipment.
Background
With the progress of society, people's living standards are continuously improving and the demand for quality of life keeps rising; robots have emerged in all aspects of life, and various robots are being promoted. A robot is a machine device that executes work automatically. It can accept human commands, run programs programmed in advance, and also act according to principles formulated with artificial-intelligence techniques. The task of the robot is to assist or replace human work, for example in the production industry, the construction industry, or dangerous work, and robots play a great role in daily life and social production.
In the prior art, a robot can be controlled in various ways, such as gesture control, voice control, remote-controller control, and the like. The inventors found that at least the following problems exist in the prior art: detection of human gestures by the robot is not convenient enough; voice control is subject to distance limitations; and when control is carried out by a remote controller, the remote controller must be carried at all times and the remote-control form is single.
Disclosure of Invention
The invention provides a control method and control equipment, which are used for solving the technical problem that the control of a robot is not convenient enough in the prior art.
In a first aspect, an embodiment of the present invention provides an intelligent control method, applied to a first device, including:
the first device in a first working mode receives first control data sent by a second device, wherein the first control data is determined by the second device based on first motion posture description data of the second device;
determining at least one controlled unit in the first operating mode based on the first control data;
obtaining second motion gesture description data for the at least one controlled unit, and generating a first control instruction corresponding to the controlled unit based on the second motion gesture description data, wherein the second motion gesture description data is generated based on the first control data;
responding to the first control instruction to control at least one controlled unit of the first device.
Optionally, if the first control data is the first motion gesture description data, after the first device in the first operation mode receives the first control data sent by the second device, the method further includes:
determining a database of specific motion gesture description data;
judging whether the first motion posture description data is located in the database;
wherein the step of determining at least one controlled unit in the first operating mode based on the first control data is performed if the first motion posture description data is located in the database;
if the first motion posture description data is not located in the database, the step of determining at least one controlled unit in the first operating mode based on the first control data is not performed.
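For illustration only, the database gate described in the steps above could be sketched as follows; the shape of the gesture signatures and all identifiers are hypothetical assumptions, not values from the patent:

```python
# Sketch of the "is the gesture in the database?" check. The gesture
# signatures below are hypothetical discretized labels, not patent data.
GESTURE_DATABASE = {
    ("pitch_up", "yaw_steady"),
    ("pitch_down", "yaw_steady"),
    ("pitch_steady", "yaw_left"),
    ("pitch_steady", "yaw_right"),
}

def should_determine_controlled_unit(first_motion_gesture):
    """Return True only if the received first motion posture description
    data is in the database; only then is the step of determining the
    controlled unit performed."""
    return first_motion_gesture in GESTURE_DATABASE
```

A gesture outside the database is simply ignored, which filters out incidental movements of the second device.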
Optionally, the determining at least one controlled unit in the first operating mode based on the first control data includes:
and searching and obtaining the at least one controlled unit from the preset corresponding relation between the control data and the controlled unit based on the first control data.
Optionally, the first motion gesture description data includes: first angle data of the second device relative to a first preset plane and second angle data of the second device relative to a second preset plane, wherein the at least one controlled unit is a motion chassis of the first device, and the responding to the first control instruction is used for controlling the at least one controlled unit of the first device, and the method comprises the following steps:
if the first motion attitude description data shows that the first angle data is increased from a first angle value to a second angle value, and the variation of the second angle data is smaller than a first preset angle value, controlling the motion chassis to drive the first equipment to move forward towards the second equipment through the first control instruction; or,
if the first motion attitude description data shows that the first angle data is reduced from a first angle value to a third angle value, and the variation of the second angle data is smaller than a first preset angle value, controlling the motion chassis to drive the first equipment to be far away from the second equipment through the first control instruction; or,
if the first motion attitude description data shows that the variation of the first angle data is smaller than a second preset angle value, the second angle data is increased from a fourth angle value to a fifth angle value, and the motion chassis is controlled by the first control instruction to drive the first equipment to rotate to the left; or,
if the first motion attitude description data shows that the variation of the first angle data is smaller than a second preset angle value, and the second angle data is reduced from the fourth angle value to a sixth angle value, the motion chassis is controlled by the first control instruction to drive the first equipment to rotate rightwards.
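The four chassis rules above can be sketched numerically as follows. The thresholds, the minimum deliberate-gesture change, and the assumption that the leftward and rightward cases correspond to opposite signs of change in the second angle are all illustrative assumptions, not values from the patent:

```python
# Hypothetical numeric sketch of the motion-chassis rules.
FIRST_PRESET_ANGLE = 5.0   # max drift allowed in the second angle (degrees)
SECOND_PRESET_ANGLE = 5.0  # max drift allowed in the first angle (degrees)
MIN_CHANGE = 10.0          # minimum change counted as a deliberate gesture

def chassis_command(first_old, first_new, second_old, second_new):
    """Map the change in the two angles of the second device to a
    motion-chassis command for the first device."""
    d_first = first_new - first_old
    d_second = second_new - second_old
    if abs(d_second) < FIRST_PRESET_ANGLE:
        if d_first >= MIN_CHANGE:        # first angle increased
            return "move_toward_second_device"
        if d_first <= -MIN_CHANGE:       # first angle decreased
            return "move_away_from_second_device"
    if abs(d_first) < SECOND_PRESET_ANGLE:
        if d_second >= MIN_CHANGE:       # second angle increased
            return "rotate_left"
        if d_second <= -MIN_CHANGE:      # second angle decreased (assumed)
            return "rotate_right"
    return "no_op"
```

For example, tilting the second device so that the first angle rises by 20 degrees while the second angle drifts by only 2 degrees yields the forward command.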
Optionally, the first motion gesture description data includes: third angle data of the second device relative to a third preset plane and fourth angle data of the second device relative to a fourth preset plane, wherein the at least one controlled unit is a pan-tilt head of the first device, and the responding to the first control instruction controls the at least one controlled unit of the first device, including:
if the first motion attitude description data shows that the third angle data is increased from a seventh angle value to an eighth angle value, and the variation of the fourth angle data is smaller than a third preset angle value, adjusting the pitch angle of the holder through the first control command; or,
if the first motion attitude description data shows that the variation of the third angle data is smaller than a fourth preset angle value, the fourth angle data is increased from a ninth angle value to a tenth angle value, and the yaw angle of the holder is adjusted through the first control instruction; or,
and if the first motion attitude description data shows that the variation of the third angle data is smaller than a fourth preset angle value, the fourth angle data is reduced from a ninth angle value to an eleventh angle value, and the roll angle of the holder is adjusted through the first control command.
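The pan-tilt rules above can be sketched in the same illustrative style; the thresholds and function names are assumptions, while the mapping (third angle rises with fourth steady: pitch; fourth rises with third steady: yaw; fourth falls with third steady: roll) follows the text:

```python
# Hypothetical sketch of the pan-tilt rules, driven by angle deltas.
THIRD_PRESET_ANGLE = 5.0   # max drift allowed in the fourth angle
FOURTH_PRESET_ANGLE = 5.0  # max drift allowed in the third angle
MIN_CHANGE = 10.0          # minimum deliberate change

def pan_tilt_command(d_third, d_fourth):
    """Map changes in the third/fourth angle data to a pan-tilt
    adjustment, or None if no rule matches."""
    if abs(d_fourth) < THIRD_PRESET_ANGLE and d_third >= MIN_CHANGE:
        return "adjust_pitch"
    if abs(d_third) < FOURTH_PRESET_ANGLE:
        if d_fourth >= MIN_CHANGE:
            return "adjust_yaw"
        if d_fourth <= -MIN_CHANGE:
            return "adjust_roll"
    return None
```

Note that yaw and roll are distinguished only by the sign of the change in the fourth angle, mirroring the ninth-to-tenth versus ninth-to-eleventh angle-value cases.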
Optionally, the method further includes:
entering a machine learning state;
detecting and obtaining motion gesture description data generated by a user of the second device in the machine learning state;
detecting and obtaining a control instruction set based on the motion attitude description data;
based on the detected motion attitude description data and the control instruction, establishing a corresponding relation between the motion attitude and the control instruction, and setting a first working mode of the first device corresponding to the corresponding relation between the motion attitude and the control instruction.
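The machine-learning state above amounts to recording, during a teaching session, which gesture the user produced and which control instruction was set for it, then freezing the correspondence as a working mode. A minimal sketch, with all identifiers hypothetical:

```python
# Minimal sketch of the teaching/learning state described above.
class GestureTeachingSession:
    def __init__(self):
        # learned correspondence: motion gesture -> control instruction
        self.correspondence = {}

    def record(self, gesture_description, control_instruction):
        """Pair a detected gesture with the instruction set for it."""
        self.correspondence[gesture_description] = control_instruction

    def as_working_mode(self, mode_name):
        """Freeze the learned correspondence as a named working mode."""
        return {"mode": mode_name, "map": dict(self.correspondence)}

session = GestureTeachingSession()
session.record("rotate_about_bottom_frame", "chassis_forward")
mode = session.as_working_mode("first_working_mode")
```

The same sketch applies whether the learning is performed on the first device or the second, since both aspects describe the identical recording steps.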
In a second aspect, an embodiment of the present invention provides an intelligent control method, applied to a second device, including:
detecting and obtaining first motion gesture description data of the second device;
and obtaining first control data based on the first motion attitude description data and sending the first control data to second equipment in a first working mode, so that the second equipment determines at least one controlled unit in the first working mode and a first control instruction for controlling the at least one controlled unit based on the first control data, wherein the first control instruction is used for controlling the at least one controlled unit by the first equipment.
Optionally, before the obtaining first control data based on the first motion gesture description data, the method further includes:
determining a database of specific motion gesture description data;
judging whether the first motion posture description data is located in the database;
wherein the step of obtaining first control data based on the first motion gesture description data is performed if the first motion gesture description data is located in the database;
if the first motion gesture description data is not located in the database, the step of obtaining first control data based on the first motion gesture description data is not performed.
Optionally, the method further includes:
entering a machine learning state;
detecting and obtaining motion gesture description data generated by a user of the second device in the machine learning state;
detecting and obtaining a control instruction set based on the motion attitude description data;
based on the detected motion attitude description data and the control instruction, establishing a corresponding relation between the motion attitude and the control instruction, and setting a first working mode of the first device corresponding to the corresponding relation between the motion attitude and the control instruction.
In a third aspect, an embodiment of the present invention provides a first device, including:
the receiving module is used for receiving first control data sent by second equipment by the first equipment in a first working mode, wherein the first control data is determined by the second equipment based on first motion posture description data of the second equipment;
a first determining module for determining at least one controlled unit in the first operating mode based on the first control data;
an obtaining module, configured to obtain second motion gesture description data for the at least one controlled unit, and generate a first control instruction corresponding to the controlled unit based on the second motion gesture description data, where the second motion gesture description data is generated based on the first control data;
and the response module is used for responding to the first control instruction so as to control at least one controlled unit of the first equipment.
Optionally, if the first control data is the first motion gesture description data, the first device further includes:
a second determination module for determining a database of specific motion gesture description data;
the first judgment module is used for judging whether the first motion posture description data is positioned in the database;
wherein the step of determining at least one controlled unit in the first operating mode based on the first control data is performed by the first determining module if the first motion posture description data is located in the database;
if the first motion gesture description data is not located in the database, the step of determining at least one controlled unit in the first operating mode based on the first control data is not performed by the first determining module.
Optionally, the first determining module is configured to:
and searching and obtaining the at least one controlled unit from the preset corresponding relation between the control data and the controlled unit based on the first control data.
Optionally, the first motion gesture description data includes: first angle data of the second device with respect to a first preset plane and second angle data with respect to a second preset plane, the at least one controlled unit being a motion chassis of the first device, the response module comprising:
the first control unit is used for controlling the motion chassis to drive the first equipment to move towards the direction of the second equipment through the first control instruction if the first motion posture description data indicate that the first angle data is increased from a first angle value to a second angle value, and the variation of the second angle data is smaller than a first preset angle value; or,
the second control unit is used for controlling the motion chassis to drive the first equipment to be far away from the second equipment through the first control instruction if the first motion attitude description data shows that the first angle data is reduced from a first angle value to a third angle value and the variation of the second angle data is smaller than a first preset angle value; or,
a third control unit, configured to, if the first motion posture description data indicates that a variation of the first angle data is smaller than a second preset angle value, increase the second angle data from a fourth angle value to a fifth angle value, and control the motion chassis to drive the first device to rotate left through the first control instruction; or,
and the fourth control unit is used for controlling, through the first control instruction, the motion chassis to drive the first equipment to rotate rightwards if the first motion attitude description data shows that the variation of the first angle data is smaller than a second preset angle value and the second angle data is reduced from the fourth angle value to a sixth angle value.
Optionally, the first motion gesture description data includes: third angle data of the second device relative to a third preset plane and fourth angle data of the second device relative to a fourth preset plane, the at least one controlled unit being a pan-tilt of the first device, the response module comprising:
the fifth control unit is used for adjusting the pitch angle of the holder through the first control instruction if the first motion attitude description data indicates that the third angle data is increased from a seventh angle value to an eighth angle value and the variation of the fourth angle data is smaller than a third preset angle value; or,
a sixth control unit, configured to, if the first motion attitude description data indicates that a variation of the third angle data is smaller than a fourth preset angle value, increase the fourth angle data from a ninth angle value to a tenth angle value, and adjust the yaw angle of the pan/tilt head through the first control instruction; or,
and the seventh control unit is used for adjusting the rolling angle of the holder through the first control instruction if the first motion attitude description data shows that the variation of the third angle data is smaller than a fourth preset angle value and the fourth angle data is reduced from a ninth angle value to an eleventh angle value.
Optionally, the first device further includes:
the first learning module is used for entering a machine learning state;
the first detection module is used for detecting and obtaining motion gesture description data generated by a user of the second equipment in the machine learning state;
the second detection module is used for detecting and obtaining a control instruction set based on the motion posture description data;
the first establishing module is used for establishing a corresponding relation between the motion gesture and the control instruction based on the motion gesture description data and the control instruction obtained through detection, and setting a first working mode of the first device corresponding to the corresponding relation between the motion gesture and the control instruction.
In a fourth aspect, an embodiment of the present invention provides a second device, including:
the third detection module is used for detecting and obtaining first motion posture description data of the second equipment;
the sending module is configured to obtain first control data based on the first motion posture description data and send the first control data to a second device in a first working mode, so that the second device determines at least one controlled unit in the first working mode and a first control instruction for controlling the at least one controlled unit based on the first control data, where the first control instruction is used for controlling the at least one controlled unit by the first device.
Optionally, the second device further includes:
a third determination module for determining a database of specific motion gesture description data;
the second judgment module is used for judging whether the first motion posture description data is positioned in the database;
wherein if the first motion gesture description data is located in the database, the step of obtaining first control data based on the first motion gesture description data is performed by the sending module;
and if the first motion gesture description data is not located in the database, not executing the step of obtaining first control data based on the first motion gesture description data through the sending module.
Optionally, the second device further includes:
the second learning module is used for entering a machine learning state;
the fourth detection module is used for detecting and obtaining motion gesture description data generated by a user of the second equipment in the machine learning state;
the fifth detection module is used for detecting and obtaining a control instruction set based on the motion posture description data;
and the second establishing module is used for establishing a corresponding relation between the motion attitude and the control instruction based on the motion attitude description data and the control instruction obtained by detection, and setting a first working mode of the first equipment corresponding to the corresponding relation between the motion attitude and the control instruction.
The invention has the following beneficial effects:
In the embodiment of the present invention, the second device detects and obtains its first motion posture description data, and sends first control data, obtained based on that data, to the first device. After the first device in the first working mode receives the first control data, it can determine at least one controlled unit in the first working mode based on the first control data, obtain second motion posture description data for the at least one controlled unit, and generate a first control instruction for the corresponding controlled unit based on the second motion posture description data, where the second motion posture description data is generated based on the first control data; finally, the first control instruction is responded to so as to control the at least one controlled unit of the first device. Based on this scheme, the technical effect that the first device can be controlled through the second device is achieved. Compared with gesture control, the robot does not need a gesture-detection function; compared with voice control, the scheme has no distance limitation; and compared with remote-controller control, no remote controller needs to be carried, since only the second device, as a device with a posture-detection function, is required. The technical effect of more convenient control of the first device is thus achieved.
Drawings
Fig. 1 is a flowchart of an intelligent control method according to a first aspect of the present invention;
FIG. 2 illustrates a first schematic diagram of controlling a first device based on a motion gesture in accordance with an embodiment of the present invention;
FIG. 3 illustrates a second schematic diagram of controlling a first device based on a motion gesture in accordance with an embodiment of the present invention;
FIG. 4 illustrates a third schematic diagram of controlling a first device based on a motion gesture in accordance with an embodiment of the present invention;
FIG. 5 illustrates a fourth schematic diagram of controlling a first device based on a motion gesture in accordance with an embodiment of the present invention;
FIG. 6 is a flow chart of a method of intelligent control according to a second aspect of the present invention;
FIG. 7 is a block diagram of a first apparatus according to a third aspect of an embodiment of the invention;
fig. 8 is a block diagram of a second apparatus of the fourth aspect of the embodiment of the present invention.
Detailed Description
The invention provides a control method and control equipment, which are used for solving the technical problem that the control of a robot is not convenient enough in the prior art.
In order to solve the technical problems, the general idea of the embodiment of the present application is as follows:
after the first device in the first working mode receives first control data sent by the second device, at least one controlled unit in the first working mode can be determined based on the first control data; second motion posture description data for the at least one controlled unit is obtained, and a first control instruction for the corresponding controlled unit is generated based on the second motion posture description data, where the second motion posture description data is generated based on the first control data; finally, the first control instruction is responded to so as to control the at least one controlled unit of the first device. Based on this scheme, the technical effect that the first device can be controlled through the second device is achieved. Compared with gesture control, the robot does not need a gesture-detection function; compared with voice control, the scheme has no distance limitation; and compared with remote-controller control, no remote controller needs to be carried, since only the second device, as a device with a posture-detection function, is required, so the first device is controlled more conveniently.
In order to better understand the technical solutions of the present invention, the following detailed descriptions of the technical solutions of the present invention are provided with the accompanying drawings and the specific embodiments, and it should be understood that the specific features in the embodiments and the examples of the present invention are the detailed descriptions of the technical solutions of the present invention, and are not limitations of the technical solutions of the present invention, and the technical features in the embodiments and the examples of the present invention may be combined with each other without conflict.
In a first aspect, an embodiment of the present invention provides an intelligent control method applied to a first device, please refer to fig. 1, including:
step S101: the first device in a first working mode receives first control data sent by a second device, wherein the first control data is determined by the second device based on first motion posture description data of the second device;
step S102: determining at least one controlled unit in the first operating mode based on the first control data;
step S103: obtaining second motion gesture description data for the at least one controlled unit, and generating a first control instruction corresponding to the controlled unit based on the second motion gesture description data, wherein the second motion gesture description data is generated based on the first control data;
step S104: responding to the first control instruction to control at least one controlled unit of the first device.
For example, the first device is typically a controlled device, such as: unmanned aerial vehicles, balance cars, robots, etc., the second device is typically a control device, such as: cell phones, tablet computers, smart watches, other controller modalities, and the like.
In step S101, a motion attitude sensor may be mounted on the second device, for example: a three-axis gyroscope, a three-axis accelerometer, a three-axis electronic compass, and the like. When the user of the second device needs to control the first device, a corresponding first motion gesture can be produced, and the second device detects and obtains the first motion gesture description data. The first motion gesture description data comprises two parts: one part is motion gesture description data used for specifying the controlled unit, and the other part is the second motion gesture description data. After detecting the first motion gesture description data, the second device may directly send it to the first device as the first control data; alternatively, the second device may divide the first motion gesture description data into the two parts, determine the corresponding controlled unit from the part that specifies the controlled unit, and then send, as the first control data, a control instruction specifying that controlled unit together with the second motion gesture description data to the first device.
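The two packaging options just described can be sketched as follows; the field names and the single-entry lookup are hypothetical illustrations, not patent terminology:

```python
# Sketch of the two ways the second device can package first control data.
def send_raw(first_motion_gesture):
    """Option 1: forward the raw first motion gesture description data
    as the first control data; the first device does all resolution."""
    return {"control_data": first_motion_gesture}

def send_split(first_motion_gesture):
    """Option 2: split the gesture data, resolve the controlled unit
    locally, and send a unit selector plus the remaining gesture part."""
    unit_part = first_motion_gesture["unit_selector"]
    command_part = first_motion_gesture["command_part"]
    # hypothetical local correspondence on the second device
    unit = {"rotate_about_bottom_frame": "motion_chassis"}.get(unit_part)
    return {"controlled_unit": unit, "control_data": command_part}
```

Option 2 trades a little work on the second device for a simpler message that the first device can act on without its own lookup.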
The controlled unit of the first device may be any of various controlled units, such as a motion chassis, a camera pan-tilt, and the like. For example, if the first device is a balance vehicle, a robot, or the like, the controlled unit includes, for example, the motion chassis of the balance vehicle or of the robot; for another example, if the first device includes an image acquisition apparatus connected to it through a pan-tilt head, the controlled unit includes the camera pan-tilt.
In a specific implementation process, a corresponding relationship between the controlled unit and the control data may be preset, and taking the control data as the motion posture description data as an example, the corresponding relationship shown in table 1 may be established:
TABLE 1

Motion gesture description data                              | Controlled unit
The second device rotates with the bottom frame as the axis  | Motion chassis
The second device rotates with the top frame as the axis     | Camera pan-tilt
The second device rotates with the left frame as the axis    | Motion chassis and camera pan-tilt
Then, after the first motion posture description data is detected, the motion posture description data in which the controlled unit portion is specified may be extracted, and then the controlled unit specified by the motion posture description data may be determined in the above correspondence relationship.
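The correspondence of Table 1 and the extract-then-look-up step just described can be sketched as a plain mapping; the key strings and unit names are illustrative assumptions:

```python
# Table 1 as a lookup table (hypothetical keys and unit names).
CONTROL_DATA_TO_UNITS = {
    "rotate_about_bottom_frame": ["motion_chassis"],
    "rotate_about_top_frame": ["camera_pan_tilt"],
    "rotate_about_left_frame": ["motion_chassis", "camera_pan_tilt"],
}

def controlled_units(unit_selector_part):
    """Step S102: given the part of the first motion gesture description
    data that specifies the controlled unit, return the unit(s)."""
    return CONTROL_DATA_TO_UNITS.get(unit_selector_part, [])
```

An unrecognized selector returns an empty list, i.e. no controlled unit is determined and no control instruction is generated.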
The second device may start the motion gesture sensor to detect when the first device needs to be controlled, and the motion gesture sensor may also detect motion gesture description data of the second device at any time.
The first device may include a plurality of different operation modes, and the first operation mode may be any one of the plurality of different operation modes, and the following description will be given by exemplifying several operation modes of the first device, and of course, in the implementation process, the first device is not limited to the following operation modes.
First, a following mode in which the first device follows the second device to move within a preset tracking range of the second device.
For example, the preset tracking range may be a preset distance range, a preset angle range, and the like. The preset distance range is, for example, within 3 m, within 5 m, or from 4 m to 6 m. The angle between the first device and the second device is, for example, the angle between a first surface of the first device (for example, the plane in which the head of the first device lies) and a first surface of the second device (for example, its control interface); the preset angle range is, for example, 90 degrees to 120 degrees, or 90 degrees, and the like. The preset tracking range may be set manually by the user of the second device (or of the first device), or may be set by the system.
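Under the example ranges above, the following-mode range check could be sketched like this; the default threshold values are taken from the examples in the text, while the function shape is an assumption:

```python
# Sketch of the preset tracking-range check in the following mode:
# the first device follows only while distance and angle stay in range.
def within_tracking_range(distance_m, angle_deg,
                          max_distance=5.0,
                          angle_range=(90.0, 120.0)):
    """True if the first device is within the preset distance range
    ("within 5 m") and angle range ("90 to 120 degrees")."""
    lo, hi = angle_range
    return distance_m <= max_distance and lo <= angle_deg <= hi
```

When the check fails, the first device would move so as to re-enter the preset tracking range of the second device.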
And secondly, a following snapshot mode, wherein in the following snapshot mode, the first equipment moves along with the second equipment within a preset following range of the second equipment and acquires an image.
For example, the first device has a built-in or externally connected image capturing apparatus. In the following snapshot mode, the first device may be controlled to move within the preset range of the second device so as to capture images of the second device. The image capturing process may be performed automatically or after a control instruction generated by the second device is received, which is not limited in the embodiment of the present invention.
And the third mode is a riding mode, wherein in the riding mode, a user rides the first device and controls the first device to move.
For example, a user riding the first device may carry the second device and control the first device in the riding mode through the second device; alternatively, the user riding the first device may not carry the second device, and another user carries the second device and achieves control over the first device through it.
Fourth, a remote control mode in which the first device is remotely controlled to move by the second device. In this case, the first device may be controlled by the second device to perform any operation, such as: motion, flip, image capture, etc.
In step S102, as can be seen from the foregoing description, the correspondence relationship between the control data and the controlled units may be pre-established; after the first control data is obtained, it may be looked up directly in this correspondence relationship to obtain the corresponding controlled unit(s). For example: if the first control data comprises motion attitude description data of the second device rotating with the bottom frame as the rotation axis, the corresponding controlled unit is determined to be the motion chassis; if the first control data comprises motion attitude description data of the second device rotating with the left frame as the rotation axis, the corresponding controlled units are determined to be the motion chassis, the shooting pan-tilt head, and so on.
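The lookup described in step S102 amounts to a table search; a minimal sketch follows, with hypothetical key and unit names that are not part of the claimed method.

```python
# Hypothetical pre-established correspondence between motion posture
# description data and controlled units, modeled as a dictionary.
CORRESPONDENCE = {
    "rotate_about_bottom_frame": ["motion_chassis"],
    "rotate_about_top_frame": ["shooting_pan_tilt"],
    "rotate_about_left_frame": ["motion_chassis", "shooting_pan_tilt"],
}

def controlled_units(first_control_data):
    # Look the control data up directly in the correspondence table;
    # unknown data maps to no controlled unit.
    return CORRESPONDENCE.get(first_control_data, [])

print(controlled_units("rotate_about_left_frame"))
```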
In step S102, if the first control data is the first motion gesture description data, the at least one controlled unit may be determined based on the first control data directly after the first control data is obtained, as an optional embodiment, if the first control data is the first motion gesture description data, after the first device in the first operation mode receives the first control data sent by the second device, the method further includes: determining a database of specific motion gesture description data; judging whether the first motion posture description data is located in the database; wherein the step of determining at least one controlled unit in the first operating mode based on the first control data is performed if the first motion profile description data is located in the database; if the first motion profile description data is not located in the database, the step of determining at least one controlled unit in the first operating mode based on the first control data is not performed.
For example, the motion posture description data detected by the second device may be used to control the first device, or may be used to control the second device, in this case, a database of specific motion posture description data may be established, the database stores the motion posture description data for controlling the first device, after the first device obtains the first motion posture description data, the first device may be matched with the motion posture description data in the database, and if the matching is successful, the first motion posture description data is used to control the first device, so that the subsequent steps may be performed; if the matching is unsuccessful, the first motion gesture description data is not used for controlling the equipment, so that subsequent steps are not needed, and the processing burden of the first equipment is reduced.
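The gating step above — only gestures found in the database of specific motion posture description data trigger the subsequent steps — can be sketched as follows; the gesture names are illustrative assumptions.

```python
# Hypothetical database of motion posture description data reserved for
# controlling the first device; anything else is ignored, which spares the
# first device the cost of the subsequent steps.
GESTURE_DATABASE = {"arm_retract", "arm_swing_out", "arm_swing_left"}

def should_control_first_device(first_motion_gesture):
    """Return True only if the gesture matches an entry in the database."""
    return first_motion_gesture in GESTURE_DATABASE

print(should_control_first_device("arm_retract"))
print(should_control_first_device("device_shake"))
```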
In step S103, the second motion gesture description data may be directly extracted from the first control data, so as to realize control of the controlled unit of the first device.
In step S104, the corresponding first control instruction may be obtained by searching for the corresponding relationship between the motion attitude and the control instruction through the second motion attitude description data, and in a specific implementation process, the corresponding relationship between the motion attitude and the control instruction may be established in the following manner: entering a machine learning state; detecting and obtaining motion gesture description data generated by a user of the second device in the machine learning state; detecting and obtaining a control instruction set based on the motion attitude description data; based on the detected motion attitude description data and the control instruction, establishing a corresponding relation between the motion attitude and the control instruction, and setting a first working mode of the first device corresponding to the corresponding relation between the motion attitude and the control instruction.
For example, a learning button may be disposed on the first device, and after an operation of clicking the learning button is detected, the first device may enter a machine learning state, and in the machine learning state, a control instruction corresponding to each motion gesture may be obtained in a learning manner, so as to establish a correspondence between the motion gestures and the control instruction. For example: providing a prompt interface for prompting a user to control the second equipment to generate a corresponding motion gesture, and after the user controls the second equipment to generate the corresponding motion gesture, detecting by the second equipment to obtain description data of the motion gesture and sending the description data to the first equipment; then, a prompt interface of the first device prompts a user to set a corresponding control instruction, for example: all control instructions are output through the prompt interface, and a user selects a corresponding control instruction from the control instructions; after the user finishes the selection, the corresponding relation between the motion posture description data set by the user and the control instruction can be established; then, the user is prompted to set the working mode of the first device corresponding to the corresponding relationship on the prompt interface, for example: all the working modes are provided for a user, and the working modes are set by the user based on selection operation, so that the corresponding relation between the motion posture and the control instruction in each working mode is finally established.
In the above scheme, the control instruction corresponding to each piece of motion posture description data is determined through user learning, so that each user can control the first device using the motion posture description data most familiar to that user, thereby improving the efficiency of controlling the first device.
Of course, in a specific implementation process, the corresponding relationship between the motion posture description data and the control command may also be set by default by the system, and the embodiment of the present invention is not limited.
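The learning flow described above — record a gesture, pair it with a user-selected instruction, and bind the pair to a working mode — can be sketched minimally as below; the class and all string values are hypothetical illustrations, not the patent's implementation.

```python
# Illustrative sketch of the machine-learning state: each recorded gesture
# is paired with a user-selected control instruction under a working mode.
class GestureLearner:
    def __init__(self):
        # (working_mode, gesture_description) -> control instruction
        self.mapping = {}

    def learn(self, mode, gesture_data, instruction):
        """Establish one correspondence entry, as in the prompt-interface flow."""
        self.mapping[(mode, gesture_data)] = instruction

    def instruction_for(self, mode, gesture_data):
        """Look up the learned instruction; None if never learned."""
        return self.mapping.get((mode, gesture_data))

learner = GestureLearner()
learner.learn("remote_control", "arm_retract", "move_forward")
print(learner.instruction_for("remote_control", "arm_retract"))
```

A system-default mapping, as the text notes, would simply pre-populate `mapping` instead of filling it through user prompts.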
In step S104, the at least one controlled unit may be controlled in various manners, two of which are listed below, and of course, the implementation process is not limited to the following two cases.
First, the first motion gesture description data comprises: first angle data of the second device with respect to a first preset plane and second angle data with respect to a second preset plane, the at least one controlled unit being a motion chassis of the first device, for example, the first preset plane being: horizontal plane, the first angle data of the second device with respect to the first preset plane are, for example: an angle value of the back of the second device relative to a horizontal plane; the second predetermined plane is, for example: in a plane perpendicular to the housing of the second device when the second device is not moving, the second angle data of the second device with respect to the second predetermined plane is, for example, an angle value of a frame (e.g., a right frame) of the second device with respect to the second predetermined plane, and of course, the first angle data and the second angle data may also be other angle data.
The responding to the first control instruction to control at least one controlled unit of the first device comprises one or more of the following conditions:
(1) if the first motion attitude description data shows that the first angle data is increased from a first angle value to a second angle value, and the variation of the second angle data is smaller than a first preset angle value, controlling the motion chassis to drive the first equipment to move forward towards the second equipment through the first control instruction;
In this case, the first preset angle value is, for example: 10°, 20°, etc.; the first angle value is, for example: 30°, 40°, etc.; and the second angle value is, for example: 100°, 120°, etc. This case generally indicates that the user of the second device has made an outside-in arm-retracting motion, so the first device is likewise controlled to move from outside in, i.e., the motion chassis drives the first device toward the second device, as shown in fig. 2.
(2) If the first motion attitude description data shows that the first angle data is reduced from a first angle value to a third angle value, and the variation of the second angle data is smaller than a first preset angle value, controlling the motion chassis to drive the first equipment to be far away from the second equipment through the first control instruction;
In this case, the first angle value is, for example: 30°, 40°, etc., and the third angle value is, for example: 10°, 20°, etc. This case generally indicates that the user of the second device has made an inside-out arm-swinging motion, so the first device can be controlled to move from inside out, i.e., the motion chassis drives the first device away from the second device, as shown in fig. 3.
(3) If the first motion attitude description data shows that the variation of the first angle data is smaller than a second preset angle value, the second angle data is increased from a fourth angle value to a fifth angle value, and the motion chassis is controlled by the first control instruction to drive the first equipment to rotate to the left;
in this case, the second preset angle value is, for example: 5 °, 10 °, etc., the fourth angle value being, for example: 0 °, 10 °, 20 °, etc., and the fifth angle value is typically an acute angle, such as: 60 deg., 70 deg., etc., if the first angle data does not vary much and the second angle data varies to an acute angle, indicating that the user of the second device has this operation of swinging the arm from right to left, in which case the first device may be controlled to turn left by moving the chassis, as shown in fig. 4.
(4) If the first motion attitude description data shows that the variation of the first angle data is smaller than a second preset angle value, the second angle data is increased to a sixth angle value from a fourth angle value, and the motion chassis is controlled by the first control instruction to drive the first equipment to rotate rightwards.
In a specific implementation, the sixth angle value is generally an obtuse angle, for example: 120 deg., 150 deg., etc., if the first angle data is not changed in a large range and the second angle data is changed to an obtuse angle, it indicates that the user of the second device has this operation of swinging the arm from left to right, in which case the first device can be controlled to turn right by moving the chassis, as shown in fig. 5.
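For illustration only, the four chassis-control conditions above can be sketched as one decision function. The function name, variable names, and default thresholds are assumptions; the thresholds and the acute/obtuse split reuse the example values given in the text.

```python
# Hypothetical sketch of the four motion-chassis conditions (1)-(4).
def chassis_command(a1_start, a1_end, a2_start, a2_end,
                    first_preset=10.0, second_preset=5.0):
    d1 = a1_end - a1_start       # change in first angle data
    d2 = abs(a2_end - a2_start)  # variation of second angle data
    if d2 < first_preset and d1 > 0:
        return "move_toward_second_device"     # case (1): arm retracted
    if d2 < first_preset and d1 < 0:
        return "move_away_from_second_device"  # case (2): arm swung out
    if abs(d1) < second_preset and a2_end > a2_start:
        if a2_end < 90:
            return "turn_left"                 # case (3): acute final angle
        return "turn_right"                    # case (4): obtuse final angle
    return "no_op"

print(chassis_command(30, 100, 0, 0))  # first angle 30 -> 100, second stable
```

Note the branch order: the text does not specify precedence when both variations are small, so this sketch simply checks the forward/backward cases first.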
Second, the first motion gesture description data comprises: third angle data of the second device relative to a third preset plane and fourth angle data relative to a fourth preset plane, and the at least one controlled unit is the pan-tilt head of the first device. The third preset plane is, for example, the horizontal plane, and the third angle data of the second device relative to the third preset plane is, for example, the angle between the back surface of the second device and the horizontal plane; the fourth preset plane is, for example, a plane perpendicular to the back of the second device when the second device is not moving, and the fourth angle data of the second device relative to the fourth preset plane is, for example, the angle between the back of the second device and the fourth preset plane, and so on.
The responding to the first control instruction to control at least one controlled unit of the first device comprises one or more of the following conditions:
(1) If the first motion gesture description data shows that the third angle data is increased from a seventh angle value to an eighth angle value, and the variation of the fourth angle data is smaller than a third preset angle value, the pitch angle of the pan-tilt head is adjusted through the first control instruction.
For example, the seventh angle value is, for example: 30°, 40°, etc.; the eighth angle value is, for example: 90°, 100°, etc.; and the third preset angle value is, for example: 0°, 10°, etc. If the third angle data increases while the fourth angle data does not change, it indicates that the user of the second device may have made an outside-in arm-retracting motion, in which case the pitch angle of the pan-tilt head may be adjusted, for example: decreasing the pitch angle, increasing the pitch angle, and so on.
(2) If the first motion attitude description data shows that the variation of the third angle data is smaller than a fourth preset angle value, the fourth angle data is increased from a ninth angle value to a tenth angle value, and the yaw angle of the holder is adjusted through the first control instruction;
In a specific implementation process, the fourth preset angle value is, for example: 10°, 20°, etc.; the ninth angle value is, for example: 85°, 90°, etc.; and the tenth angle value is, for example: 120°, 150°, etc. This case indicates that the user of the second device has made a motion of rotating the arm to the left, in which case the yaw angle of the pan-tilt head may be adjusted.
(3) And if the first motion attitude description data shows that the variation of the third angle data is smaller than a fourth preset angle value, the fourth angle data is reduced from a ninth angle value to an eleventh angle value, and the roll angle of the holder is adjusted through the first control command.
In a specific implementation process, the fourth preset angle value is, for example: 10°, 20°, etc.; the ninth angle value is, for example: 85°, 90°, etc.; and the eleventh angle value is, for example: 20°, 50°, etc. This case indicates that the user of the second device has made a motion of rotating the arm to the right, in which case the roll angle of the pan-tilt head may be adjusted.
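The three pan-tilt conditions above can likewise be sketched as a single decision function; names and default thresholds are illustrative assumptions drawn from the example values in the text.

```python
# Hypothetical sketch of the pan-tilt conditions (1)-(3): pitch when the
# third angle grows, yaw when the fourth angle grows, roll when it shrinks.
def pantilt_command(a3_start, a3_end, a4_start, a4_end,
                    third_preset=10.0, fourth_preset=10.0):
    d3 = a3_end - a3_start  # change in third angle data
    d4 = a4_end - a4_start  # change in fourth angle data
    if abs(d4) < third_preset and d3 > 0:
        return "adjust_pitch"  # case (1): arm retracted outside-in
    if abs(d3) < fourth_preset and d4 > 0:
        return "adjust_yaw"    # case (2): arm rotated to the left
    if abs(d3) < fourth_preset and d4 < 0:
        return "adjust_roll"   # case (3): arm rotated to the right
    return "no_op"

print(pantilt_command(30, 90, 0, 0))  # third angle 30 -> 90, fourth stable
```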
In a second aspect, based on the same inventive concept, an embodiment of the present invention provides an intelligent control method applied to a second device, please refer to fig. 6, including:
step S601: detecting and obtaining first motion gesture description data of the second device;
step S602: obtaining first control data based on the first motion attitude description data and sending the first control data to a first device in a first working mode, so that the first device determines, based on the first control data, at least one controlled unit in the first working mode and a first control instruction for controlling the at least one controlled unit, where the first control instruction is used by the first device to control the at least one controlled unit.
Optionally, before the obtaining first control data based on the first motion gesture description data, the method further includes:
determining a database of specific motion gesture description data;
judging whether the first motion posture description data is located in the database;
wherein the step of obtaining first control data based on the first motion gesture description data is performed if the first motion gesture description data is located in the database;
if the first motion gesture description data is not located in the database, the step of obtaining first control data based on the first motion gesture description data is not performed.
Optionally, the method further includes:
entering a machine learning state;
detecting and obtaining motion gesture description data generated by a user of the second device in the machine learning state;
detecting and obtaining a control instruction set based on the motion attitude description data;
based on the detected motion attitude description data and the control instruction, establishing a corresponding relation between the motion attitude and the control instruction, and setting a first working mode of the first device corresponding to the corresponding relation between the motion attitude and the control instruction.
Since the intelligent control method introduced in the second aspect of the present invention corresponds to the intelligent control method introduced in the first aspect of the present invention, based on the intelligent control method introduced in the first aspect of the present invention, persons skilled in the art can understand the intelligent control method introduced in the second aspect of the present invention, and therefore, the details are not repeated herein.
In a third aspect, based on the same inventive concept, an embodiment of the present invention provides a first apparatus, please refer to fig. 7, including:
a receiving module 70, configured to receive, by the first device in the first operating mode, first control data sent by a second device, where the first control data is determined by the second device based on first motion posture description data of the second device;
a first determining module 71, configured to determine at least one controlled unit in the first operating mode based on the first control data;
an obtaining module 72, configured to obtain second motion gesture description data for the at least one controlled unit, and generate a first control instruction corresponding to the controlled unit based on the second motion gesture description data, where the second motion gesture description data is generated based on the first control data;
a response module 73, configured to respond to the first control instruction to control at least one controlled unit of the first device.
Optionally, if the first control data is the first motion gesture description data, the first device further includes:
a second determination module for determining a database of specific motion gesture description data;
the first judgment module is used for judging whether the first motion posture description data is positioned in the database;
wherein the step of determining at least one controlled unit in the first operating mode based on the first control data is performed by the first determining module if the first motion profile description data is located in the database;
if the first motion gesture description data is not located in the database, the step of determining at least one controlled unit in the first operating mode based on the first control data is not performed by the first determining module.
Optionally, the first determining module 71 is configured to:
and searching and obtaining the at least one controlled unit from the preset corresponding relation between the control data and the controlled unit based on the first control data.
Optionally, the first motion gesture description data includes: first angle data of the second device with respect to a first preset plane and second angle data with respect to a second preset plane, the at least one controlled unit being a motion chassis of the first device, the response module 73 comprising:
the first control unit is used for controlling the motion chassis to drive the first equipment to move towards the direction of the second equipment through the first control instruction if the first motion posture description data indicate that the first angle data is increased from a first angle value to a second angle value, and the variation of the second angle data is smaller than a first preset angle value; or,
the second control unit is used for controlling the motion chassis to drive the first equipment to be far away from the second equipment through the first control instruction if the first motion attitude description data shows that the first angle data is reduced from a first angle value to a third angle value and the variation of the second angle data is smaller than a first preset angle value; or,
a third control unit, configured to, if the first motion posture description data indicates that a variation of the first angle data is smaller than a second preset angle value, increase the second angle data from a fourth angle value to a fifth angle value, and control the motion chassis to drive the first device to rotate left through the first control instruction; or,
and the fourth control unit is used for controlling the motion chassis to drive the first equipment to rotate rightwards if the first motion attitude description data shows that the variation of the first angle data is smaller than a second preset angle value and the second angle data is increased to a sixth angle value from the fourth angle value.
Optionally, the first motion gesture description data includes: third angle data of the second device with respect to a third preset plane and fourth angle data with respect to a fourth preset plane, the at least one controlled unit being a pan-tilt of the first device, the response module 73 including:
the fifth control unit is used for adjusting the pitch angle of the holder through the first control instruction if the first motion attitude description data indicates that the third angle data is increased from a seventh angle value to an eighth angle value and the variation of the fourth angle data is smaller than a third preset angle value; or,
a sixth control unit, configured to, if the first motion attitude description data indicates that a variation of the third angle data is smaller than a fourth preset angle value, increase the fourth angle data from a ninth angle value to a tenth angle value, and adjust the yaw angle of the pan/tilt head through the first control instruction; or,
and the seventh control unit is used for adjusting the rolling angle of the holder through the first control instruction if the first motion attitude description data shows that the variation of the third angle data is smaller than a fourth preset angle value and the fourth angle data is reduced from a ninth angle value to an eleventh angle value.
Optionally, the first device further includes:
the first learning module is used for entering a machine learning state;
the first detection module is used for detecting and obtaining motion gesture description data generated by a user of the second equipment in the machine learning state;
the second detection module is used for detecting and obtaining a control instruction set based on the motion posture description data;
the first establishing module is used for establishing a corresponding relation between the motion gesture and the control instruction based on the motion gesture description data and the control instruction obtained through detection, and setting a first working mode of the first device corresponding to the corresponding relation between the motion gesture and the control instruction.
Since the first device introduced in the third aspect of the embodiment of the present invention is a device used for implementing the intelligent control method in the first aspect of the embodiment of the present invention, based on the intelligent control method introduced in the first aspect of the embodiment of the present invention, a person skilled in the art can understand a specific structure and a modification of the first device, and thus details are not described here, and all devices used for implementing the intelligent control method in the first aspect of the embodiment of the present invention belong to the scope of the embodiment of the present invention to be protected.
In a fourth aspect, based on the same inventive concept, an embodiment of the present invention provides a second apparatus, please refer to fig. 8, including:
a third detecting module 80, configured to detect and obtain first motion gesture description data of the second device;
a sending module 81, configured to obtain first control data based on the first motion posture description data and send the first control data to a first device in a first working mode, so that the first device determines, based on the first control data, at least one controlled unit in the first working mode and a first control instruction for controlling the at least one controlled unit, where the first control instruction is used by the first device to control the at least one controlled unit.
Optionally, the second device further includes:
a third determination module for determining a database of specific motion gesture description data;
the second judgment module is used for judging whether the first motion posture description data is positioned in the database;
wherein, if the first motion gesture description data is located in the database, the step of obtaining first control data based on the first motion gesture description data is executed by the sending module 81;
if the first motion gesture description data is not located in the database, the step of obtaining first control data based on the first motion gesture description data is not performed by the sending module 81.
Optionally, the second device further includes:
the second learning module is used for entering a machine learning state;
the fourth detection module is used for detecting and obtaining motion gesture description data generated by a user of the second equipment in the machine learning state;
the fifth detection module is used for detecting and obtaining a control instruction set based on the motion posture description data;
and the second establishing module is used for establishing a correspondence relationship between the motion attitude and the control instruction based on the detected motion attitude description data and control instruction, and setting a first working mode of the first device corresponding to the correspondence relationship between the motion attitude and the control instruction.
Since the second device introduced in the fourth aspect of the embodiment of the present invention is a device used for implementing the intelligent control method of the second aspect of the embodiment of the present invention, based on the intelligent control method introduced in the second aspect of the embodiment of the present invention, a person skilled in the art can understand the specific structure and the modification of the second device, and therefore details are not described here, and all devices used for implementing the intelligent control method introduced in the second aspect of the embodiment of the present invention belong to the scope to be protected by the embodiment of the present invention.
One or more embodiments of the invention have at least the following beneficial effects:
In the embodiment of the present invention, the second device may detect and obtain its first motion posture description data and send first control data obtained based on that data to the first device. After the first device in a first operation mode receives the first control data sent by the second device, it may determine at least one controlled unit in the first operation mode based on the first control data, obtain second motion posture description data for the at least one controlled unit, and generate a first control instruction corresponding to the controlled unit based on the second motion posture description data, where the second motion posture description data is generated based on the first control data; finally, it responds to the first control instruction to control the at least one controlled unit of the first device. Based on this scheme, the technical effect that the first device can be controlled through the second device is achieved. Compared with gesture control, the robot does not need a gesture detection function; compared with voice control, the scheme has no distance limit; compared with remote-controller control, no remote controller needs to be carried, and only the second device, as a device with a posture detection function, is needed, so that the technical effect of more convenient control of the first device is achieved.
It should be noted that the following (tracking) in the embodiment of the present invention may adopt a variety of different methods, including but not limited to a vision-based tracking method, a carrierless-communication-based tracking method, and the like. The carrierless-communication-based tracking method includes but is not limited to a tracking method based on Ultra Wideband (UWB) communication. UWB is a communication technology that differs greatly from conventional communication technologies: it does not use a carrier to transmit signals, but transmits data by sending and receiving extremely narrow pulses on the order of nanoseconds or sub-nanoseconds. It has characteristics such as strong anti-interference performance, no carrier, and low device transmission power, and can be used for precise positioning, with a distance precision of about 10 centimeters.
In the UWB-based following method, a UWB beacon is installed on the tracked target object and a UWB anchor node is installed on the first electronic device. Signals are transmitted to the UWB beacon through the UWB anchor node, and the return signals of the UWB beacon are received, so that the relative distance d and the relative angle θ between the UWB beacon and the UWB anchor node can be obtained, thereby describing the relative position relationship between the UWB beacon and the UWB anchor node. From this relative position relationship, and considering the preset position conversion relationships between the UWB beacon and the target object and between the UWB anchor node and the first electronic device, the relative position relationship between the first electronic device and the target object can be obtained and used to describe the relative position between them. Here, the relative distance d between the UWB beacon and the UWB anchor node may be measured by a Time of Flight (TOF) technique, which converts the time difference between transmission and reflection of a radio wave (or light, sound wave, or the like) into the relative distance d. The relative angle θ between the UWB beacon and the UWB anchor node may be measured using an Angle of Arrival (AOA) based technique, a positioning algorithm based on the signal's angle of arrival, which senses the direction of arrival of the transmitting node's signal through hardware and calculates the relative bearing or relative angle between the receiving node and the anchor node.
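The TOF and AOA quantities above combine into a relative position as sketched below; the speed of light is a physical constant, while the round-trip assumption and the Cartesian conversion are illustrative simplifications (real UWB ranging must also compensate for processing delays at the beacon).

```python
import math

# Sketch: turn a UWB round-trip time and an angle of arrival into a
# relative position in the anchor node's frame. Illustrative only.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s):
    # TOF ranging: d = c * dt / 2 for a round trip, ignoring beacon delay.
    return C * round_trip_s / 2.0

def relative_position(d, theta_rad):
    # Convert polar (d, theta) into Cartesian offsets (x, y).
    return d * math.cos(theta_rad), d * math.sin(theta_rad)

d = tof_distance(33.356e-9)  # a round trip of ~33 ns is roughly 5 m
print(round(d, 2), relative_position(d, math.pi / 4))
```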
When the UWB-based following method is employed, the UWB beacon and the second device of the embodiment of the present invention can be integrated into the same electronic device, which simplifies the device composition and device types and facilitates control.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (18)

1. An intelligent control method, applied to a first device, characterized by comprising the following steps:
the first device in a first working mode receives first control data sent by a second device, wherein the first control data is determined by the second device based on first motion gesture description data of the second device;
determining at least one controlled unit in the first operating mode based on the first control data;
obtaining second motion gesture description data for the at least one controlled unit, and generating a first control instruction corresponding to the controlled unit based on the second motion gesture description data, wherein the second motion gesture description data is generated based on the first control data;
responding to the first control instruction to control at least one controlled unit of the first device.
2. The method of claim 1, wherein if the first control data is the first motion gesture description data, after the first device in the first operating mode receives the first control data sent by the second device, the method further comprises:
determining a database of specific motion gesture description data;
judging whether the first motion gesture description data is located in the database;
wherein, if the first motion gesture description data is located in the database, the step of determining at least one controlled unit in the first operating mode based on the first control data is performed;
if the first motion gesture description data is not located in the database, the step of determining at least one controlled unit in the first operating mode based on the first control data is not performed.
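The gate of claim 2 can be sketched as a membership test against a gesture database; this is a hypothetical Python sketch in which the database entries, angle representation, and tolerance are all illustrative assumptions:

```python
# Database of specific gestures, keyed by a name; each entry holds a pair of
# reference angles (in degrees). Entirely illustrative.
KNOWN_GESTURES = {
    "tilt_forward": (15.0, 0.0),
    "tilt_back":    (-15.0, 0.0),
}

def in_database(gesture, tol=5.0):
    """Return True if the measured angle pair matches a database entry."""
    return any(
        abs(gesture[0] - a) <= tol and abs(gesture[1] - b) <= tol
        for a, b in KNOWN_GESTURES.values()
    )

def should_determine_controlled_unit(gesture):
    """Only gestures found in the database trigger the next claim-1 step."""
    return in_database(gesture)
```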
3. The method of claim 1, wherein said determining at least one controlled unit in the first operating mode based on the first control data comprises:
searching for and obtaining the at least one controlled unit, based on the first control data, from a preset correspondence between control data and controlled units.
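The preset correspondence of claim 3 amounts to a lookup table from control data to controlled units; a minimal sketch, in which the keys and unit names are assumptions rather than anything defined by the patent:

```python
# Preset correspondence between kinds of control data and controlled units.
CONTROL_TO_UNITS = {
    "chassis_cmd": ["motion_chassis"],
    "camera_cmd":  ["pan_tilt_head"],
    "follow_cmd":  ["motion_chassis", "pan_tilt_head"],
}

def controlled_units(control_data_kind):
    """Look up the controlled unit(s) for the received control data."""
    return CONTROL_TO_UNITS.get(control_data_kind, [])
```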
4. The method of claim 1, wherein the first motion gesture description data comprises: first angle data of the second device relative to a first preset plane and second angle data of the second device relative to a second preset plane, the at least one controlled unit is a motion chassis of the first device, and the responding to the first control instruction to control the at least one controlled unit of the first device comprises:
if the first motion gesture description data shows that the first angle data is increased from a first angle value to a second angle value and the variation of the second angle data is smaller than a first preset angle value, controlling the motion chassis through the first control instruction to drive the first device to move towards the second device; or,
if the first motion gesture description data shows that the first angle data is decreased from the first angle value to a third angle value and the variation of the second angle data is smaller than the first preset angle value, controlling the motion chassis through the first control instruction to drive the first device to move away from the second device; or,
if the first motion gesture description data shows that the variation of the first angle data is smaller than a second preset angle value and the second angle data is increased from a fourth angle value to a fifth angle value, controlling the motion chassis through the first control instruction to drive the first device to rotate to the left; or,
if the first motion gesture description data shows that the variation of the first angle data is smaller than the second preset angle value and the second angle data is increased from the fourth angle value to a sixth angle value, controlling the motion chassis through the first control instruction to drive the first device to rotate to the right.
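The four branches of claim 4 can be sketched as a single decision function; the thresholds are illustrative, and distinguishing left from right by the sign of the second angle's change is an assumption (the claim distinguishes them by the fifth versus sixth target value without giving a sign):

```python
FIRST_PRESET = 5.0   # tolerated drift of the second angle, degrees (assumed)
SECOND_PRESET = 5.0  # tolerated drift of the first angle, degrees (assumed)

def chassis_command(a1_old, a1_new, a2_old, a2_new):
    """Map changes in the two angles to a motion-chassis command."""
    d1, d2 = a1_new - a1_old, a2_new - a2_old
    if abs(d2) < FIRST_PRESET and d1 > 0:
        return "forward"        # first angle increased -> approach the user
    if abs(d2) < FIRST_PRESET and d1 < 0:
        return "backward"       # first angle decreased -> move away
    if abs(d1) < SECOND_PRESET and d2 > 0:
        return "rotate_left"    # second angle increased
    if abs(d1) < SECOND_PRESET and d2 < 0:
        return "rotate_right"   # second angle decreased (assumed convention)
    return "hold"               # neither pattern matched
```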
5. The method of claim 1, wherein the first motion gesture description data comprises: third angle data of the second device relative to a third preset plane and fourth angle data of the second device relative to a fourth preset plane, the at least one controlled unit is a pan-tilt head of the first device, and the responding to the first control instruction to control the at least one controlled unit of the first device comprises:
if the first motion gesture description data shows that the third angle data is increased from a seventh angle value to an eighth angle value and the variation of the fourth angle data is smaller than a third preset angle value, adjusting the pitch angle of the pan-tilt head through the first control instruction; or,
if the first motion gesture description data shows that the variation of the third angle data is smaller than a fourth preset angle value and the fourth angle data is increased from a ninth angle value to a tenth angle value, adjusting the yaw angle of the pan-tilt head through the first control instruction; or,
if the first motion gesture description data shows that the variation of the third angle data is smaller than the fourth preset angle value and the fourth angle data is decreased from the ninth angle value to an eleventh angle value, adjusting the roll angle of the pan-tilt head through the first control instruction.
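A parallel sketch for claim 5: changes in the third and fourth angles select which pan-tilt axis to adjust. As before, the thresholds and sign conventions are illustrative assumptions:

```python
THIRD_PRESET = 5.0   # tolerated drift of the fourth angle, degrees (assumed)
FOURTH_PRESET = 5.0  # tolerated drift of the third angle, degrees (assumed)

def pan_tilt_axis(a3_old, a3_new, a4_old, a4_new):
    """Select the pan-tilt axis to adjust from the two angle changes."""
    d3, d4 = a3_new - a3_old, a4_new - a4_old
    if abs(d4) < THIRD_PRESET and d3 > 0:
        return "pitch"   # third angle increased -> adjust pitch
    if abs(d3) < FOURTH_PRESET and d4 > 0:
        return "yaw"     # fourth angle increased -> adjust yaw
    if abs(d3) < FOURTH_PRESET and d4 < 0:
        return "roll"    # fourth angle decreased -> adjust roll
    return "none"
```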
6. The method of any of claims 1-5, wherein the method further comprises:
entering a machine learning state;
detecting and obtaining motion gesture description data generated by a user of the second device in the machine learning state;
detecting and obtaining a control instruction set based on the motion gesture description data;
establishing, based on the detected motion gesture description data and the control instruction, a correspondence between motion gestures and control instructions, and setting a first working mode of the first device corresponding to the correspondence.
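The learning mode of claim 6 can be sketched as recording (gesture, instruction) pairs while in the learning state and emitting the resulting correspondence; all names here are illustrative, not from the patent:

```python
class GestureLearner:
    """Records gesture/instruction pairs only while in the learning state."""

    def __init__(self):
        self.learning = False
        self.correspondence = {}   # gesture -> control instruction

    def enter_learning_state(self):
        self.learning = True

    def record(self, gesture, instruction):
        # Pairs observed outside the learning state are ignored.
        if self.learning:
            self.correspondence[gesture] = instruction

    def finish(self):
        """Leave the learning state; the mapping defines the working mode."""
        self.learning = False
        return dict(self.correspondence)
```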
7. An intelligent control method, applied to a second device, characterized by comprising the following steps:
detecting and obtaining first motion gesture description data of the second device;
obtaining first control data based on the first motion gesture description data, and sending the first control data to a first device in a first working mode, so that the first device determines, based on the first control data, at least one controlled unit in the first working mode and a first control instruction for controlling the at least one controlled unit, wherein the first control instruction is used by the first device to control the at least one controlled unit.
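The second-device side of claim 7 reduces to: read the device's own motion attitude, package it as control data, and transmit it. A minimal sketch with the sensor read and the transport stubbed out; all names and values are assumptions:

```python
import json

def read_attitude():
    """Stand-in for an IMU read: (pitch, yaw) in degrees."""
    return (12.5, -3.0)

def build_control_data(attitude):
    """Package the measured attitude as first control data."""
    pitch, yaw = attitude
    return json.dumps({"type": "gesture", "pitch": pitch, "yaw": yaw})

def send_to_first_device(packet, transport):
    """transport is any callable that delivers the packet (e.g. UWB, BLE)."""
    transport(packet)

sent = []
send_to_first_device(build_control_data(read_attitude()), sent.append)
```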
8. The method of claim 7, wherein prior to said obtaining first control data based on said first motion gesture description data, said method further comprises:
determining a database of specific motion gesture description data;
judging whether the first motion gesture description data is located in the database;
wherein the step of obtaining first control data based on the first motion gesture description data is performed if the first motion gesture description data is located in the database;
if the first motion gesture description data is not located in the database, the step of obtaining first control data based on the first motion gesture description data is not performed.
9. The method of claim 7, wherein the method further comprises:
entering a machine learning state;
detecting and obtaining motion gesture description data generated by a user of the second device in the machine learning state;
detecting and obtaining a control instruction set based on the motion gesture description data;
establishing, based on the detected motion gesture description data and the control instruction, a correspondence between motion gestures and control instructions, and setting a first working mode of the first device corresponding to the correspondence.
10. A first device, comprising:
a receiving module, used for the first device to receive, in a first working mode, first control data sent by a second device, wherein the first control data is determined by the second device based on first motion gesture description data of the second device;
a first determining module for determining at least one controlled unit in the first operating mode based on the first control data;
an obtaining module, configured to obtain second motion gesture description data for the at least one controlled unit, and generate a first control instruction corresponding to the controlled unit based on the second motion gesture description data, where the second motion gesture description data is generated based on the first control data;
a response module, used for responding to the first control instruction to control at least one controlled unit of the first device.
11. The first device of claim 10, wherein if the first control data is the first motion gesture description data, the first device further comprises:
a second determination module for determining a database of specific motion gesture description data;
a first judging module, used for judging whether the first motion gesture description data is located in the database;
wherein, if the first motion gesture description data is located in the database, the step of determining at least one controlled unit in the first operating mode based on the first control data is performed by the first determining module;
if the first motion gesture description data is not located in the database, the step of determining at least one controlled unit in the first operating mode based on the first control data is not performed by the first determining module.
12. The first device of claim 10, wherein the first determination module is to:
search for and obtain the at least one controlled unit, based on the first control data, from a preset correspondence between control data and controlled units.
13. The first device of claim 10, wherein the first motion gesture description data comprises: first angle data of the second device relative to a first preset plane and second angle data of the second device relative to a second preset plane, the at least one controlled unit is a motion chassis of the first device, and the response module comprises:
a first control unit, used for controlling, through the first control instruction, the motion chassis to drive the first device to move towards the second device if the first motion gesture description data shows that the first angle data is increased from a first angle value to a second angle value and the variation of the second angle data is smaller than a first preset angle value; or,
a second control unit, used for controlling, through the first control instruction, the motion chassis to drive the first device to move away from the second device if the first motion gesture description data shows that the first angle data is decreased from the first angle value to a third angle value and the variation of the second angle data is smaller than the first preset angle value; or,
a third control unit, used for controlling, through the first control instruction, the motion chassis to drive the first device to rotate to the left if the first motion gesture description data shows that the variation of the first angle data is smaller than a second preset angle value and the second angle data is increased from a fourth angle value to a fifth angle value; or,
a fourth control unit, used for controlling, through the first control instruction, the motion chassis to drive the first device to rotate to the right if the first motion gesture description data shows that the variation of the first angle data is smaller than the second preset angle value and the second angle data is increased from the fourth angle value to a sixth angle value.
14. The first device of claim 10, wherein the first motion gesture description data comprises: third angle data of the second device relative to a third preset plane and fourth angle data of the second device relative to a fourth preset plane, the at least one controlled unit is a pan-tilt head of the first device, and the response module comprises:
a fifth control unit, used for adjusting the pitch angle of the pan-tilt head through the first control instruction if the first motion gesture description data shows that the third angle data is increased from a seventh angle value to an eighth angle value and the variation of the fourth angle data is smaller than a third preset angle value; or,
a sixth control unit, used for adjusting the yaw angle of the pan-tilt head through the first control instruction if the first motion gesture description data shows that the variation of the third angle data is smaller than a fourth preset angle value and the fourth angle data is increased from a ninth angle value to a tenth angle value; or,
a seventh control unit, used for adjusting the roll angle of the pan-tilt head through the first control instruction if the first motion gesture description data shows that the variation of the third angle data is smaller than the fourth preset angle value and the fourth angle data is decreased from the ninth angle value to an eleventh angle value.
15. The first device of any of claims 10-14, wherein the first device further comprises:
a first learning module, used for entering a machine learning state;
a first detection module, used for detecting and obtaining motion gesture description data generated by a user of the second device in the machine learning state;
a second detection module, used for detecting and obtaining a control instruction set based on the motion gesture description data;
a first establishing module, used for establishing, based on the detected motion gesture description data and the control instruction, a correspondence between motion gestures and control instructions, and setting a first working mode of the first device corresponding to the correspondence.
16. A second apparatus, comprising:
a third detection module, used for detecting and obtaining first motion gesture description data of the second device;
a sending module, used for obtaining first control data based on the first motion gesture description data and sending the first control data to a first device in a first working mode, so that the first device determines, based on the first control data, at least one controlled unit in the first working mode and a first control instruction for controlling the at least one controlled unit, wherein the first control instruction is used by the first device to control the at least one controlled unit.
17. The second device of claim 16, wherein the second device further comprises:
a third determining module, used for determining a database of specific motion gesture description data;
a second judging module, used for judging whether the first motion gesture description data is located in the database;
wherein, if the first motion gesture description data is located in the database, the step of obtaining first control data based on the first motion gesture description data is performed by the sending module;
if the first motion gesture description data is not located in the database, the step of obtaining first control data based on the first motion gesture description data is not performed by the sending module.
18. The second device of claim 16, wherein the second device further comprises:
a second learning module, used for entering a machine learning state;
a fourth detection module, used for detecting and obtaining motion gesture description data generated by a user of the second device in the machine learning state;
a fifth detection module, used for detecting and obtaining a control instruction set based on the motion gesture description data;
a second establishing module, used for establishing, based on the detected motion gesture description data and the control instruction, a correspondence between motion gestures and control instructions, and setting a first working mode of the first device corresponding to the correspondence.
CN201611048815.2A 2016-11-22 2016-11-22 A kind of intelligent control method and equipment Pending CN108089694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611048815.2A CN108089694A (en) 2016-11-22 2016-11-22 A kind of intelligent control method and equipment


Publications (1)

Publication Number Publication Date
CN108089694A true CN108089694A (en) 2018-05-29

Family

ID=62170250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611048815.2A Pending CN108089694A (en) 2016-11-22 2016-11-22 A kind of intelligent control method and equipment

Country Status (1)

Country Link
CN (1) CN108089694A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109625137A (en) * 2019-01-11 2019-04-16 海南大学 One kind two takes turns balance car
WO2020019319A1 (en) * 2018-07-27 2020-01-30 深圳市大疆创新科技有限公司 Control method and control apparatus for gimbal, gimbal, and mobile car
CN114153184A (en) * 2020-09-07 2022-03-08 Oppo广东移动通信有限公司 Intelligent household management method, device, equipment, system and storage medium
CN114495469A (en) * 2020-10-25 2022-05-13 纳恩博(北京)科技有限公司 Vehicle control method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005383A (en) * 2015-07-10 2015-10-28 昆山美莱来工业设备有限公司 Wearable arm band that manipulates mobile robot by using hand gesture
CN205075054U (en) * 2015-08-12 2016-03-09 哈尔滨理工大学 A robot for high -risk operation
CN205572446U (en) * 2016-04-26 2016-09-14 深圳市寒武纪智能科技有限公司 Robot of shooing
CN106094844A (en) * 2016-05-27 2016-11-09 北京小米移动软件有限公司 Balance car control method, device and balance car
CN106131413A (en) * 2016-07-19 2016-11-16 纳恩博(北京)科技有限公司 The control method of a kind of capture apparatus and capture apparatus


Similar Documents

Publication Publication Date Title
WO2019128070A1 (en) Target tracking method and apparatus, mobile device and storage medium
US20200346753A1 (en) Uav control method, device and uav
WO2020113452A1 (en) Monitoring method and device for moving target, monitoring system, and mobile robot
CN108089694A (en) A kind of intelligent control method and equipment
WO2017197729A1 (en) Tracking system and tracking method
WO2020199589A1 (en) Recharging control method for desktop robot
CN110692027A (en) System and method for providing easy-to-use release and automatic positioning of drone applications
CN105223967B (en) A kind of camera shooting control method, device and tripod head equipment
CN109062201B (en) ROS-based intelligent navigation microsystem and control method thereof
US11472038B2 (en) Multi-device robot control
CN105058389A (en) Robot system, robot control method, and robot
US20210205976A1 (en) Apparatus and method of an interactive power tool
CN109120883B (en) Far and near scene-based video monitoring method and device and computer-readable storage medium
CN105352508A (en) Method and device of robot positioning and navigation
US11989355B2 (en) Interacting with a smart device using a pointing controller
CN110162063A (en) A kind of paths planning method and device for robot automatic charging
CN108776491A (en) Unmanned plane multiple target monitoring system and monitoring method based on dynamic image identification
WO2019019819A1 (en) Mobile electronic device and method for processing tasks in task region
WO2019047415A1 (en) Trajectory tracking method and apparatus, storage medium and processor
CN107255339A (en) Air conditioning control method, terminal device and storage medium based on terminal device
JP6735446B2 (en) Camera system and its control method, electronic device and its control program
WO2018228254A1 (en) Mobile electronic device and method for use in mobile electronic device
CN108808243A (en) Adjust the method, apparatus and unmanned machine system of antenna
CN103577789A (en) Detection method and device
US11724397B2 (en) Robot and method for controlling the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180529