CN108089695B - Method and device for controlling movable equipment
- Publication number
- CN108089695B CN108089695B CN201611051269.8A CN201611051269A CN108089695B CN 108089695 B CN108089695 B CN 108089695B CN 201611051269 A CN201611051269 A CN 201611051269A CN 108089695 B CN108089695 B CN 108089695B
- Authority
- CN
- China
- Prior art keywords
- target
- human body
- motion
- motion vector
- description information
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
Embodiments of the invention provide a method and a device for controlling a movable device. The method includes: acquiring an image through an image acquisition unit and recognizing the acquired image to obtain a recognition result, the recognition result indicating that a human body exists in the acquired image and that the human body posture is a first human body posture; superposing first motion description information corresponding to the first human body posture and second motion description information currently executed by the movable device to obtain third motion description information; and calculating a target movement parameter corresponding to the third motion description information and controlling the movable device to move according to the target movement parameter. The invention achieves the technical effect of controlling a movable device through human body postures, so that no additional controller is needed.
Description
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a method and an apparatus for controlling a mobile device.
Background
In the related art, the movement of a movable device is commonly controlled by receiving a control signal transmitted from a controller held in the user's hand, such as a remote controller or a mobile phone. For example, if the user presses the left key on the remote controller, the movable device moves to the left in response to the left-movement command derived from the received signal.
However, the above control method requires an additional controller.
Disclosure of Invention
Embodiments of the invention provide a method and a device for controlling a movable device, which achieve the technical effect of controlling the movable device through human body postures without an additional controller.
In a first aspect, the present invention provides a method of controlling a mobile device, comprising:
acquiring an image through an image acquisition unit, and recognizing the acquired image to obtain a recognition result; the recognition result indicates that a human body exists in the acquired image, and the human body posture is a first human body posture;
superposing first motion description information corresponding to the first human body posture and second motion description information currently executed by the movable equipment to obtain third motion description information;
and calculating a target movement parameter corresponding to the third motion description information, and controlling the movable equipment to move according to the target movement parameter.
Optionally, recognizing the acquired image to obtain a recognition result includes:
carrying out human body detection in the acquired image to obtain a detection result;
when the detection result shows that a human body exists in the acquired image, inputting a target tracking area where the human body is located into a plurality of human body posture classifiers for matching, and obtaining a first matching score output by each human body posture classifier;
and determining the first human body posture corresponding to the highest first matching score as the human body posture.
Optionally, the second motion description information is motion description information of a target tracked by the mobile device, where the target is the human body, and the method further includes:
judging whether the mobile equipment loses a target or not;
when the movable device loses the target, searching for the target again.
Optionally, the determining whether the mobile device loses the target includes:
training a target model by using the target tracking area of a first frame of image in the acquired images;
matching the target tracking area of a second frame image behind the first frame image based on the target model to obtain a second matching score;
judging whether the second matching score reaches a threshold value; wherein the movable device is determined to have lost the target when the second matching score does not reach the threshold.
Optionally, after searching for the target again, the method further includes:
when the suspected target is found again, matching a suspected target tracking area where the suspected target is located based on the target model to obtain a third matching score;
determining that the suspected target is the target when the third match score reaches the threshold.
Optionally, the motion description information is a motion vector, and the obtaining of third motion description information by superimposing first motion description information corresponding to the first human body posture and second motion description information currently executed by the mobile device includes:
superposing the first motion vector corresponding to the first human body posture with the second motion vector currently executed by the movable equipment to obtain a third motion vector;
wherein the magnitude and direction of the first motion vector are determined from the first human body posture, and the magnitude of the second motion vector is determined from the current speed of the movable device and its direction from the current direction of motion.
In a second aspect, the present invention provides an apparatus for controlling a movable device, comprising:
the recognition module is used for acquiring images through the image acquisition unit and recognizing the acquired images to obtain a recognition result; the recognition result indicates that a human body exists in the acquired image, and the human body posture is a first human body posture;
the superposition module is used for superposing first motion description information corresponding to the first human body posture and second motion description information currently executed by the movable equipment to obtain third motion description information;
and the control module is used for calculating a target movement parameter corresponding to the third motion description information and controlling the movable equipment to move according to the target movement parameter.
Optionally, the recognition module is configured to perform human body detection in the acquired image to obtain a detection result; when the detection result shows that a human body exists in the acquired image, input the target tracking area where the human body is located into a plurality of human body posture classifiers for matching and obtain the first matching score output by each human body posture classifier; and determine the first human body posture corresponding to the highest first matching score as the human body posture.
Optionally, the second motion description information is motion description information of a target tracked by the mobile device, where the target is the human body, and the apparatus further includes:
the judging module is used for judging whether the movable equipment loses a target or not;
and the searching module is used for searching the target again when the movable equipment loses the target.
Optionally, the judging module is configured to train a target model using the target tracking area of a first frame image in the acquired images; match the target tracking area of a second frame image after the first frame image based on the target model to obtain a second matching score; and judge whether the second matching score reaches a threshold value, wherein the movable device is determined to have lost the target when the second matching score does not reach the threshold.
Optionally, the apparatus further comprises:
an obtaining module, configured to, when a suspected target is found during the renewed search for the target, match the suspected target tracking area where the suspected target is located based on the target model to obtain a third matching score;
a determination module, configured to determine that the suspected target is the target when the third matching score reaches the threshold.
Optionally, the motion description information is a motion vector, and the superposition module is configured to superpose the first motion vector corresponding to the first human body posture with the second motion vector currently executed by the movable device to obtain a third motion vector; wherein the magnitude and direction of the first motion vector are determined from the first human body posture, and the magnitude of the second motion vector is determined from the current speed of the movable device and its direction from the current direction of motion.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
According to the technical solution, an image is first acquired through an image acquisition unit and recognized to obtain a recognition result, the recognition result indicating that a human body exists in the acquired image and that the human body posture is a first human body posture. First motion description information corresponding to the first human body posture is then superposed with the second motion description information currently executed by the movable device to obtain third motion description information. Finally, a target movement parameter corresponding to the third motion description information is calculated, and the movable device is controlled to move according to the target movement parameter. In this way, the human body posture is recognized, the first motion description information corresponding to the first human body posture is superposed on the second motion description information, and the movable device is controlled to move according to the target movement parameter corresponding to the superposed third motion description information. The movable device is thus controlled according to the human body posture, and the user can control its movement by making different postures, without any additional controller.
Drawings
FIG. 1 is a flow chart of a method for controlling a mobile device in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the second motion vector and the target in a geodetic coordinate system according to an embodiment of the present invention;
FIGS. 3a-3e are schematic diagrams of motion vector superposition according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an apparatus for controlling a movable device according to an embodiment of the present invention.
Detailed Description
The embodiments of the invention provide a method and a device for controlling a movable device, which achieve the technical effect of controlling the movable device through human body postures without an additional controller.
To solve the above technical problem, the technical solution provided by the invention is as follows:
According to the technical solution, an image is first acquired through an image acquisition unit and recognized to obtain a recognition result, the recognition result indicating that a human body exists in the acquired image and that the human body posture is a first human body posture. First motion description information corresponding to the first human body posture is then superposed with the second motion description information currently executed by the movable device to obtain third motion description information. Finally, a target movement parameter corresponding to the third motion description information is calculated, and the movable device is controlled to move according to the target movement parameter. In this way, the human body posture is recognized, the first motion description information corresponding to the first human body posture is superposed on the second motion description information, and the movable device is controlled to move according to the target movement parameter corresponding to the superposed third motion description information. The movable device is thus controlled according to the human body posture, and the user can control its movement by making different postures, without any additional controller.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples are detailed explanations of the technical solutions of the present application rather than limitations on them, and that the technical features in the embodiments and examples may be combined with each other where no conflict arises.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
A first aspect of the invention provides a method of controlling a movable device. The movable device is anything that can move, for example a self-balancing vehicle, a robot, or an unmanned aerial vehicle; the present invention is not particularly limited. Referring to fig. 1, a flowchart of a method for controlling a movable device according to an embodiment of the invention is shown. The method comprises the following steps:
S101: acquiring an image through an image acquisition unit, and recognizing the acquired image to obtain a recognition result;
S102: superposing first motion description information corresponding to the first human body posture and the current second motion description information of the movable device to obtain third motion description information;
S103: calculating a target movement parameter corresponding to the third motion description information, and controlling the movable device to move according to the target movement parameter.
Specifically, the movable device in the embodiment of the present invention is equipped with an image acquisition unit. In a specific implementation, the image acquisition unit may be any one or more of an RGB (Red, Green, Blue) image acquisition unit, a depth image acquisition unit, an infrared image acquisition unit, and a binocular image acquisition unit; the present invention is not particularly limited. The image acquisition unit acquires images within its acquisition range and transmits them to the device for controlling the movable device for recognition.
In S101, the image from the acquisition unit is recognized and a recognition result is obtained. In a specific implementation, there may be no user, one user, or multiple users in the acquisition range, and the human body posture may differ at different times, so the recognition result may indicate either that a human body exists in the acquired image or that no human body exists in it. For convenience of description, the embodiment of the present invention assumes that the recognition result indicates that a human body exists in the acquired image and that the human body posture is the first human body posture. The first human body posture is, for example, a posture of waving a hand toward the inside of the body or a posture of pushing a hand toward the outside of the body; the present invention is not particularly limited.
Next, in S102, the first motion description information corresponding to the first human body posture and the current second motion description information of the movable device are superposed to obtain the third motion description information. Specifically, the motion description information in the embodiment of the present invention is information representing motion, such as a motion vector, an angular velocity, a linear velocity in a horizontal plane, or a linear velocity in a vertical plane. Superposing the first motion description information with the second motion description information yields the superposed motion description information, namely the third motion description information.
Next, in S103, the target movement parameter corresponding to the third motion description information is calculated. Specifically, the movement parameters in the embodiment of the present invention include, but are not limited to, a linear velocity, an angular velocity, a linear velocity in a horizontal plane, and a linear velocity in a vertical plane; the present invention is not particularly limited. The target movement parameter is back-calculated from the third motion description information, and the movable device is then controlled to move according to the target movement parameter.
Because the target movement parameter is back-calculated from the third motion description information, and the third motion description information incorporates the first motion description information corresponding to the user's first human body posture, controlling the movable device to move according to the target movement parameter achieves the technical effect of controlling the movement of the movable device in response to the user's first human body posture. Thus, when the user needs to control the movable device, no additional controller is required; the user only needs to make different human body postures.
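For concreteness, the superposition of S102 and the back-calculation of S103 can be sketched as follows. This is an illustrative Python sketch only, not the patent's implementation; the planar-vector representation and all function names are assumptions.

```python
import math

def superpose(v1, v2):
    """Superpose two planar motion vectors (vx, vy) component-wise."""
    return (v1[0] + v2[0], v1[1] + v2[1])

def to_movement_params(v3):
    """Back-calculate illustrative movement parameters from the third motion
    vector: a speed, and a heading offset (rad) from the advancing direction."""
    speed = math.hypot(v3[0], v3[1])
    heading_offset = math.atan2(v3[1], v3[0])
    return speed, heading_offset

# second motion vector: tracking the user forward at 1 m/s;
# first motion vector: a "wave toward the body" gesture adds 2 m/s forward
v3 = superpose((2.0, 0.0), (1.0, 0.0))
print(to_movement_params(v3))   # (3.0, 0.0) -> accelerate toward the user
```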
Specifically, in S101 of the embodiment of the present invention, recognition of the acquired image may be realized through the following process:
carrying out human body detection in the acquired image to obtain a detection result;
when the detection result shows that a human body exists in the acquired image, inputting a target tracking area where the human body is located into a plurality of human body posture classifiers for matching, and obtaining a first matching score output by each human body posture classifier;
and determining the first human body posture corresponding to the highest first matching score as the human body posture.
Specifically, the present invention provides two modes of human body detection (pedestrian detection) in the acquired image.
The first mode: the movable device stores a human body detection model in a storage space in advance and reads it from the storage space when recognition is needed. In a specific implementation, the stored human body detection model may be any one or more of a face detection model, a half-body detection model, and a whole-body detection model; the present invention is not particularly limited. The movable device checks, according to the features of the human body detection model, whether features satisfying the model exist in the acquired image. If such features are detected, a detection result indicating that a human body exists in the acquired image is obtained; conversely, if no such features are detected, a detection result indicating that no human body exists in the acquired image is obtained.
The second mode: in the embodiment of the present invention, in order to perform human body detection on the acquired image, ACF (Aggregate Channel Features) of the acquired image may be computed first. The ACF features generally consist of 10 channels: 3 LUV color channels (L for luminance, U and V for chrominance), 1 gradient magnitude channel, and 6 gradient orientation histogram channels. Adjusting the quantization step of the gradient orientation changes the number of gradient orientation histogram channels. The LUV color channels may also be augmented, deleted, or replaced by other color channels, such as RGB or HSV (Hue, Saturation, Value).
In a specific implementation, the ACF features can be obtained in the usual way, that is, by scaling the image to different scales and computing the ACF features at each scale separately. Alternatively, to improve efficiency and shorten processing time without losing too much accuracy, the embodiment of the present invention may compute exact ACF features only at a subset of the scales and estimate the features at the remaining scales from those of nearby scales.
For example, if the required scales are 1, 7/8, 6/8, 5/8, 4/8, 3/8, 2/8 and 1/8, the embodiment of the present invention scales the image only to 1, 5/8 and 2/8 and computes the ACF channel features at those scales. The ACF channel features at the 7/8 and 6/8 scales are then estimated from the proportional relations between scale 1 and scales 7/8 and 6/8; those at the 4/8 and 3/8 scales from the proportional relations between scale 5/8 and scales 4/8 and 3/8; and those at the 1/8 scale from the proportional relation between scales 2/8 and 1/8, finally yielding all the ACF features.
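One way to realize this estimation is sketched below. This is a simplified sketch under stated assumptions: the single power-law exponent `lam`, the nearest-neighbour resampling, and the function name are illustrative, and practical ACF implementations fit the exponent per channel type rather than using one constant.

```python
import numpy as np

def estimate_channel(ref_channel, ref_scale, target_scale, lam=0.1):
    """Estimate an ACF channel at target_scale from a channel computed exactly
    at a nearby ref_scale, scaling channel energy by a power of the scale
    ratio (lam is an illustrative exponent)."""
    ratio = target_scale / ref_scale
    h, w = ref_channel.shape
    nh, nw = max(1, int(round(h * ratio))), max(1, int(round(w * ratio)))
    # nearest-neighbour resampling: a simplified stand-in for proper resampling
    ys = (np.arange(nh) / ratio).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / ratio).astype(int).clip(0, w - 1)
    return ref_channel[np.ix_(ys, xs)] * ratio ** (-lam)

# compute exact channels only at scales 1, 5/8 and 2/8, then estimate the rest:
exact_scale_1 = np.random.rand(64, 64).astype(np.float32)  # stand-in channel at scale 1
approx_7_8 = estimate_channel(exact_scale_1, 1.0, 7 / 8)
approx_6_8 = estimate_channel(exact_scale_1, 1.0, 6 / 8)
```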
After the ACF features are computed, a classifier is used to determine whether a human body is present in the acquired image. In the embodiment of the present invention, the classifiers that may be used include, but are not limited to, AdaBoost (adaptive boosting), Soft Cascade, structured SVMs, and Support Vector Machines (SVM); the present invention is not particularly limited. Taking an AdaBoost classifier as an example, the process of judging whether a human body exists in the acquired image is as follows: first extract positive and negative samples and compute their ACF features, then train an AdaBoost classifier; next, mine new negative samples with the existing AdaBoost classifier, compute their ACF features, and train a new AdaBoost classifier. The weighted output of each weak classifier is accumulated; a window is directly judged to be a negative sample when the accumulator falls below -1, and is judged to be a positive sample when it passes through all the weak classifiers. Finally, non-maximum suppression is applied to all positive samples to obtain the detection result.
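The accumulate-and-reject logic of this soft-cascade AdaBoost evaluation can be sketched as follows. The -1 rejection threshold is the one stated in the text; everything else (names, interface) is an illustrative assumption.

```python
def soft_cascade_decision(weak_scores, reject_threshold=-1.0):
    """Accumulate the weighted outputs of the weak classifiers for one
    detection window: reject as soon as the accumulator drops below the
    threshold, accept if the window survives all weak classifiers.
    (Accepted windows still go through non-maximum suppression afterwards.)"""
    accumulator = 0.0
    for score in weak_scores:
        accumulator += score
        if accumulator < reject_threshold:
            return False   # judged a negative sample early
    return True            # judged a positive sample
```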
In the specific implementation process, a person skilled in the art may select any one of the two implementation manners to detect a human body according to actual conditions, and the present invention is not limited in particular.
In a specific implementation, the movable device may track any detected human body as the target, or may track only certain users, for example users with authority to control the movable device, such as its owner, responding only to the human body postures of those authorized users. In the latter case, after a human body is detected, authority verification may be performed on it, for example by verifying whether the detected face matches a pre-stored face of a user with control authority. If the detected human body passes the authority verification, the movable device tracks the user, recognizes the user's human body posture, and calculates the third motion description information. Conversely, if the detected human body does not pass the authority verification, the movable device does not follow the user, does not recognize the user's human body posture, and does not calculate the corresponding third motion description information.
Further, if a plurality of human bodies are identified from the acquired image, it is necessary to perform authority verification for each human body. If only one of the plurality of human bodies passes the authority verification, the movable equipment only tracks the human body passing the authority verification, and further identifies the human body posture of the human body, so that third motion description information is calculated. If two or more human bodies in the plurality of human bodies pass the authority verification, the movable equipment only tracks the human body with the highest authority or tracks the human body selected by the user, and further identifies the human body posture of the human body, so as to calculate the third motion description information.
Then, when the detection result shows that a human body exists in the acquired image, or a user with authority to control the movable device is detected, the target tracking area where the human body is located is input into the plurality of human body posture classifiers for matching, and the first matching score output by each human body posture classifier is obtained.
Specifically, the target tracking area is an area including a human body, such as a circumscribed rectangle of the human body, or a non-circumscribed rectangle including the human body, and the present invention is not particularly limited. A plurality of human posture classifiers are stored in the movable equipment in advance, and each classifier corresponds to a different human posture, such as a posture of waving a hand towards the inner side of the body, a posture of pushing a hand towards the outer side of the body, a posture of pointing the arm to the left of the movable equipment, a posture of pointing the arm to the right of the movable equipment, a posture of connecting two arms into a ring, and the like. And inputting the target tracking area into each human posture classifier to obtain a first matching score of the target tracking area and each human posture classifier. The higher the first matching score is, the more matched the target tracking area is with the human posture classifier, and further the human posture in the target tracking area is closer to the human posture represented by the human posture classifier.
Therefore, in the embodiment of the present invention, the first human body posture corresponding to the highest first matching score is determined to be the human body posture of the user in the target tracking area.
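The selection of the posture with the highest first matching score can be sketched as follows; the classifier callables and all names are an assumed interface, not the patent's implementation.

```python
def classify_pose(target_roi, pose_classifiers):
    """Run the target tracking area through every human posture classifier and
    keep the posture with the highest first matching score.
    pose_classifiers: dict mapping a posture name to a callable that returns
    a matching score for the given region (an assumed interface)."""
    scores = {pose: clf(target_roi) for pose, clf in pose_classifiers.items()}
    best_pose = max(scores, key=scores.get)
    return best_pose, scores[best_pose]
```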
In addition, in the embodiment of the present invention, S102 may be implemented by the following processes:
superposing the first motion vector corresponding to the first human body posture with the second motion vector currently executed by the movable equipment to obtain a third motion vector;
wherein the magnitude and direction of the first motion vector are determined from the first human body posture, and the magnitude of the second motion vector is determined from the current speed of the movable device and its direction from the current direction of motion.
After the human body posture of the user is determined to be the first human body posture, the first motion vector corresponding to the first human body posture is read. For the second motion vector, the direction and speed of the current motion of the movable device, and the distance and direction between the movable device and the target, are acquired, and the second motion vector is synthesized from them. The first motion vector and the second motion vector are then superposed to obtain a third motion vector, which is the third motion description information.
Finally, in S103, the target movement parameter is back-calculated from the third motion vector, and the movable device is controlled to move with the target movement parameter.
For example, suppose the movable device is a robot, and the human body postures stored in the robot are a posture of waving a hand toward the inside of the body, a posture of pushing a hand toward the outside of the body, a posture of the arm pointing to the robot's left, a posture of the arm pointing to the robot's right, and a posture of the two arms joined into a ring. The posture of waving a hand toward the inside of the body corresponds to a motion vector directed from the robot toward the human body with a speed of 2 m/s; the posture of pushing a hand toward the outside of the body corresponds to a motion vector directed from the human body toward the robot with a speed of 4 m/s; the posture of the arm pointing to the robot's left corresponds to a motion vector directed to the robot's left with a speed of 2 m/s; the posture of the arm pointing to the robot's right corresponds to a motion vector directed to the robot's right with a speed of 2 m/s; and the posture of the two arms joined into a ring corresponds to two motion vectors: a first sub-motion vector equal in magnitude and opposite in direction to the second motion vector, and a second sub-motion vector of 2 m/s directed along the tangent of the circle centered on the human body whose radius is the robot's distance from the human body.
Suppose the robot is behind the tracked user, following the user's movement at a speed of 1 m/s, so the second motion vector is directed from the robot toward the human body with a magnitude of 1 m/s. The user turns around and makes the posture of waving a hand toward the inside of the body. The robot recognizes the acquired image, determines that the human body posture is the posture of waving a hand toward the inside of the body, and reads the corresponding first motion vector. The first and second motion vectors are then superposed to obtain a third motion vector directed from the robot toward the human body with a speed of 3 m/s. The target movement parameters back-calculated from the third motion vector are a linear velocity of +3 m/s and an angular velocity of 0 rad/s. Thus, when the user makes the posture of waving a hand toward the inside of the body, the robot accelerates toward the user.
Alternatively, the user turns around and makes the posture of pushing a hand toward the outside of the body. The robot recognizes the acquired image, determines that the human body posture is the posture of pushing a hand toward the outside of the body, and reads the corresponding first motion vector. The first and second motion vectors are then superposed to obtain a third motion vector directed from the human body toward the robot with a speed of 3 m/s. The target movement parameters back-calculated from the third motion vector are a linear velocity of -3 m/s and an angular velocity of 0 rad/s. Thus, when the user makes the posture of pushing a hand toward the outside of the body, the robot approaches the user more slowly, or even backs away.
Alternatively, the user turns around and makes the posture of the arm pointing to the robot's left. The robot recognizes the acquired image, determines that the human body posture is the posture of the arm pointing to the robot's left, and reads the corresponding first motion vector. The first and second motion vectors are then superposed to obtain a third motion vector directed 63.5° to the left of the advancing direction with a speed of 2.23 m/s. The target movement parameters back-calculated from the third motion vector are a linear velocity of +2.23 m/s and an angular velocity of +0.46 rad/s. Thus, when the user makes the posture of the arm pointing to the robot's left, the robot moves to the user's left rear while continuing to track.
Alternatively, the user turns around and makes the posture of the arm pointing to the robot's right. The robot recognizes the acquired image, determines that the human body posture is the posture of the arm pointing to the robot's right, and reads the corresponding first motion vector. The first and second motion vectors are then superposed to obtain a third motion vector directed 63.5° to the right of the advancing direction with a speed of 2.23 m/s. The target movement parameters back-calculated from the third motion vector are a linear velocity of +2.23 m/s and an angular velocity of -0.46 rad/s. Thus, when the user makes the posture of the arm pointing to the robot's right, the robot moves to the user's right rear while continuing to track.
Alternatively, the user turns around and makes the posture of the two arms joined into a ring. The robot recognizes the acquired image, determines that the human body posture is the posture of the arms joined into a ring, and reads the corresponding first and second sub-motion vectors. The first sub-motion vector, the second sub-motion vector, and the second motion vector are then superposed to obtain a third motion vector of 2 m/s directed along the tangent of the circle centered on the human body whose radius is the distance from the human body. The target movement parameters back-calculated from the third motion vector are a linear velocity of -2 m/s and an angular velocity of +1.5 rad/s. Thus, when the user makes the posture of the arms joined into a ring, the robot circles around the tracked user.
It should be understood that, in the robot examples above, the linear velocity is a velocity in a horizontal plane parallel to the ground; a "+" before the linear velocity indicates that the drive producing the linear velocity runs forward, and a "-" indicates that it runs in reverse. Similarly, a "+" before the angular velocity indicates that the drive producing the angular velocity runs forward, and a "-" indicates that it runs in reverse.
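The arm-points-left arithmetic above can be checked in a few lines (illustrative: a 1 m/s forward second motion vector plus a 2 m/s leftward first motion vector):

```python
import math

forward = (1.0, 0.0)       # second motion vector: following the user at 1 m/s
gesture_left = (0.0, 2.0)  # first motion vector: 2 m/s toward the robot's left
v3 = (forward[0] + gesture_left[0], forward[1] + gesture_left[1])

speed = math.hypot(v3[0], v3[1])                 # sqrt(5) ≈ 2.236 m/s
angle = math.degrees(math.atan2(v3[1], v3[0]))   # ≈ 63.43° left of the advancing direction
print(round(speed, 2), round(angle, 1))          # 2.24 63.4
```

This agrees with the 2.23 m/s and 63.5° of the example above up to rounding.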
For another example, suppose the movable device is an unmanned aerial vehicle (drone), and the stored human body postures are the same five as above. The posture of waving a hand toward the inside of the body corresponds to a motion vector directed from the drone toward the human body with a speed of 2 m/s; the posture of pushing a hand toward the outside of the body corresponds to a motion vector directed from the human body toward the drone with a speed of 4 m/s; the posture of the arm pointing to the drone's left corresponds to a motion vector directed to the drone's left with a speed of 2 m/s (in the horizontal plane); the posture of the arm pointing to the drone's right corresponds to a motion vector directed to the drone's right with a speed of 2 m/s (in the horizontal plane); and the posture of the two arms joined into a ring corresponds to two motion vectors: a first sub-motion vector equal in magnitude and opposite in direction to the second motion vector, and a second sub-motion vector of 2 m/s directed along the tangent, at the current flying height, of the circle centered on the human body whose radius is the distance from the human body.
The drone is currently at a height of 2 m in front of the user, tracking the user's movement at a speed of 1 m/s. For convenience, refer to FIG. 2, which shows the second motion vector v2 and the human body in a geodetic coordinate system. The user makes the posture of waving a hand toward the inside of the body. By recognizing the acquired image, the drone determines that the human body posture is the posture of waving a hand toward the inside of the body and reads the corresponding first motion vector v1. The vectors v1 and v2 are then superposed to obtain the third motion vector v3 shown in FIG. 3a. The target movement parameters back-calculated from v3 are a linear velocity (in the horizontal plane) of +3.2 m/s, a linear velocity (in the vertical plane) of -1.5 m/s, and an angular velocity of 0 rad/s. Thus, when the user makes the posture of waving a hand toward the inside of the body, the drone approaches the user.
Alternatively, the user makes the posture of pushing a hand toward the outside of the body. By recognizing the acquired image, the drone determines that the human body posture is the posture of pushing a hand toward the outside of the body and reads the corresponding first motion vector v1. The vectors v1 and v2 are then superposed to obtain the third motion vector v3 shown in FIG. 3b. The target movement parameters back-calculated from v3 are a linear velocity (in the horizontal plane) of -1.2 m/s, a linear velocity (in the vertical plane) of +1.5 m/s, and an angular velocity of 0 rad/s. Thus, when the user makes the posture of pushing a hand toward the outside of the body, the drone approaches the user more slowly or even moves away.
Alternatively, the user makes the posture of the arm pointing to the drone's left. By recognizing the acquired image, the drone determines that the human body posture is the posture of the arm pointing to the drone's left and reads the corresponding first motion vector v1. The vectors v1 and v2 are then superposed to obtain the third motion vector v3 shown in FIG. 3c. The target movement parameters back-calculated from v3 are a linear velocity (in the horizontal plane) of +2.23 m/s, a linear velocity (in the vertical plane) of 0 m/s, and an angular velocity of +0.46 rad/s. Thus, when the user makes the posture of the arm pointing to the drone's left, the drone flies to the left.
Alternatively, the user makes the posture of the arm pointing to the drone's right. By recognizing the acquired image, the drone determines that the human body posture is the posture of the arm pointing to the drone's right and reads the corresponding first motion vector v1. The vectors v1 and v2 are then superposed to obtain the third motion vector v3 shown in FIG. 3d. The target movement parameters back-calculated from v3 are a linear velocity (in the horizontal plane) of +2.23 m/s, a linear velocity (in the vertical plane) of 0 m/s, and an angular velocity of -0.46 rad/s. Thus, when the user makes the posture of the arm pointing to the drone's right, the drone flies to the right.
Alternatively, the user makes the posture of the two arms joined into a ring. By recognizing the acquired image, the drone determines that the human body posture is the posture of the arms joined into a ring and reads the corresponding first sub-motion vector v1a and second sub-motion vector v1b. The vectors v1a, v1b, and v2 are then superposed to obtain the third motion vector v3 shown in FIG. 3e. The target movement parameters back-calculated from v3 are a linear velocity (in the horizontal plane) of +2 m/s, a linear velocity (in the vertical direction) of 0 m/s, and an angular velocity of +1.5 rad/s. Thus, when the user makes the posture of the arms joined into a ring, the drone flies in a circle around the user's position at its current flying height.
It should be understood that, in the drone examples above, the linear velocity (in the horizontal plane) is a velocity in a horizontal plane parallel to the ground; a "+" before it indicates that the drive producing the horizontal linear velocity runs forward, and a "-" indicates that it runs in reverse. The linear velocity (in the vertical plane) is a velocity in a plane perpendicular to the ground; a "+" before it indicates that the drive producing the vertical linear velocity runs forward, increasing the drone's flying height, and a "-" indicates that it runs in reverse, decreasing the flying height. Similarly, a "+" before the angular velocity indicates that the drive producing the angular velocity runs forward, and a "-" indicates that it runs in reverse.
It can be seen from the above description that the user can control the moving direction and speed of the mobile device through different body gestures, so that when the user needs to control the mobile device, the user does not need an additional controller, and only needs to make different body gestures.
As an optional embodiment, the method in the embodiment of the present invention may further include:
judging whether the mobile equipment loses a target or not;
when the movable device loses the target, searching for the target again.
Specifically, in the embodiment of the present invention, the second motion description information is the motion description information of the target tracked by the movable device, and the tracked target is the human body, i.e., the user, or the user having authority to control the movable device. When the movable device recognizes a human body in S101, it tracks the human body as the target according to a visual tracking algorithm. The visual tracking algorithm in the embodiment of the present invention may be the DSST (Discriminative Scale Space Tracker) algorithm or the SRDCF (Spatially Regularized Discriminative Correlation Filters) algorithm; a person of ordinary skill in the art may select one according to the actual situation, and the present invention is not particularly limited.
Due to the irregularity and unpredictability of human body movement, the target may be lost while the movable device is tracking the human body. Therefore, in the embodiment of the present invention, it is also necessary to judge whether the movable device has lost the target, which can be realized through the following process:
training a target model by using the target tracking area of a first frame of image in the acquired images;
matching the target tracking area of a second frame image behind the first frame image based on the target model to obtain a second matching score;
judging whether the second matching score reaches a threshold value; wherein the movable device is determined to have lost the target when the second matching score does not reach the threshold.
Specifically, the first frame image in the embodiment of the present invention is any frame in which a human body is recognized during the current following process, or any frame in which the face of a user with authority to control the movable device is recognized. The second frame image is the next frame acquired after the first frame image. The target tracking area of the first frame image is input into the visual tracking algorithm to train the target model.
Take DSST as an example. DSST is an online target tracking and model training method based on correlation filters. The first frame image is input into the online-trained filter model, and the position and scale of the target in the image are determined from the output response. The parameters of the online-trained filter model are determined by minimizing the error between the training-sample output and the desired output, the target tracking area serving as the training sample. FHOG (Fused Histogram of Oriented Gradients) features are extracted from the target tracking area, and a two-dimensional Gaussian function peaked at the center of the FHOG features is constructed as the desired output. The parameters of the filter model are then determined by minimizing the least-squares error between the filter response and the desired output. The parameters of the filter model are set to the determined values, and the online-trained filter with these parameters is taken as the target model.
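The online filter training can be sketched in single-channel form as follows. This is a simplified sketch, not the DSST implementation: DSST actually trains multi-channel FHOG-based translation and scale filters, whereas the grayscale channel, the regularization constant, and the function names here are assumptions.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    """Train a correlation filter on the target tracking area (single-channel
    sketch): the desired output is a 2-D Gaussian peaked at the patch centre,
    and the filter minimises the squared error between its response and that
    desired output, which has the closed-form Fourier-domain solution below."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2.0 * sigma ** 2))
    F = np.fft.fft2(patch)
    G = np.fft.fft2(g)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)  # lam regularises the division

def respond(H, patch):
    """Correlate a search patch with the trained filter; the location of the
    response peak gives the target position."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
```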
Then, FHOG features are extracted from the search area of the second frame image and input into the online-trained filter model to obtain the position and scale of the target tracking area of the second frame image. The target tracking area of the second frame image is then added to the training samples, and the online-trained filter model is updated.
Next, the second matching score is computed from the degree of match between the target model, i.e., the online-trained filter model obtained from the first training, and the online-trained filter model obtained from the second training. Alternatively, the distance between the position of the target tracking area in the first frame image and that in the second frame image is computed and taken as the second matching score. Alternatively, the ratio of the response peak of the second frame image to the mean of the non-peak values is taken as the second matching score. It is then judged whether the second matching score reaches the threshold. If it does, the targets found in the two frames are similar, i.e., the same human body, or the same user with authority to control the movable device, has been detected twice, and it is determined that the movable device has not lost the target. Conversely, if the second matching score does not reach the threshold, the two detections are dissimilar, i.e., no human body is detected in the second frame image, or the user detected there is not the same authorized user as in the first frame image, and it is determined that the movable device has lost the target.
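The peak-to-non-peak-mean variant of the second matching score, and the loss judgment against the threshold, can be sketched as follows; the threshold value is an illustrative assumption.

```python
import numpy as np

def second_matching_score(response):
    """Ratio of the response peak to the mean of the non-peak values, one of
    the score variants described above."""
    peak = float(response.max())
    non_peak_mean = (response.sum() - peak) / (response.size - 1)
    return peak / (abs(non_peak_mean) + 1e-12)   # guard against division by zero

def target_lost(response, threshold=8.0):
    """The movable device is judged to have lost the target when the second
    matching score does not reach the threshold."""
    return second_matching_score(response) < threshold
```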
In the embodiment of the present invention, when the movable device loses the target, S101 is performed again so as to search for the target anew.
As can be seen from the above description, while the movable device tracks the human body it is judged whether the target has been lost, and when the target is lost it is sought again, thereby improving the tracking reliability of the movable device.
Further, after the target is found again, the method may further include:
when the suspected target is found again, matching a suspected target tracking area where the suspected target is located based on the target model to obtain a third matching score;
determining that the suspected target is the target when the third match score reaches the threshold.
Specifically, a suspected target in the embodiment of the present invention is a human body, found after the movable device has been determined to have lost the target, that may be the target. After a suspected target is found, in order to determine whether it is the target to be tracked, the movable device performs matching on the suspected target tracking area where the suspected target is located based on the target model, so as to obtain a third matching score. In the embodiment of the present invention, the method for obtaining the third matching score is similar to that for obtaining the second matching score, so it is not repeated here.
When the third matching score reaches the threshold, the currently found suspected target is highly similar to the target; it is therefore determined to be the target, and tracking of the target resumes. When the third matching score does not reach the threshold, the similarity between the currently found suspected target and the target is low; it is therefore determined not to be the target, the suspected target is discarded, and other suspected targets are sought.
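The re-acquisition check can be sketched as follows; `H` is a filter model of the kind trained in the earlier sketch, and the threshold value is an assumption.

```python
import numpy as np

def confirm_suspected_target(H, suspect_patch, threshold=8.0):
    """Match the suspected target tracking area against the target model H and
    accept the suspected target as the target only if the third matching score
    reaches the threshold; otherwise it is discarded and the search continues."""
    response = np.real(np.fft.ifft2(H * np.fft.fft2(suspect_patch)))
    peak = float(response.max())
    non_peak_mean = (response.sum() - peak) / (response.size - 1)
    third_matching_score = peak / (abs(non_peak_mean) + 1e-12)
    return third_matching_score >= threshold, third_matching_score
```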
According to the above description, when a suspected target is found again it is judged whether the suspected target is the target, which further improves the tracking reliability of the movable device.
Based on the same inventive concept as the method of controlling a movable device in the foregoing embodiment, a second aspect of the present invention also provides an apparatus for controlling a movable device, as shown in fig. 4, including:
the recognition module 101 is used for acquiring an image through the image acquisition unit and recognizing the acquired image to obtain a recognition result; the recognition result indicates that a human body exists in the acquired image, and the human body posture is a first human body posture;
the superposition module 102 is configured to superpose first motion description information corresponding to the first human body posture and second motion description information currently executed by the mobile device, so as to obtain third motion description information;
the control module 103 is configured to calculate a target movement parameter corresponding to the third motion description information, and control the mobile device to move according to the target movement parameter.
Specifically, the recognition module 101 is configured to perform human body detection on the acquired image to obtain a detection result; when the detection result shows that a human body exists in the acquired image, inputting a target tracking area where the human body is located into a plurality of human body posture classifiers for matching, and obtaining a first matching score output by each human body posture classifier; and determining the first human body posture corresponding to the highest first matching score as the human body posture.
The second motion description information is the motion description information of the target tracked by the movable device, the target being the human body. Further, the apparatus in the embodiment of the present invention includes:
the judging module is used for judging whether the movable equipment loses a target or not;
and the searching module is used for searching the target again when the movable equipment loses the target.
The judging module is configured to train a target model using the target tracking area of a first frame image in the acquired images; match the target tracking area of a second frame image after the first frame image based on the target model to obtain a second matching score; and judge whether the second matching score reaches a threshold value, wherein the movable device is determined to have lost the target when the second matching score does not reach the threshold.
Furthermore, the apparatus in the embodiment of the present invention further includes:
an obtaining module, configured to, when a suspected target is found during the renewed search for the target, match the suspected target tracking area where the suspected target is located based on the target model to obtain a third matching score;
a determination module, configured to determine that the suspected target is the target when the third matching score reaches the threshold.
In this embodiment of the present invention, the motion description information is a motion vector, and the superposition module 102 is configured to superpose a first motion vector corresponding to the first human body posture and a second motion vector currently executed by the movable device, so as to obtain a third motion vector; wherein the magnitude and direction of the first motion vector are determined from the first human body posture, and the magnitude of the second motion vector is determined according to the current speed of the movable device and its direction according to the current direction of motion.
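A minimal sketch of this superposition in two dimensions follows; the pose-to-vector table and all numeric values are invented for illustration, and only the vector addition itself comes from the description above:

```python
import math

# Assumed mapping: each recognized posture contributes a fixed motion vector.
POSE_VECTORS = {"arm_left": (-1.0, 0.0), "arm_up": (0.0, 1.0)}

def superpose(first_pose, speed, heading_rad):
    # Second motion vector: magnitude from the device's current speed,
    # direction from its current direction of motion.
    second = (speed * math.cos(heading_rad), speed * math.sin(heading_rad))
    first = POSE_VECTORS[first_pose]                      # first motion vector
    third = (first[0] + second[0], first[1] + second[1])  # superposition
    # Target movement parameters derived from the third motion vector.
    target_speed = math.hypot(third[0], third[1])
    target_heading = math.atan2(third[1], third[0])
    return target_speed, target_heading
```

With the illustrative table above, `superpose("arm_up", 1.0, 0.0)` yields a target speed of about 1.41 at a heading of 45 degrees: the device keeps its forward motion but is deflected by the posture command.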
Various modifications and specific examples of the method of controlling a movable device in the foregoing embodiments of figs. 1-3e also apply to the apparatus of controlling a movable device in this embodiment; from the foregoing detailed description of the method, its implementation will be clear to those skilled in the art, so it is not repeated here for the sake of brevity.
One or more technical solutions in the embodiments of the present application achieve at least the following technical effects:
In the above technical solution, an image is first acquired by an image acquisition unit and recognized to obtain a recognition result, the recognition result indicating that a human body exists in the acquired image and that the human body posture is a first human body posture. First motion description information corresponding to the first human body posture is then superposed with second motion description information currently executed by the movable device to obtain third motion description information. Finally, a target movement parameter corresponding to the third motion description information is calculated, and the movable device is controlled to move according to the target movement parameter. In this way, the movable device is controlled according to the recognized human body posture: the user can control its movement by making different postures, with no additional controller required.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A method of controlling a movable device, comprising:
acquiring an image through an image acquisition unit, and identifying the acquired image to obtain a recognition result; the recognition result indicates that a human body exists in the acquired image and that the human body posture is a first human body posture;
superposing first motion description information corresponding to the first human body posture and second motion description information currently executed by the movable device to obtain third motion description information; wherein the motion description information is a motion vector, and superposing the first motion description information corresponding to the first human body posture and the second motion description information currently executed by the movable device to obtain the third motion description information comprises: superposing a first motion vector corresponding to the first human body posture with a second motion vector currently executed by the movable device to obtain a third motion vector; wherein the magnitude and direction of the first motion vector are determined from the first human body posture, the magnitude of the second motion vector is determined according to the current speed of the movable device, and its direction is determined according to the current direction of motion;
and calculating a target movement parameter corresponding to the third motion description information, and controlling the movable device to move according to the target movement parameter so as to track the human body with the movable device.
2. The method of claim 1, wherein identifying the acquired image to obtain a recognition result comprises:
carrying out human body detection in the acquired image to obtain a detection result;
when the detection result shows that a human body exists in the acquired image, inputting a target tracking area where the human body is located into a plurality of human body posture classifiers for matching, and obtaining a first matching score output by each human body posture classifier;
and determining the human body posture corresponding to the highest first matching score as the first human body posture.
3. The method of claim 2, wherein the second motion description information is motion description information of a target tracked by the movable device, the target being the human body, the method further comprising:
judging whether the movable device has lost the target;
when the movable device loses the target, searching for the target again.
4. The method of claim 3, wherein judging whether the movable device has lost the target comprises:
training a target model using the target tracking area of a first frame image in the acquired images;
matching the target tracking area of a second frame image after the first frame image based on the target model to obtain a second matching score;
judging whether the second matching score reaches a threshold value; wherein when the second matching score does not reach the threshold, the movable device is indicated to have lost the target.
5. The method of claim 4, wherein, after searching for the target again, the method further comprises:
when a suspected target is found, matching a suspected target tracking area where the suspected target is located based on the target model to obtain a third matching score;
determining that the suspected target is the target when the third matching score reaches the threshold.
6. An apparatus for controlling a movable device, comprising:
the identification module is used for acquiring an image through the image acquisition unit and identifying the acquired image to obtain a recognition result; the recognition result indicates that a human body exists in the acquired image and that the human body posture is a first human body posture;
the superposition module is used for superposing first motion description information corresponding to the first human body posture and second motion description information currently executed by the movable device to obtain third motion description information; wherein the motion description information is a motion vector, and the superposition module is used for superposing a first motion vector corresponding to the first human body posture with a second motion vector currently executed by the movable device to obtain a third motion vector; wherein the magnitude and direction of the first motion vector are determined from the first human body posture, the magnitude of the second motion vector is determined according to the current speed of the movable device, and its direction is determined according to the current direction of motion;
and the control module is used for calculating a target movement parameter corresponding to the third motion description information and controlling the movable device to move according to the target movement parameter so as to track the human body with the movable device.
7. The apparatus of claim 6, wherein the recognition module is configured to perform human body detection in the acquired image to obtain a detection result; when the detection result shows that a human body exists in the acquired image, input a target tracking area where the human body is located into a plurality of human body posture classifiers for matching, and obtain a first matching score output by each human body posture classifier; and determine the human body posture corresponding to the highest first matching score as the first human body posture.
8. The apparatus of claim 7, wherein the second motion description information is motion description information of a target tracked by the movable device, the target being the human body, the apparatus further comprising:
the judging module is used for judging whether the movable device has lost the target;
and the searching module is used for searching for the target again when the movable device loses the target.
9. The apparatus of claim 8, wherein the judging module is configured to train a target model using the target tracking area of a first frame image in the acquired images; match the target tracking area of a second frame image after the first frame image based on the target model to obtain a second matching score; and judge whether the second matching score reaches a threshold value, wherein when the second matching score does not reach the threshold, the movable device is indicated to have lost the target.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the obtaining module is used for, when a suspected target is found after the target is searched for again, matching a suspected target tracking area where the suspected target is located based on the target model to obtain a third matching score;
a determination module, configured to determine that the suspected target is the target when the third matching score reaches the threshold.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201611051269.8A | 2016-11-23 | 2016-11-23 | Method and device for controlling movable equipment |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN108089695A | 2018-05-29 |
| CN108089695B | 2021-05-18 |
Family
ID=62171145
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201611051269.8A (CN108089695B, Active) | Method and device for controlling movable equipment | 2016-11-23 | 2016-11-23 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN108089695B (en) |
Families Citing this family (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108830219B * | 2018-06-15 | 2022-03-18 | 北京小米移动软件有限公司 | Target tracking method and device based on man-machine interaction and storage medium |
| CN109032039B * | 2018-09-05 | 2021-05-11 | 出门问问创新科技有限公司 | Voice control method and device |
| CN109460031A * | 2018-11-28 | 2019-03-12 | 科大智能机器人技术有限公司 | A kind of system for tracking of the automatic tractor based on human bioequivalence |
| CN109558835A * | 2018-11-28 | 2019-04-02 | 科大智能机器人技术有限公司 | A kind of control method and its system of the automatic tractor based on human bioequivalence |
| CN115690194B * | 2022-10-17 | 2023-09-19 | 广州赤兔宸行科技有限公司 | Vehicle-mounted XR equipment positioning method, device, equipment and storage medium |
| CN116524569A * | 2023-05-10 | 2023-08-01 | 深圳大器时代科技有限公司 | Multi-concurrency face recognition system and method based on classification algorithm |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104808799A * | 2015-05-20 | 2015-07-29 | 成都通甲优博科技有限责任公司 | Unmanned aerial vehicle capable of indentifying gesture and identifying method thereof |
| CN104820998A * | 2015-05-27 | 2015-08-05 | 成都通甲优博科技有限责任公司 | Human body detection and tracking method and device based on unmanned aerial vehicle mobile platform |
| CN105159452A * | 2015-08-28 | 2015-12-16 | 成都通甲优博科技有限责任公司 | Control method and system based on estimation of human face posture |
| CN105607740A * | 2015-12-29 | 2016-05-25 | 清华大学深圳研究生院 | Unmanned aerial vehicle control method and device based on computer vision |
| CN105892493A * | 2016-03-31 | 2016-08-24 | 纳恩博(北京)科技有限公司 | Information processing method and mobile device |
Family Cites Families (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102353231B1 * | 2015-04-24 | 2022-01-20 | 삼성디스플레이 주식회사 | Flying Display |
| CN105929838B * | 2016-05-20 | 2019-04-02 | 腾讯科技(深圳)有限公司 | The flight control method and mobile terminal and flight control terminal of a kind of aircraft |
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN108089695A | 2018-05-29 |
Similar Documents

| Publication | Title |
|---|---|
| CN108089695B | Method and device for controlling movable equipment |
| KR101645722B1 | Unmanned aerial vehicle having Automatic Tracking and Method of the same |
| US9201425B2 | Human-tracking method and robot apparatus for performing the same |
| KR101769601B1 | Unmanned aerial vehicle having Automatic Tracking |
| CN109923583A | A kind of recognition methods of posture, equipment and moveable platform |
| WO2018058307A1 | Systems and methods for initialization of target object in a tracking system |
| WO2018001245A1 | Robot control using gestures |
| CN110427797B | Three-dimensional vehicle detection method based on geometric condition limitation |
| CN104281839A | Body posture identification method and device |
| WO2019127518A1 | Obstacle avoidance method and device and movable platform |
| CN111976744A | Control method and device based on taxi taking and automatic driving automobile |
| KR102493149B1 | Image processing device and moving robot including the same |
| Lee et al. | Independent object detection based on two-dimensional contours and three-dimensional sizes |
| CN112655021A | Image processing method, image processing device, electronic equipment and storage medium |
| CN108717553B | Method and system for robot to follow human body |
| US20220415088A1 | Extended reality gesture recognition proximate tracked object |
| KR101656519B1 | Unmanned aerial vehicle having Automatic Tracking |
| CN114089364A | Integrated sensing system device and implementation method |
| KR101876543B1 | Apparatus and method for estimating a human body pose based on a top-view image |
| Germi et al. | Estimation of moving obstacle dynamics with mobile RGB-D camera |
| Bakar et al. | Development of a doctor following mobile robot with mono-vision based marker detection |
| KR20200079070A | Detecting system for approaching vehicle in video and method thereof |
| Li et al. | ML-fusion based multi-model human detection and tracking for robust human-robot interfaces |
| EP4394733A1 | Method and electronic device for determining user's hand in video |
| CN108171121A | UAV Intelligent tracking and system |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |