CN111736607B - Robot motion guiding method, system and terminal based on foot motion - Google Patents

Robot motion guiding method, system and terminal based on foot motion

Info

Publication number
CN111736607B
Authority
CN
China
Prior art keywords
foot
position information
detected
motion
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010600315.5A
Other languages
Chinese (zh)
Other versions
CN111736607A (en)
Inventor
韩磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Black Eye Intelligent Technology Co ltd
Original Assignee
Shanghai Black Eye Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Black Eye Intelligent Technology Co ltd filed Critical Shanghai Black Eye Intelligent Technology Co ltd
Priority to CN202010600315.5A priority Critical patent/CN111736607B/en
Publication of CN111736607A publication Critical patent/CN111736607A/en
Application granted granted Critical
Publication of CN111736607B publication Critical patent/CN111736607B/en


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a robot motion guiding method, system and terminal based on foot motion, comprising the following steps: acquiring and recording an image of the foot to be detected; obtaining foot position information; outputting position information in a world coordinate system; using the position information as a constraint condition to obtain the detection frame identifying the same foot to be detected; recording the detection frame's position information when its movement exceeds a threshold range; obtaining the motion trajectory of the foot to be detected; and obtaining, through a classifier, a motion instruction for guiding the robot and feeding it back to the robot. The invention can guide the robot to a user-designated position, making the guiding action more convenient and quick and greatly improving the user experience.

Description

Robot motion guiding method, system and terminal based on foot motion
Technical Field
The invention relates to the field of artificial intelligence, in particular to a robot motion guiding method, system and terminal based on foot motion.
Background
As quality of life improves, robots are being used widely, but most robots move only by avoiding obstacles. If a user wants to guide a robot to a designated position, the robot must be controlled through a remote control device, which wastes considerable time and effort; moreover, the remote control device requires maintenance and is prone to failure, in which case the guiding work cannot be performed at all. The user experience is therefore poor.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a robot motion guiding method, system and terminal based on foot motion, to solve the prior-art problems that a user who wants to guide a robot to a designated position must operate a remote control device, wasting considerable time and effort, and that the remote control device requires maintenance and is prone to failure, preventing the guiding work and degrading the user experience.
To achieve the above and other related objects, the present invention provides a robot motion guiding method based on foot motion, comprising: acquiring and recording an image of the foot to be detected; obtaining, based on a target detection algorithm, foot position information locating the position of the foot to be detected in the image; inputting the foot position information into a camera model to output the position of the foot to be detected in a world coordinate system; using the position information in the world coordinate system as a constraint condition to obtain the detection frame identifying the same foot to be detected; when the detection frame's movement is detected to exceed a threshold range, starting to continuously record the position information of a plurality of detection frames; obtaining the motion trajectory of the foot to be detected from the continuously recorded detection-frame position information; inputting the motion trajectory into a classifier to obtain a motion instruction for guiding the robot; and feeding the motion instruction indicating that the robot needs to be guided back to the robot, so as to guide the robot to the position of the foot to be detected.
In an embodiment of the present invention, the foot position information includes: position information of a target detection frame corresponding to the foot to be detected.
In an embodiment of the present invention, the foot position information is input into a camera model to output position information of the foot to be detected in a world coordinate system: and inputting the position information of the lower frame of the target detection frame corresponding to the foot to be detected into a camera model so as to output the position information of the foot to be detected under a world coordinate system.
In an embodiment of the present invention, the detecting frame for determining the same foot to be detected is obtained using the position information in the world coordinate system as a constraint condition: and under the condition that the foot position information under the world coordinate system is kept unchanged, obtaining the position information of a detection frame for determining the same foot to be detected.
In an embodiment of the present invention, when the detecting frame movement is detected to exceed the threshold range, the continuous recording of the plurality of detecting frame position information is started: when detecting that the motion displacement value of the center position of the detection frame exceeds a preset threshold value, starting to continuously record the center position information of a plurality of detection frames from the current frame; wherein each frame corresponds to the central position information of one detection frame.
In an embodiment of the present invention, the motion trail of the foot to be detected is obtained according to a plurality of continuously recorded detection frame position information: and calculating and obtaining the motion trail of the foot to be detected in the time range of the multi-frame according to the continuously recorded central position information of the plurality of detection frames corresponding to the multi-frame.
In an embodiment of the invention, the guiding motion instruction is either an instruction that the robot needs to be guided or an instruction that the robot does not need to be guided.
In an embodiment of the present invention, the number of the center position information of the detection frame is related to a frame rate used in the target detection algorithm.
To achieve the above and other related objects, the present invention provides a robot motion guide system based on foot motions, comprising: the acquisition module is used for acquiring and recording the images of the feet to be detected; the target detection module is connected with the acquisition module and used for acquiring foot position information for positioning the foot position to be detected in the image based on a target detection algorithm; the world coordinate position module is connected with the target detection module and is used for inputting the foot position information into a camera model so as to output the position information of the foot to be detected under a world coordinate system; the constraint module is connected with the world coordinate position module and is used for obtaining a detection frame for determining the same foot to be detected by taking the position information under the world coordinate system as constraint conditions; the detection frame position recording module is connected with the constraint module and is used for starting to continuously record a plurality of detection frame position information when detecting that the detection frame moves beyond a threshold range; the motion trail module is connected with the detection frame position recording module and is used for obtaining the motion trail of the foot to be detected according to the continuously recorded position information of the plurality of detection frames; the motion instruction generation module is connected with the motion trail module and is used for inputting the motion trail into a classifier to obtain a motion instruction of the guiding robot; and the guiding movement module is connected with the movement instruction generation module and used for feeding back a movement instruction of the guiding robot to be guided to the robot so as to guide the robot to move to the position of the foot to be detected.
To achieve the above and other related objects, the present invention provides a robot motion guide terminal based on foot motions, comprising: a memory for storing a computer program; and a processor running the computer program to perform the robot motion guidance method based on foot motions.
As described above, the robot motion guiding method, system and terminal based on foot motion of the present invention have the following beneficial effects: the robot is guided to the user-designated position through the motion trajectory of a foot at that designated position, making the guiding action more convenient and quick and greatly improving the user experience.
Drawings
Fig. 1 is a schematic flow chart of a robot motion guiding method based on foot motion according to an embodiment of the invention.
Fig. 2 is a schematic structural view of a robot motion guidance system based on foot motions according to an embodiment of the present invention.
Fig. 3 is a schematic view showing a structure of a robot motion guide terminal based on foot motions according to an embodiment of the present invention.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure herein, which describes embodiments of the invention by way of specific examples. The invention may also be practiced or applied in other, different embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit and scope of the invention. It should be noted that, absent any conflict, the following embodiments and the features within them may be combined with one another.
In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the invention. It is to be understood that other embodiments may be used and that mechanical, structural, electrical and operational changes may be made without departing from the spirit and scope of the invention. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments is defined only by the claims of the issued patent. The terminology used herein is for describing particular embodiments only and is not intended to limit the invention. Spatially relative terms such as "upper," "lower," "left," "right," "below" and "above" may be used herein to describe the relationship of one element or feature to another as illustrated in the figures.
Throughout the specification, when a portion is said to be "connected" to another portion, this includes not only "direct connection" but also "indirect connection" with another element interposed between them. In addition, saying that a portion "includes" a certain component does not, unless stated otherwise, exclude other components; it means that other components may also be included.
The terms first, second and third are used herein to describe various portions, components, regions, layers and/or sections, but these are not limited thereto. The terms serve only to distinguish one portion, component, region, layer or section from another. Thus, a first portion, component, region, layer or section discussed below could be termed a second portion, component, region, layer or section without departing from the scope of the invention.
Furthermore, as used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including" specify the presence of stated features, operations, elements, components, items, categories and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories and/or groups. The terms "or" and "and/or" as used herein are to be interpreted as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
The invention provides a robot motion guiding method based on foot motion, to solve the prior-art problems that a user who wants to guide a robot to a designated position must operate a remote control device, wasting considerable time and effort, and that the remote control device requires maintenance and is prone to failure, preventing the guiding work and degrading the user experience.
An embodiment of the present invention will be described in detail below with reference to fig. 1 so that those skilled in the art to which the present invention pertains can easily implement the present invention. This invention may be embodied in many different forms and is not limited to the embodiments described herein.
As shown in fig. 1, a schematic flow chart of the robot motion guiding method based on foot motion in an embodiment, the method performs the following steps.
step S11: an image is acquired of the foot to be detected.
Alternatively, an RGB color image of the foot to be detected is acquired and recorded.
Alternatively, the capturing may be performed by any capturing device that may capture RGB color images of the foot to be detected, which is not limited in the present invention.
Preferably, the acquisition device adopts one or more of a monocular RGB camera, a binocular RGB camera and a depth camera.
Optionally, the image is a still image or a moving image.
Optionally, an image is acquired and recorded of the complete foot to be detected. I.e. it is not possible to use an image in which only part of the foot to be detected is recorded.
Step S12: based on a target detection algorithm, foot position information for locating the foot position to be detected in the image is obtained.
Optionally, calculating the image of the foot to be detected by using a target detection algorithm to obtain foot position information for positioning the position of the foot to be detected in the image.
Optionally, the target detection algorithm is a target position detection algorithm. Specifically, the position of the foot to be detected is indicated by a target detection frame (bounding box). The target detection frame refers to the parameters of a rectangular frame that encloses the foot to be detected in the image, the rectangular frame comprising an upper frame, a lower frame and two side frames.
Optionally, the foot position information includes: position information of a target detection frame corresponding to the foot to be detected.
Optionally, the network used by the target detection algorithm includes one or more of RCNN, SPP-Net, Fast-RCNN and Faster-RCNN.
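As an illustrative sketch only (the patent publishes no code), this step could be realized with any of the named detector families. The snippet below assumes a torchvision Faster-RCNN fine-tuned to a single "foot" class; the checkpoint name foot_detector.pth and the two-class setup are assumptions, not part of the original disclosure.

```python
import torch
import torchvision

# Hypothetical foot detector: Faster-RCNN with 2 classes (background + foot).
# The checkpoint "foot_detector.pth" is a placeholder, not from the patent.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("foot_detector.pth"))
model.eval()

def detect_foot(image_tensor):
    """Return the highest-scoring target detection frame (x1, y1, x2, y2), or None."""
    with torch.no_grad():
        pred = model([image_tensor])[0]      # torchvision detection output dict
    if len(pred["boxes"]) == 0:
        return None
    best = pred["scores"].argmax()
    return pred["boxes"][best].tolist()      # upper/lower frame and two side frames
```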
Step S13: and inputting the foot position information into a camera model to output the position information of the foot to be detected under a world coordinate system.
Optionally, the distance between the camera used by the camera model and the foot is kept within 2 meters; if the camera is mounted perpendicular to the horizontal plane, the camera model can be applied directly.
Optionally, the position information of the lower frame of the target detection frame corresponding to the foot to be detected is input into the camera model, which outputs the position of the foot in the world coordinate system. Since the foot to be detected rests on the horizontal plane, the lower frame corresponds to the plane on which the foot stands.
Optionally, the camera model includes: a linear model or a nonlinear model.
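For concreteness, a minimal sketch of the linear (pinhole) variant follows, under the assumptions stated above: the camera is mounted perpendicular to the horizontal plane and the foot is within about 2 meters. The intrinsic matrix K and camera height H are illustrative placeholders, not values from the patent.

```python
import numpy as np

# Assumed intrinsics (fx, fy, cx, cy) and camera height; placeholders only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
H = 1.2  # camera height above the ground plane, in meters (assumed)

def pixel_to_world(u, v):
    """Back-project an image point on the box's lower frame onto the ground plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized viewing ray
    scale = H / ray[2]                               # the foot lies on the ground plane
    return scale * ray[0], scale * ray[1]            # (X, Y) in world coordinates
```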
Step S14: and obtaining a detection frame for determining the same foot to be detected by taking the position information under the world coordinate system as a constraint condition.
Optionally, on the condition that the foot position information in the world coordinate system remains unchanged, the position information of the detection frame identifying the same foot to be detected is obtained, ensuring that it is the same foot that is moving.
When the forefoot or heel of the foot to be detected leaves the ground, for instance when tapping the horizontal plane or standing on tiptoe, the lower frame of the target detection frame does not move, so the position of the foot in the world coordinate system does not change; the size and center point of the detection frame, however, do change. Therefore, to restrict the recognized lifting and lowering movements to the same foot, the motion trajectory of the foot to be detected is generated under the condition that the world-coordinate position obtained from the lower frame of the target detection frame remains unchanged.
Optionally, the motion trajectory is the trajectory of the center of the foot to be detected (tracked through the center position of the detection frame), obtained while ensuring that the same foot does not leave the plane (by keeping its foot position information in the world coordinate system unchanged), so that the trajectory corresponds to the motion of the foot.
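The constraint can be expressed compactly as a track-association test. The sketch below reuses pixel_to_world() from the camera-model sketch above; the 5 cm tolerance is an assumed value, since the patent specifies none.

```python
def lower_frame_world(box):
    """box = (x1, y1, x2, y2); back-project the midpoint of the lower frame."""
    return pixel_to_world((box[0] + box[2]) / 2.0, box[3])

def same_foot(box_a, box_b, tol=0.05):
    """Attribute two detection frames to the same foot only if the ground-plane
    position recovered from their lower frames stays (near-)constant."""
    (xa, ya), (xb, yb) = lower_frame_world(box_a), lower_frame_world(box_b)
    return abs(xa - xb) < tol and abs(ya - yb) < tol
```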
Step S15: when the detection frame movement is detected to exceed the threshold range, a plurality of detection frame position information is started to be recorded continuously.
Optionally, when detecting that the motion displacement value of the center position of the detection frame exceeds a preset threshold, starting to record a plurality of pieces of center position information of the detection frame continuously from the current frame; wherein each frame corresponds to the central position information of one detection frame.
Optionally, the motion displacement includes: horizontal displacement or vertical displacement.
Optionally, the threshold is set according to the specific situation, and is not limited in the present invention.
Optionally, the recorded detection-frame center positions correspond to a number of different consecutive frames, and the number of frames is related to the frame rate used by the target detection algorithm.
For example, if the guiding action is defined as standing on tiptoe more than 4 times, the whole gesture takes about 2000 ms; at a target-detection frame rate of 10 FPS, this yields 20 frames of images. Accordingly, when the displacement of the detection frame's center position is detected to exceed the preset threshold, 20 frames of detection-frame center-position information are stored continuously, starting from the current frame.
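A sketch of this trigger-and-record logic, using the worked numbers above (2000 ms at 10 FPS = 20 frames); the pixel threshold is an assumed value, not from the patent.

```python
DISPLACEMENT_THRESHOLD = 8.0      # pixels; illustrative, not from the patent
NUM_FRAMES = int(2.0 * 10)        # 2000 ms at 10 FPS = 20 frames

def box_center(box):
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

recorded, recording = [], False

def on_new_frame(prev_box, new_box):
    """Start buffering center positions once motion exceeds the threshold,
    one center per frame, stopping after NUM_FRAMES entries."""
    global recording
    (px, py), (nx, ny) = box_center(prev_box), box_center(new_box)
    if not recording and max(abs(nx - px), abs(ny - py)) > DISPLACEMENT_THRESHOLD:
        recording = True          # trigger from the current frame
    if recording and len(recorded) < NUM_FRAMES:
        recorded.append((nx, ny))
```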
Step S16: and obtaining the motion trail of the foot to be detected according to the continuously recorded position information of the plurality of detection frames.
Optionally, calculating and obtaining the motion trail of the foot to be detected in the time range of the multiframe according to the continuously recorded central position information of the plurality of detection frames corresponding to the multiframe.
Optionally, the motion of the foot to be detected is corresponding to the motion track.
Optionally, the motion of the foot to be detected includes lifting or lowering one end (the forefoot or the heel) one or more times while the other end stays on the horizontal plane (standing on tiptoe or tapping the plane). A lifting motion corresponds to a trajectory whose detection-frame center point shows positive vertical displacement; a lowering motion corresponds to a trajectory showing negative vertical displacement.
The vertical direction referred to here is the positive y-axis direction of the two-dimensional coordinate system in which the detection frame lies; under a different convention the directions, and the definitions of positive and negative values, would also differ. The invention is not limited in this respect.
Optionally, when the foot to be detected begins a tiptoe action, the corresponding trajectory shows a positive horizontal displacement accompanying the change in the vertical displacement of the detection-frame center;
when the foot to be detected taps the plane (a tapping motion meaning the heel stays fixed on the horizontal plane while the forefoot moves), the corresponding trajectory shows a negative horizontal displacement accompanying the change in the vertical displacement of the detection-frame center.
It should be noted that the vertical direction here is the positive y-axis direction of the two-dimensional coordinate system in which the detection frame lies, and the horizontal direction is the positive x-axis direction; under different conventions the directions and the definitions of positive and negative values differ, and the invention is not limited in this respect.
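A toy decoder for these sign conventions (y-axis positive upward, x-axis positive to the right in the detection frame's two-dimensional coordinate system; with a different standard the signs would flip, as just noted):

```python
def decode_trajectory(centers):
    """Map a recorded sequence of detection-frame centers to a coarse foot action."""
    dx = centers[-1][0] - centers[0][0]   # net horizontal displacement
    dy = centers[-1][1] - centers[0][1]   # net vertical displacement
    if abs(dy) >= abs(dx):                # mostly vertical: lift or lower
        return "lift" if dy > 0 else "lower"
    return "tiptoe" if dx > 0 else "tap"  # mostly horizontal per the conventions
```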
Step S17: and inputting the motion trail into a classifier to obtain a motion instruction of the guiding robot.
Optionally, the classifier is trained on motion trajectories, taking a motion trajectory as input and a guiding motion instruction as output. A motion trajectory is either a guiding motion trajectory or a non-guiding motion trajectory.
It should be noted that the guiding motion trajectories are set in advance according to requirements and are not limited by the invention. An input trajectory that does not match any preset guiding motion trajectory is treated directly as a non-guiding motion trajectory.
Optionally, the instruction states either that the robot needs to be guided or that it does not. Specifically, if a guiding motion trajectory is input to the classifier, a motion instruction that the robot needs to be guided is output; if a non-guiding motion trajectory is input, a motion instruction that the robot does not need to be guided is output.
Optionally, the classifier is combined with a hidden Markov model when training on the motion trajectories, which handles the time-series data better and achieves a better result.
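As one possible realization (the patent names only "a classifier combined with a hidden Markov model", with no code), the sketch below scores a trajectory against one GaussianHMM per class using the hmmlearn library; the two-class split and all hyperparameters are assumptions.

```python
import numpy as np
from hmmlearn import hmm   # third-party HMM library; its use here is an assumption

# One HMM per trajectory class; hyperparameters are illustrative.
models = {label: hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
          for label in ("guiding", "non_guiding")}

def train(label, trajectories):
    """trajectories: list of (x, y) center sequences belonging to one class."""
    X = np.vstack(trajectories)                  # stacked observations
    lengths = [len(t) for t in trajectories]     # per-sequence lengths
    models[label].fit(X, lengths)

def classify(trajectory):
    """Return the class whose HMM assigns the higher log-likelihood."""
    X = np.asarray(trajectory)
    return max(models, key=lambda lbl: models[lbl].score(X))
```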
Step S18: and feeding back a motion instruction of the guiding robot to be guided to the robot so as to guide the robot to move to the foot position to be detected.
Optionally, a motion instruction to be guided, which is output by the classifier, is fed back to the robot, and the robot is guided to move to the position of the foot to be detected.
Optionally, the motion instruction to be guided, which is output by the classifier, is fed back to the motion module of the robot, so that the robot is guided to move to the position of the foot to be detected, and the robot can be guided to move to any designated position.
Similar to the principles of the embodiments described above, the present invention provides a robotic motion guiding system based on foot motion.
Specific embodiments are provided below with reference to the accompanying drawings:
a schematic structural diagram of a robot motion guidance system based on foot motion according to an embodiment of the present invention is shown in fig. 2.
The system comprises:
an acquisition module 21 for acquiring and recording an image of the foot to be detected;
the target detection module 22 is connected with the acquisition module 21 and is used for acquiring foot position information for positioning the foot position to be detected in the image based on a target detection algorithm;
a world coordinate position module 23 connected to the target detection module 22, for inputting the foot position information into a camera model to output the position information of the foot to be detected in the world coordinate system;
the constraint module 24 is connected with the world coordinate position module 23 and is used for obtaining a detection frame for determining the same foot to be detected by taking the position information under the world coordinate system as constraint conditions;
a detection frame position recording module 25 connected to the constraint module 24, for starting to continuously record a plurality of detection frame position information when detecting that the detection frame movement exceeds a threshold range;
the motion trail module 26 is connected with the detection frame position recording module 25 and is used for obtaining the motion trail of the foot to be detected according to the continuously recorded position information of the plurality of detection frames;
the motion instruction generating module 27 is connected with the motion trail module 26 and is used for inputting the motion trail into a classifier to obtain a motion instruction of the guiding robot;
and the guiding movement module 28 is connected with the movement instruction generation module 27 and is used for feeding back a movement instruction of the guiding robot to be guided to the robot so as to guide the robot to move to the position of the foot to be detected.
Optionally, the acquisition module 21 acquires and records RGB color images of the foot to be detected.
Optionally, the acquisition module 21 may use any acquisition device capable of acquiring and recording RGB color images of the foot to be detected; the invention is not limited in this respect.
Preferably, the acquisition device adopts one or more of a monocular RGB camera, a binocular RGB camera and a depth camera.
Optionally, the image is a still image or a moving image.
Optionally, the acquisition module 21 acquires an image recording the complete foot to be detected; that is, an image recording only part of the foot cannot be used.
Optionally, the target detection module 22 calculates an image of the foot to be detected by using a target detection algorithm, and obtains foot position information for locating the position of the foot to be detected in the image.
Optionally, the target detection algorithm is a target position detection algorithm. Specifically, the position of the foot to be detected is indicated by a target detection frame (bounding box). The target detection frame refers to the parameters of a rectangular frame that encloses the foot to be detected in the image, the rectangular frame comprising an upper frame, a lower frame and two side frames.
Optionally, the foot position information includes: position information of a target detection frame corresponding to the foot to be detected.
Optionally, the network used by the target detection algorithm includes one or more of RCNN, SPP-Net, Fast-RCNN and Faster-RCNN.
Optionally, the distance between the camera used by the camera model and the foot is kept within 2 meters; if the camera is mounted perpendicular to the horizontal plane, the camera model can be applied directly.
Optionally, the world coordinate position module 23 inputs the position information of the lower frame of the target detection frame corresponding to the foot to be detected into a camera model, which outputs the position of the foot in the world coordinate system. Since the foot to be detected rests on the horizontal plane, the lower frame corresponds to the plane on which the foot stands.
Optionally, the camera model includes: a linear model or a nonlinear model.
Optionally, the constraint module 24 obtains the position information of the detection frame identifying the same foot to be detected while keeping the foot position information in the world coordinate system unchanged, ensuring that it is the same foot that is moving.
When the forefoot or heel of the foot to be detected leaves the ground, for instance when tapping the horizontal plane or standing on tiptoe, the lower frame of the target detection frame does not move, so the position of the foot in the world coordinate system does not change; the size and center point of the detection frame, however, do change. Therefore, to restrict the recognized lifting and lowering movements to the same foot, the motion trajectory of the foot to be detected is generated under the condition that the world-coordinate position obtained from the lower frame of the target detection frame remains unchanged.
Optionally, the motion trajectory is the trajectory of the center of the foot to be detected (tracked through the center position of the detection frame), obtained while ensuring that the same foot does not leave the plane (by keeping its foot position information in the world coordinate system unchanged), so that the trajectory corresponds to the motion of the foot.
Optionally, when the detection frame position recording module 25 detects that the displacement of the detection frame's center position exceeds a preset threshold, it continuously records the center-position information of a plurality of detection frames starting from the current frame, each frame corresponding to the center position of one detection frame.
Optionally, the motion displacement includes: horizontal displacement or vertical displacement.
Optionally, the threshold is set according to the specific situation, and is not limited in the present invention.
Optionally, the recorded detection-frame center positions correspond to a number of different consecutive frames, and the number of frames is related to the frame rate used by the target detection algorithm.
Optionally, the motion track module 26 calculates and obtains the motion track of the foot to be detected within the time range of the multiple frames according to the continuously recorded central position information of the multiple detection frames corresponding to the multiple frames.
Optionally, the motion trajectory module 26 maps the motion trajectory to the corresponding motion of the foot to be detected.
Optionally, the motion of the foot to be detected includes: with one end of the forefoot or heel lying in a horizontal plane and the other end raised or lowered one or more times. The motion track corresponding to the foot lifting motion to be detected is a track with positive displacement in the vertical direction of the center point of the detection frame, and the motion track corresponding to the foot lowering motion to be detected is a track with negative displacement in the vertical direction of the center point of the detection frame.
The vertical direction referred to here is the positive y-axis direction of the two-dimensional coordinate system in which the detection frame lies; under a different convention the directions, and the definitions of positive and negative values, would also differ. The invention is not limited in this respect.
Optionally, when the foot to be detected begins a tiptoe action, the corresponding trajectory shows a positive horizontal displacement accompanying the change in the vertical displacement of the detection-frame center;
when the foot to be detected taps the plane (a tapping motion meaning the heel stays fixed on the horizontal plane while the forefoot moves), the corresponding trajectory shows a negative horizontal displacement accompanying the change in the vertical displacement of the detection-frame center.
It should be noted that the vertical direction here is the positive y-axis direction of the two-dimensional coordinate system in which the detection frame lies, and the horizontal direction is the positive x-axis direction; under different conventions the directions and the definitions of positive and negative values differ, and the invention is not limited in this respect.
Optionally, the classifier is trained on motion trajectories, taking a motion trajectory as input and a guiding motion instruction as output. A motion trajectory is either a guiding motion trajectory or a non-guiding motion trajectory.
It should be noted that the guiding motion trajectories are set in advance according to requirements and are not limited by the invention. An input trajectory that does not match any preset guiding motion trajectory is treated directly as a non-guiding motion trajectory.
Optionally, the guiding motion instruction is either an instruction that the robot needs to be guided or an instruction that it does not. Specifically, if a guiding motion trajectory is input to the classifier, a motion instruction that the robot needs to be guided is output; if a non-guiding motion trajectory is input, a motion instruction that the robot does not need to be guided is output.
Optionally, the classifier is combined with a hidden Markov model when training on the motion trajectories, which handles the time-series data better and achieves a better result.
Optionally, the guiding movement module 28 feeds back a movement instruction to be guided, which is output by the classifier, to the robot, and guides the robot to move to the position of the foot to be detected.
Optionally, the guiding movement module 28 feeds back the movement instruction to be guided, which is output by the classifier, to the movement module of the robot, and guides the robot to move to the position of the foot to be detected, so that the robot can be guided to move to any designated position.
As shown in fig. 3, a schematic structural view of a robot motion guide terminal 30 based on foot motion in the embodiment of the present invention is shown.
The robot motion guide terminal 30 based on foot motion includes:
the memory 31 is for storing a computer program; the processor 32 runs a computer program to implement the robot motion guidance method based on foot motions as described in fig. 1.
Optionally, there may be one or more memories 31 and one or more processors 32; fig. 3 takes one of each as an example.
Optionally, the terminal may interact with any external device, such as a mobile terminal or a control terminal of the robot; the invention is not limited in this respect.
Optionally, the processor 32 in the robot motion guiding terminal 30 loads one or more instructions corresponding to the application program's process into the memory 31 in accordance with the steps shown in fig. 1, and then executes the application program stored in the memory 31, thereby implementing the functions of the robot motion guiding method based on foot motion shown in fig. 1.
Optionally, the memory 31 may include, but is not limited to, high-speed random access memory and nonvolatile memory, for example one or more disk storage devices, flash memory devices or other nonvolatile solid-state storage devices.
Optionally, the processor 32 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The present invention also provides a computer-readable storage medium storing a computer program which, when run, implements the robot motion guiding method based on foot motion shown in fig. 1. The computer-readable storage medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The computer-readable storage medium may be an article of manufacture not yet connected to a computer device, or a component used by a computer device to which it is connected.
In summary, the robot motion guiding method, system and terminal based on foot motion of the present invention solve the prior-art problems that a user who wants to guide a robot to a designated position must operate a remote control device, wasting considerable time and effort, and that the remote control device requires maintenance and is prone to failure, preventing the guiding work and degrading the user experience. The invention thus effectively overcomes various shortcomings of the prior art and has high industrial value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations that a person of ordinary skill in the art can accomplish without departing from the spirit and technical ideas disclosed by the invention shall still be covered by the claims of the invention.

Claims (8)

1. A method of guiding movement of a robot based on foot movement, the method comprising:
acquiring and recording an image of the foot to be detected;
acquiring foot position information for locating the foot position to be detected in the image based on a target detection algorithm;
inputting the foot position information into a camera model to output the position information of the foot to be detected under a world coordinate system; the foot position information is input into a camera model to output the position information of the foot to be detected under a world coordinate system: inputting the position information of the lower frame of the target detection frame corresponding to the foot to be detected into a camera model to output the position information of the foot to be detected under a world coordinate system; the lower frame corresponds to the horizontal plane where the foot to be detected is located;
the position information under the world coordinate system is used as a constraint condition to obtain a detection frame for determining the same foot to be detected; the method comprises the steps of obtaining a detection frame for determining the same foot to be detected by taking position information under the world coordinate system as constraint conditions: under the condition of keeping the foot position information under the world coordinate system unchanged, obtaining the position information of a detection frame for determining the same foot to be detected;
when the detection frame movement is detected to exceed a threshold range, starting to continuously record a plurality of detection frame position information;
obtaining the motion trail of the foot to be detected according to the continuously recorded position information of the plurality of detection frames;
inputting the motion trail into a classifier to obtain a motion instruction of the guiding robot;
and feeding back a motion instruction of the guiding robot to be guided to the robot so as to guide the robot to move to the foot position to be detected.
2. The robot motion guiding method based on foot motion according to claim 1, wherein the foot position information includes: position information of a target detection frame corresponding to the foot to be detected.
3. The robot motion guiding method based on foot motion according to claim 1, wherein when the detection frame motion is detected to exceed a threshold range, the continuous recording of a plurality of detection frame position information is started:
when detecting that the motion displacement value of the center position of the detection frame exceeds a preset threshold value, starting to continuously record the center position information of a plurality of detection frames from the current frame; wherein each frame corresponds to the central position information of one detection frame.
4. The robot motion guiding method based on foot motion according to claim 1, wherein the motion trajectory of the foot to be detected is obtained from a plurality of pieces of detection frame position information recorded in succession:
and calculating and obtaining the motion trail of the foot to be detected in the time range of the multi-frame according to the continuously recorded central position information of the plurality of detection frames corresponding to the multi-frame.
5. The robot motion guiding method based on foot motions of claim 1, wherein the directing the robot motion instructions comprises: the robot motion instructions need to be directed or need not be directed.
6. The robot motion guiding method based on the foot motion according to claim 3, wherein the number of the detection frame center position information is related to a frame rate used in the target detection algorithm.
7. A robot motion guidance system based on foot motion, comprising:
the acquisition module is used for acquiring and recording the images of the feet to be detected;
the target detection module is connected with the acquisition module and used for acquiring foot position information for positioning the foot position to be detected in the image based on a target detection algorithm; the foot position information is input into a camera model to output the position information of the foot to be detected under a world coordinate system: inputting the position information of the lower frame of the target detection frame corresponding to the foot to be detected into a camera model to output the position information of the foot to be detected under a world coordinate system; the lower frame corresponds to the horizontal plane where the foot to be detected is located;
the world coordinate position module is connected with the target detection module and is used for inputting the foot position information into a camera model so as to output the position information of the foot to be detected under a world coordinate system; the method comprises the steps of obtaining a detection frame for determining the same foot to be detected by taking position information under the world coordinate system as constraint conditions: under the condition of keeping the foot position information under the world coordinate system unchanged, obtaining the position information of a detection frame for determining the same foot to be detected;
the constraint module is connected with the world coordinate position module and is used for obtaining a detection frame for determining the same foot to be detected by taking the position information under the world coordinate system as constraint conditions;
the detection frame position recording module is connected with the constraint module and is used for starting to continuously record a plurality of detection frame position information when detecting that the detection frame moves beyond a threshold range;
the motion trail module is connected with the detection frame position recording module and is used for obtaining the motion trail of the foot to be detected according to the continuously recorded position information of the plurality of detection frames;
the motion instruction generation module is connected with the motion trail module and is used for inputting the motion trail into a classifier to obtain a motion instruction of the guiding robot;
and the guiding movement module is connected with the movement instruction generation module and used for feeding back a movement instruction of the guiding robot to be guided to the robot so as to guide the robot to move to the position of the foot to be detected.
8. A robot motion guide terminal based on foot motion, comprising:
a memory for storing a computer program;
a processor for executing the computer program to perform the robot motion guidance method based on foot motions as claimed in any one of claims 1 to 6.
CN202010600315.5A 2020-06-28 2020-06-28 Robot motion guiding method, system and terminal based on foot motion Active CN111736607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010600315.5A CN111736607B (en) 2020-06-28 2020-06-28 Robot motion guiding method, system and terminal based on foot motion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010600315.5A CN111736607B (en) 2020-06-28 2020-06-28 Robot motion guiding method, system and terminal based on foot motion

Publications (2)

Publication Number Publication Date
CN111736607A (en) 2020-10-02
CN111736607B (en) 2023-08-11

Family

ID=72651531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010600315.5A Active CN111736607B (en) 2020-06-28 2020-06-28 Robot motion guiding method, system and terminal based on foot motion

Country Status (1)

Country Link
CN (1) CN111736607B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232279B (en) * 2020-11-04 2023-09-05 杭州海康威视数字技术股份有限公司 Personnel interval detection method and device
CN112379781B (en) * 2020-12-10 2023-02-28 深圳华芯信息技术股份有限公司 Man-machine interaction method, system and terminal based on foot information identification

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105223957A (en) * 2015-09-24 2016-01-06 北京零零无限科技有限公司 A kind of method and apparatus of gesture manipulation unmanned plane
CN105224912A (en) * 2015-08-31 2016-01-06 电子科技大学 Based on the video pedestrian detection and tracking method of movable information and Track association
CN108229318A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 The training method and device of gesture identification and gesture identification network, equipment, medium
CN109343701A (en) * 2018-09-03 2019-02-15 电子科技大学 A kind of intelligent human-machine interaction method based on dynamic hand gesture recognition
CN109732593A (en) * 2018-12-28 2019-05-10 深圳市越疆科技有限公司 A kind of far-end control method of robot, device and terminal device
CN111241940A (en) * 2019-12-31 2020-06-05 浙江大学 Remote control method of robot and human body boundary frame determination method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190143517A1 (en) * 2017-11-14 2019-05-16 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224912A (en) * 2015-08-31 2016-01-06 电子科技大学 Based on the video pedestrian detection and tracking method of movable information and Track association
CN105223957A (en) * 2015-09-24 2016-01-06 北京零零无限科技有限公司 A kind of method and apparatus of gesture manipulation unmanned plane
CN108229318A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 The training method and device of gesture identification and gesture identification network, equipment, medium
CN109343701A (en) * 2018-09-03 2019-02-15 电子科技大学 A kind of intelligent human-machine interaction method based on dynamic hand gesture recognition
CN109732593A (en) * 2018-12-28 2019-05-10 深圳市越疆科技有限公司 A kind of far-end control method of robot, device and terminal device
CN111241940A (en) * 2019-12-31 2020-06-05 浙江大学 Remote control method of robot and human body boundary frame determination method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the application of Kinect-based gesture recognition in human-computer interaction; 蒋涵妮; China Master's Theses Full-text Database, Information Science and Technology; pp. 32-33, 44 *

Also Published As

Publication number Publication date
CN111736607A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
US11360571B2 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
CN111736607B (en) Robot motion guiding method, system and terminal based on foot motion
WO2021139484A1 (en) Target tracking method and apparatus, electronic device, and storage medium
US8265425B2 (en) Rectangular table detection using hybrid RGB and depth camera sensors
TWI684136B (en) Robot, control system and method for operating the robot
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
KR101706365B1 (en) Image segmentation method and image segmentation device
JP6694039B1 (en) Fish size calculator
EP2957206B1 (en) Robot cleaner and method for controlling the same
KR101279561B1 (en) A fast and accurate face detection and tracking method by using depth information
JP2008023630A (en) Arm-guiding moving body and method for guiding arm
CA3183341A1 (en) Autonomous livestock monitoring
KR101371038B1 (en) Mobile robot and method for tracking target of the same
CN114756020A (en) Method, system and computer readable recording medium for generating robot map
US20230020725A1 (en) Information processing apparatus, information processing method, and program
CN112379781A (en) Man-machine interaction method, system and terminal based on foot information identification
JP2005196359A (en) Moving object detection apparatus, moving object detection method and moving object detection program
JP6265370B2 (en) Object tracking method and object tracking system
Pal et al. A novel end-to-end vision-based architecture for agricultural human–robot collaboration in fruit picking operations
Chen et al. Multiple-object tracking based on monocular camera and 3-D lidar fusion for autonomous vehicles
WO2015129152A1 (en) Image recognition system and semiconductor integrated circuit
Stock et al. Subpixel corner detection for tracking applications using cmos camera technology
US20230237835A1 (en) Object tracking method and object tracking device
CN111813131B (en) Guide point marking method and device for visual navigation and computer equipment
CN115106644A (en) Self-adaptive judgment method for welding seam starting point, welding method, welding equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant