CN112379781A - Man-machine interaction method, system and terminal based on foot information identification


Info

Publication number
CN112379781A
Authority
CN
China
Prior art keywords
foot
information
human
identification
computer interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011457598.9A
Other languages
Chinese (zh)
Other versions
CN112379781B (en)
Inventor
韩磊
凌璠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huaxin Information Technology Co Ltd
Original Assignee
Shenzhen Huaxin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huaxin Information Technology Co Ltd filed Critical Shenzhen Huaxin Information Technology Co Ltd
Priority to CN202011457598.9A priority Critical patent/CN112379781B/en
Publication of CN112379781A publication Critical patent/CN112379781A/en
Application granted granted Critical
Publication of CN112379781B publication Critical patent/CN112379781B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)

Abstract

The human-computer interaction method, system and terminal based on foot information identification solve two problems of the prior art. First, a user who wants to direct a robot to perform different kinds of work must control it through a remote-control device or face recognition; the remote-control device requires maintenance and is prone to failure, so the guided work cannot proceed and the user experience is poor. Second, robots with special viewing angles cannot use face recognition at all, and face recognition is demanding while tolerating only a small range of variation, so recognition accuracy is low and the amplitude of recognizable actions is small, which greatly reduces the efficiency of human-computer interaction. The invention provides a human-computer interaction method based on foot information identification which, by identifying the foot posture in a captured image, realizes direct interaction between the user and the machine, omits intermediate steps, better matches user habits, reduces hardware cost and improves efficiency.

Description

Man-machine interaction method, system and terminal based on foot information identification
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a human-computer interaction method, a human-computer interaction system and a human-computer interaction terminal based on foot information identification.
Background
With the improvement in quality of life, robots have come into wide use, but most robots move under the remote control of a hand-held controller. If the user wants to direct the robot to perform different kinds of work, every command must pass through the remote-control device, which wastes a great deal of time and energy; moreover, the device requires maintenance and is prone to failure, interrupting the guided work, so the user experience is poor.
Today face recognition is widely used to control the operation of robots, but this method is not applicable to robots with special viewing angles. For example, a floor-sweeping robot sits so low that the user must bend down to press a switch on it or resort to a remote controller, and it cannot be controlled by face recognition. Face recognition is also demanding and tolerates only a small range of variation, so recognition accuracy is low and the amplitude of recognizable actions is small, which greatly reduces the efficiency of human-computer interaction.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a human-computer interaction method, system and terminal based on foot information recognition, to solve the following problems of the prior art: a user who wants to direct a robot to perform different kinds of work must control it through a remote-control device or face recognition; the remote-control device requires maintenance and is prone to failure, so the guided work cannot proceed and the user experience is poor; robots with special viewing angles cannot use face recognition at all; and face recognition is demanding while tolerating only a small range of variation, so recognition accuracy is low, the amplitude of recognizable actions is small, and the efficiency of human-computer interaction is greatly reduced.
In order to achieve the above and other related objects, the present invention provides a human-computer interaction method based on foot information recognition, including: acquiring a foot state image in real time; the foot state image records two complete feet to be identified; identifying the foot state image to obtain foot state identification information; obtaining an interactive response signal corresponding to the foot state identification information according to the foot state identification information; and feeding back the interactive response signal to the robot so as to enable the robot to make a response action corresponding to the interactive response signal.
In an embodiment of the present invention, the state identification information includes: static identification information and/or dynamic identification information.
In an embodiment of the present invention, the static identification information includes: static pose information and/or static position information.
In an embodiment of the present invention, the dynamic identification information includes: dynamic attitude information and/or dynamic trajectory position information.
In an embodiment of the present invention, the identification manner includes one or more of target detection and identification, image classification, video classification and pose estimation.
In an embodiment of the present invention, the response action includes: one or more of stop, start, pause, cancel, charge, test, motion direction control, motion speed control, motion type control, and motion time control actions.
In an embodiment of the present invention, the foot status image includes: an RGB image or a depth image.
In an embodiment of the present invention, a response manner of the response action includes: one or more of sound, vibration, motion, and display mode; wherein, the display mode includes: one or more of image display, light display and character display modes.
In order to achieve the above and other related objects, the present invention provides a human-computer interaction system based on foot information recognition, comprising: the image acquisition module, used for acquiring foot state images in real time, the foot state image recording two complete feet to be identified; the identification module, connected with the image acquisition module and used for identifying the foot state image to obtain foot state identification information, wherein the state identification information includes static identification information and/or dynamic identification information; the interactive response signal generating module, connected with the identification module and used for acquiring an interactive response signal corresponding to the foot state identification information according to the foot state identification information; and the interactive action response module, connected with the interactive response signal generating module and used for feeding back the interactive response signal to the robot so as to enable the robot to make a response action corresponding to the interactive response signal.
In order to achieve the above and other related objects, the present invention provides a human-computer interaction terminal based on foot information identification, comprising: a memory for storing a computer program; and the processor is used for executing the man-machine interaction method based on the foot information identification.
As described above, the human-computer interaction method, system and terminal based on foot information identification of the present invention have the following beneficial effects: by identifying the foot posture in a captured image, the method realizes direct interaction between the user and the machine, omits intermediate steps, better matches user habits, reduces hardware cost and improves efficiency. The method can also provide a higher acquisition frame rate, which supports the subsequent algorithms, guarantees the accuracy of the image-processing module's algorithm, and is efficient enough to meet the real-time requirements of human-computer interaction.
Drawings
Fig. 1 is a flowchart illustrating a human-computer interaction method based on foot information recognition according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating a human-computer interaction method based on foot information recognition according to an embodiment of the invention.
Fig. 3 is a schematic structural diagram of a human-computer interaction system based on foot information recognition according to an embodiment of the invention.
Fig. 4 is a schematic structural diagram of a human-computer interaction terminal based on foot information identification according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the present invention. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present invention. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present invention is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Spatially relative terms, such as "upper", "lower", "left", "right", "below", "above", and the like, may be used herein to facilitate describing one element or feature's relationship to another element or feature as illustrated in the figures.
Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only a case of being "directly connected" but also a case of being "indirectly connected" with another element interposed therebetween. In addition, when a certain part is referred to as "including" a certain component, unless otherwise stated, other components are not excluded, but it means that other components may be included.
The terms first, second, third, etc. are used herein to describe various elements, components, regions, layers and/or sections, but are not limited thereto. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the scope of the present invention.
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition will occur only when a combination of elements, functions or operations is inherently mutually exclusive in some way.
The embodiment of the invention provides a human-computer interaction method based on foot information identification, which solves the problems of the prior art described above: a user who wants to direct a robot to perform different kinds of work must control it through a remote-control device or face recognition; the remote-control device requires maintenance and is prone to failure, so the guided work cannot proceed and the user experience is poor; robots with special viewing angles cannot use face recognition, which is demanding while tolerating only a small range of variation, so recognition accuracy is low, the amplitude of recognizable actions is small, and the efficiency of human-computer interaction is greatly reduced. By identifying the foot posture in a captured image, the method realizes direct interaction between the user and the machine, omits intermediate steps, better matches user habits, reduces hardware cost and improves efficiency.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings so that those skilled in the art can easily implement the embodiments of the present invention. The present invention may be embodied in many different forms and is not limited to the embodiments described herein.
Fig. 1 is a schematic flow chart showing a human-computer interaction method based on foot information recognition according to an embodiment of the present invention.
The method comprises the following steps:
step S11: acquiring a foot state image in real time; the foot state image records two complete feet to be identified.
Optionally, a dynamic foot state image, or a continuous sequence of foot state images over a period of time, is acquired in real time.
Optionally, the foot status image includes: an RGB image or a depth image.
Optionally, the foot status image includes: the user can completely take two foot images in a straddling state.
Step S12: and identifying the foot state image to obtain foot state identification information.
Optionally, the identification manner includes one or more of target detection and identification, image classification, video classification and pose estimation.
Wherein, the target detection and identification uses one or more of a Histogram of Oriented Gradients (HOG), an image pyramid and a sliding window; these are all common techniques, so the specific identification process is not described here. The image classification mode identifies the image by feeding extracted image features to a classifier or classification model. Video classification is behavior classification: the computer processes and analyzes the image sequence and identifies the categories and positions of the targets it contains, thereby recognizing targets and objects in their different modes. The pose estimation mode represents the structure and shape of an object with a geometric model or structure, establishes a correspondence between the model and the image by extracting certain object features, and then estimates the spatial pose of the object by a geometric or other method.
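By way of illustration only, the following is a minimal sketch of the sliding-window detection mode named above, combining HOG features with an image pyramid. The window size, pyramid scale, step and the pre-trained binary classifier foot_clf are assumptions made for the example; they are not specified by this application.

```python
# Hedged sketch: HOG features + image pyramid + sliding window.
# `foot_clf` is assumed to be a pre-trained binary classifier (e.g. a
# scikit-learn LinearSVC) mapping a HOG vector to 1 (foot) or 0 (no foot).
import numpy as np
from skimage.feature import hog
from skimage.transform import pyramid_gaussian

def detect_feet(gray_image, foot_clf, window=(64, 64), step=16, downscale=1.5):
    """Scan an image pyramid with a sliding window and keep the windows
    whose HOG descriptor the classifier labels as a foot."""
    detections = []
    for level, scaled in enumerate(pyramid_gaussian(gray_image, downscale=downscale)):
        if scaled.shape[0] < window[0] or scaled.shape[1] < window[1]:
            break  # this pyramid level is smaller than one window
        scale = downscale ** level
        for y in range(0, scaled.shape[0] - window[0] + 1, step):
            for x in range(0, scaled.shape[1] - window[1] + 1, step):
                patch = scaled[y:y + window[0], x:x + window[1]]
                feat = hog(patch, orientations=9,
                           pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                if foot_clf.predict(feat.reshape(1, -1))[0] == 1:
                    # Map the window back to original-image coordinates.
                    detections.append((int(x * scale), int(y * scale),
                                       int(window[1] * scale), int(window[0] * scale)))
    return detections
```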
Optionally, the state identification information includes: static identification information and/or dynamic identification information. Specifically, the static identification information is identification information obtained while the foot is in a static state (held in a fixed posture); the dynamic identification information is identification information obtained while the foot is in a dynamic state (one or both feet moving).
Optionally, the static identification information includes: static posture information and/or static position information. Specifically, the static posture information is the posture information of the feet in a static state, for example: (a) one foot lifted; (b) the sole facing the camera; (c) the legs apart; (d) the legs crossed while standing; (e) one foot stepping on the other; (f) the two feet at 90 degrees; and other static postures. The static position information is the position (coordinates) of the two feet while in such a static posture; that is, in applicable scenarios where the machine must move, the coordinates of the user's feet relative to the machine are inferred. It should be noted that the static postures referred to here are not limited to the above modes, and this application places no limit on them. In a monocular camera scenario, the simplest approach is to mount the camera horizontally and make the inference using the camera principle and the results of target recognition. In a multi-camera scenario, better solutions may exist.
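As an illustration of the monocular "camera principle" inference just mentioned, the sketch below back-projects the image position of a foot's ground-contact point to floor coordinates, assuming a level, horizontally mounted camera with known intrinsics and mounting height; all names and numeric values are assumptions made for the example.

```python
# Hedged sketch: pinhole back-projection of a ground-contact pixel to floor
# coordinates, for a level camera whose optical axis is parallel to the floor.
def foot_ground_position(u, v, fx, fy, cx, cy, camera_height_m):
    """Return (lateral, forward) floor coordinates in metres of the foot whose
    ground-contact point appears at pixel (u, v). (fx, fy) are focal lengths
    in pixels and (cx, cy) is the principal point."""
    if v <= cy:
        raise ValueError("a ground-contact point must lie below the horizon row")
    forward = camera_height_m * fy / (v - cy)  # distance along the floor
    lateral = forward * (u - cx) / fx          # offset left/right of the axis
    return lateral, forward

# Example: a 640x480 camera with fx = fy = 500 px, mounted 0.10 m above the
# floor (roughly the height of a floor-sweeping robot), foot pixel (400, 300).
print(foot_ground_position(400, 300, 500.0, 500.0, 320.0, 240.0, 0.10))
```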
Optionally, the dynamic identification information includes: dynamic posture information and/or dynamic trajectory position information. Specifically, the dynamic posture information is the posture information of one or both feet in a dynamic state, for example: (a) standing on tiptoe; (b) one foot sliding while the other stays still; (c) both feet jumping off the ground and falling back, and so on. The dynamic trajectory position information is the motion trajectory position (coordinates) of the two feet in a dynamic state; that is, in applicable scenarios where the machine must move, the coordinates of the user's moving feet relative to the machine are inferred. It should be noted that the dynamic postures referred to here are not limited to the above modes, and this application places no limit on them.
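For illustration, dynamic states such as these can be discriminated from a short track of per-frame foot positions; in the minimal sketch below, the thresholds and labels are assumptions made for the example, not values taken from this application.

```python
# Hedged sketch: label a short track of (x, y) foot positions (one per frame).
import numpy as np

def classify_track(track, still_px=5.0, slide_px=40.0):
    """Crude three-way label for a single foot's track in pixel coordinates."""
    track = np.asarray(track, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1))
    displacement = np.linalg.norm(track[-1] - track[0])
    if path_length < still_px:
        return "static"   # foot held in a fixed posture
    if displacement > slide_px:
        return "slide"    # foot moved and ended far from where it started
    return "tap"          # foot moved but returned near its starting point

print(classify_track([(100, 200), (101, 200), (100, 201)]))  # -> static
print(classify_track([(100, 200), (130, 200), (160, 200)]))  # -> slide
```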
Step S13: and obtaining an interactive response signal corresponding to the foot state identification information according to the foot state identification information.
Optionally, the interactive response signals associated with the foot state identification information are pre-stored according to the static identification information and/or dynamic identification information in the foot state identification information. It should be noted that one interactive response signal may be associated with one or more foot state identifiers.
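One possible realization of this pre-stored association is a simple lookup table, sketched below; the gesture identifiers and signal names are assumptions made for the example, and, as noted above, several identifiers may map to the same signal.

```python
# Hedged sketch: a pre-stored association between foot-state identifiers and
# interactive response signals. All identifiers and signal names are assumed.
RESPONSE_TABLE = {
    ("lift_one_foot",): "START",
    ("sole_to_camera",): "STOP",
    ("legs_crossed",): "PAUSE",
    ("tap_other_foot", "feet_at_90_degrees"): "CHARGE",  # several ids, one signal
}

def lookup_response(state_id):
    """Return the interactive response signal for a foot-state identifier,
    or None when the state is not associated with any signal."""
    for ids, signal in RESPONSE_TABLE.items():
        if state_id in ids:
            return signal
    return None

print(lookup_response("legs_crossed"))  # -> PAUSE
```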
Step S14: and feeding back the interactive response signal to the robot so as to enable the robot to make a response action corresponding to the interactive response signal.
Optionally, the interactive response signal corresponding to the foot state identification information is fed back to the robot, and according to this signal the robot performs the response action whose association with the signal has been pre-stored.
It should be noted that the association relationship between interactive response signals and response actions includes: one interactive response signal corresponding to one or more response actions; alternatively, a plurality of interactive response signals corresponding to a single response action.
Optionally, the response action includes: one or more of stop, start, pause, cancel, charge, test, motion direction control, motion speed control, motion type control, and motion time control actions. For example, until the test response signal corresponding to the "test" action has been received, the machine detects no action or posture other than the "test" action; this also prevents instructions from being triggered falsely. Depending on the task, the user can also choose whether to design the "terminate/pause" action to cancel or withdraw the last action, so that an erroneous instruction sent to the machine's execution processing module can be terminated or paused.
Optionally, the response mode of the response action includes: one or more of sound, vibration, motion, and display mode; wherein, the display mode includes: one or more of image display, light display and character display modes.
The implementation of the human-computer interaction method based on foot information identification is described in more detail below in conjunction with specific embodiments.
Example 1: a man-machine interaction method based on foot information identification is provided. Fig. 2 is a schematic flow chart of the human-computer interaction method based on foot information identification in the embodiment.
Acquiring foot state images in an RGB image form in real time; the foot state image records two complete feet to be identified;
identifying the foot state image to obtain awakening action identification information;
acquiring a wakeup response signal according to the wakeup action identification information;
feeding the awakening response signal back to the robot so as to enable the robot to perform awakening action;
after the robot is awakened, the robot enters a standby state and acquires a new foot state image in real time; the foot state image records two complete feet to be identified;
identifying the foot state image to obtain foot state identification information;
obtaining an interactive response signal corresponding to the foot state identification information according to the foot state identification information;
if the interactive response signal is a cancellation response signal, feeding it back to the robot so that the robot returns to the un-awakened state;
and if the interactive response signal is a non-cancellation response signal, feeding it back to the robot so that the robot makes the response action corresponding to it. If the robot subsequently receives a cancellation response signal again, it returns to the post-wake-up stage in which no action has yet been executed, i.e. the standby state.
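The flow of this embodiment can be summarized as a small state machine, sketched below; the state and signal names are assumptions, and only the transitions themselves (wake-up to standby, cancellation before an action returning the robot to the un-awakened state, cancellation after an action returning it to standby) are taken from the description above.

```python
# Hedged sketch of the wake/standby/cancel flow of Embodiment 1.
class FootInteractionFSM:
    def __init__(self):
        self.state = "SLEEP"  # un-awakened

    def on_signal(self, signal):
        if self.state == "SLEEP":
            if signal == "WAKE":
                self.state = "STANDBY"   # awakened, no action executed yet
        elif self.state == "STANDBY":
            if signal == "CANCEL":
                self.state = "SLEEP"     # cancel before any action: back to sleep
            else:
                self.execute(signal)     # non-cancellation signal: respond
                self.state = "ACTING"
        elif self.state == "ACTING":
            if signal == "CANCEL":
                self.state = "STANDBY"   # cancel after an action: back to standby
            else:
                self.execute(signal)
        return self.state

    def execute(self, signal):
        print(f"executing response action for {signal}")

fsm = FootInteractionFSM()
print(fsm.on_signal("WAKE"))    # -> STANDBY
print(fsm.on_signal("START"))   # executes, -> ACTING
print(fsm.on_signal("CANCEL"))  # -> STANDBY
```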
Similar to the principle of the embodiment, the invention provides a human-computer interaction system based on foot information identification.
Specific embodiments are provided below in conjunction with the attached figures:
fig. 3 is a schematic structural diagram of the human-computer interaction system based on foot information recognition according to an embodiment of the present invention.
The system comprises:
the image acquisition module 31 is used for acquiring foot state images in real time; the foot state image records two complete feet to be identified;
the identification module 32 is connected to the image acquisition module 31 and is used for identifying the foot state image to obtain foot state identification information; wherein the state identification information includes: static identification information and/or dynamic identification information.
An interactive response signal generating module 33, connected to the identifying module 32, for obtaining an interactive response signal corresponding to the foot state identifying information according to the foot state identifying information;
and the interactive action response module 34 is connected with the interactive response signal generation module 33 and is used for feeding the interactive response signal back to the robot so that the robot can make a response action corresponding to the interactive response signal.
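By way of illustration, the four modules above could be wired into a single processing loop as sketched below; the class and method names are assumptions made for the example, not an interface defined by this application.

```python
# Hedged sketch: wiring the four modules of the system into one loop.
# `response_table` is assumed to be a plain dict from foot-state identifiers
# to interactive response signals (cf. the lookup-table sketch above).
class HumanComputerInteractionSystem:
    def __init__(self, camera, recognizer, response_table, robot):
        self.camera = camera                  # image acquisition module 31
        self.recognizer = recognizer          # identification module 32
        self.response_table = response_table  # response signal module 33
        self.robot = robot                    # action response module 34

    def step(self):
        frame = self.camera.capture()               # foot state image
        state_id = self.recognizer.identify(frame)  # foot state identification
        signal = self.response_table.get(state_id)  # interactive response signal
        if signal is not None:
            self.robot.respond(signal)              # response action
```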
Optionally, the image acquisition module 31 acquires a dynamic image of the foot state or a continuous image of the foot state within a period of time in real time, so as to ensure the real-time acquisition of the foot posture.
Optionally, the image capturing module 31 is any device capable of capturing an RGB image or a depth image.
Optionally, the minimum configuration of the image capturing module 31 is an RGB camera; more sensors can be added on this basis to improve the effect of the subsequent algorithms.
Optionally, the image acquisition module 31 is installed at a position from which it can at least capture complete image information of both of the user's feet when the user stands astride.
Optionally, the identification manner of the identification module 32 includes one or more of target detection and identification, image classification, video classification and pose estimation.
Optionally, the identification module 32 identifies static identification information and/or dynamic identification information.
Optionally, the interactive response signal generating module 33 pre-stores the interactive response signals associated with the foot state identification information according to the static identification information and/or dynamic identification information in the foot state identification information. It should be noted that one interactive response signal may be associated with one or more foot state identifiers.
Optionally, the interactive action response module 34 feeds back the interactive response signal corresponding to the foot state identification information to the robot, and the robot may make a response action having an association relationship with the interactive response signal according to the interactive response signal. It should be noted that the association relationship between the interactive response signal and the response action includes: an interactive response signal corresponding to one or more response actions; alternatively, the plurality of interactive response signals correspond to a single response action.
Optionally, the response action includes: one or more of stop, start, pause, cancel, charge, test, motion direction control, motion speed control, motion type control, and motion time control actions.
Optionally, the response manner of the response action includes: one or more of sound, vibration, motion, and display modes; wherein the display mode includes: one or more of image display, light display and character display modes. For example, the machine should acknowledge in some way (e.g., by sound or light) that it has recognized the user's action, to ensure the user knows the machine is operating properly.
Optionally, the interactive action response module 34 is not limited to remaining stationary; guided by the image processing module, it may follow the user while moving and continue to accept new input.
Fig. 4 shows a schematic structural diagram of a human-computer interaction terminal 40 based on foot information identification in the embodiment of the present invention.
The human-computer interaction terminal 40 based on foot information identification comprises: a memory 41 and a processor 42, the memory 41 being for storing computer programs; the processor 42 runs a computer program to implement the human-computer interaction method based on the foot information identification as shown in fig. 1.
Alternatively, the number of the memories 41 may be one or more, the number of the processors 42 may be one or more, and fig. 4 illustrates one example.
Optionally, the processor 42 in the human-computer interaction terminal 40 based on foot information identification loads one or more instructions corresponding to the processes of the application program into the memory 41 according to the steps shown in fig. 1, and the processor 42 runs the application program stored in the memory 41, so as to implement the various functions of the human-computer interaction method based on foot information identification shown in fig. 1.
Optionally, the memory 41 may include, but is not limited to, high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The processor 42 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The invention further provides a computer-readable storage medium storing a computer program which, when run, implements the human-computer interaction method based on foot information identification shown in fig. 1. The computer-readable storage medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The computer-readable storage medium may be a product that has not yet been installed in a computer device, or a component already in use in a computer device.
In summary, the human-computer interaction method, system and terminal based on foot information identification solve the problems of the prior art described above: a user who wants to direct a robot to perform different kinds of work must control it through a remote-control device or face recognition; the remote-control device requires maintenance and is prone to failure, so the guided work cannot proceed and the user experience is poor; robots with special viewing angles cannot use face recognition, which is demanding while tolerating only a small range of variation, so recognition accuracy is low, the amplitude of recognizable actions is small, and the efficiency of human-computer interaction is greatly reduced. By identifying the foot posture in a captured image, the invention realizes direct interaction between the user and the machine, omits intermediate steps, better matches user habits, reduces hardware cost and improves efficiency. The invention therefore effectively overcomes various defects of the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (10)

1. A human-computer interaction method based on foot information identification, characterized by comprising the following steps:
acquiring a foot state image in real time; the foot state image records two complete feet to be identified;
identifying the foot state image to obtain foot state identification information;
obtaining an interactive response signal corresponding to the foot state identification information according to the foot state identification information;
and feeding back the interactive response signal to the robot so as to enable the robot to make a response action corresponding to the interactive response signal.
2. A human-computer interaction method based on foot information identification as claimed in claim 1, wherein the state identification information comprises: static identification information and/or dynamic identification information.
3. A human-computer interaction method based on foot information recognition according to claim 2, wherein the static recognition information comprises: static pose information and/or static position information.
4. A human-computer interaction method based on foot information recognition according to claim 2, wherein the dynamic recognition information comprises: dynamic attitude information and/or dynamic trajectory position information.
5. A human-computer interaction method based on foot information recognition according to claim 1, wherein the recognition manner comprises one or more of target detection and identification, image classification, video classification and pose estimation.
6. A human-computer interaction method based on foot information recognition according to claim 1, wherein the response action comprises: one or more of stop, start, pause, cancel, charge, test, motion direction control, motion speed control, motion type control, and motion time control actions.
7. The human-computer interaction method based on foot information recognition according to claim 1, wherein the foot state image comprises: an RGB image or a depth image.
8. The human-computer interaction method based on foot information recognition according to claim 1, wherein the response mode of the response action comprises: one or more of sound, vibration, motion, and display mode; wherein, the display mode includes: one or more of image display, light display and character display modes.
9. A human-computer interaction system based on foot information identification is characterized in that the system comprises:
the image acquisition module is used for acquiring foot state images in real time; the foot state image records two complete feet to be identified;
the identification module is connected with the image acquisition module and is used for identifying the foot state image to obtain foot state identification information, wherein the state identification information includes: static identification information and/or dynamic identification information;
the interactive response signal generating module is connected with the identification module and is used for acquiring an interactive response signal corresponding to the foot state identification information according to the foot state identification information;
and the interactive action response module is connected with the interactive response signal generation module and used for feeding back the interactive response signal to the robot so as to enable the robot to make a response action corresponding to the interactive response signal.
10. A human-computer interaction terminal based on foot information identification, characterized by comprising:
a memory for storing a computer program;
a processor for executing the human-computer interaction method based on foot information identification according to any one of claims 1 to 7.
CN202011457598.9A 2020-12-10 2020-12-10 Man-machine interaction method, system and terminal based on foot information identification Active CN112379781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011457598.9A CN112379781B (en) 2020-12-10 2020-12-10 Man-machine interaction method, system and terminal based on foot information identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011457598.9A CN112379781B (en) 2020-12-10 2020-12-10 Man-machine interaction method, system and terminal based on foot information identification

Publications (2)

Publication Number Publication Date
CN112379781A true CN112379781A (en) 2021-02-19
CN112379781B CN112379781B (en) 2023-02-28

Family

ID=74590615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011457598.9A Active CN112379781B (en) 2020-12-10 2020-12-10 Man-machine interaction method, system and terminal based on foot information identification

Country Status (1)

Country Link
CN (1) CN112379781B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114232940A (en) * 2021-11-17 2022-03-25 珠海格力电器股份有限公司 Skirting line equipment and control method thereof
WO2023070841A1 (en) * 2021-10-26 2023-05-04 美智纵横科技有限责任公司 Robot control method and apparatus, and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030009259A1 (en) * 2000-04-03 2003-01-09 Yuichi Hattori Robot moving on legs and control method therefor, and relative movement measuring sensor for robot moving on legs
US20070003915A1 (en) * 2004-08-11 2007-01-04 Templeman James N Simulated locomotion method and apparatus
CN104062945A (en) * 2013-03-18 2014-09-24 张朵 Automatic detection control method and system
CN105798931A (en) * 2016-04-26 2016-07-27 南京玛锶腾智能科技有限公司 Arousing method and device for intelligent robot
CN109493857A (en) * 2018-09-28 2019-03-19 广州智伴人工智能科技有限公司 A kind of auto sleep wake-up robot system
CN111026277A (en) * 2019-12-26 2020-04-17 深圳市商汤科技有限公司 Interaction control method and device, electronic equipment and storage medium
CN111242084A (en) * 2020-01-21 2020-06-05 深圳市优必选科技股份有限公司 Robot control method, device, robot and computer readable storage medium
CN111736607A (en) * 2020-06-28 2020-10-02 上海黑眸智能科技有限责任公司 Robot motion guiding method and system based on foot motion and terminal
WO2020213786A1 (en) * 2019-04-17 2020-10-22 주식회사 지티온 Virtual interactive content execution system using body movement recognition


Also Published As

Publication number Publication date
CN112379781B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
Chen et al. Repetitive assembly action recognition based on object detection and pose estimation
CN111568314B (en) Cleaning method and device based on scene recognition, cleaning robot and storage medium
Raheja et al. Real-time robotic hand control using hand gestures
US9171380B2 (en) Controlling power consumption in object tracking pipeline
US20110304541A1 (en) Method and system for detecting gestures
CN112379781B (en) Man-machine interaction method, system and terminal based on foot information identification
US20130342636A1 (en) Image-Based Real-Time Gesture Recognition
US20120086778A1 (en) Time of flight camera and motion tracking method
CN102200830A (en) Non-contact control system and control method based on static gesture recognition
JP2005078376A (en) Object detection device, object detection method, and robot device
WO2022252642A1 (en) Behavior posture detection method and apparatus based on video image, and device and medium
CN112507918B (en) Gesture recognition method
CN102799273B (en) Interaction control system and method
US10444852B2 (en) Method and apparatus for monitoring in a monitoring space
CN109343701A (en) A kind of intelligent human-machine interaction method based on dynamic hand gesture recognition
CN112949689A (en) Image recognition method and device, electronic equipment and storage medium
Kyrkou C 3 Net: end-to-end deep learning for efficient real-time visual active camera control
Chang et al. Parallel design of background subtraction and template matching modules for image objects tracking system
WO2021248857A1 (en) Obstacle attribute discrimination method and system, and intelligent robot
CN111736607A (en) Robot motion guiding method and system based on foot motion and terminal
CN108181989B (en) Gesture control method and device based on video data and computing equipment
JP5713655B2 (en) Video processing apparatus, video processing method, and program
Ashok et al. FINGER RECONGITION AND GESTURE BASED VIRTUAL KEYBOARD
Wang et al. Research and Design of Human Behavior Recognition Method in Industrial Production Based on Depth Image
CN113715019B (en) Robot control method, device, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant