CN114724243A - Bionic action recognition system based on artificial intelligence - Google Patents

Bionic action recognition system based on artificial intelligence

Info

Publication number
CN114724243A
Authority
CN
China
Prior art keywords
module
output end
input end
action
recognition system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210324915.2A
Other languages
Chinese (zh)
Inventor
赵新博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202210324915.2A
Publication of CN114724243A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a bionic action recognition system based on artificial intelligence, which comprises a motion capture module, wherein the output end of the motion capture module is connected with the input end of a processing module, and the output end of the processing module is connected with the input end of the recognition system. In this bionic action recognition system based on artificial intelligence, the input end of the central processing system is connected with the output end of the action classification module, and the actions to be recognized are acquired and modeled in advance, so that the system gains a clear understanding of the approximate motion trajectory of each action. Different gestures correspond to different actions, so the robot can recognize a command simply and perform the corresponding operation, and the error rate is greatly reduced. Through advance training, the robot becomes better acquainted with the actions it is required to imitate, erroneous actions can be corrected promptly, and the number of recognizable actions is increased.

Description

Bionic action recognition system based on artificial intelligence
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a bionic action recognition system based on artificial intelligence.
Background
Artificial intelligence, abbreviated as AI, is a new technical science that researches and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. It is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can respond in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing and expert systems. Since its emergence, the theory and technology of artificial intelligence have matured steadily and its fields of application have continued to expand; it can be assumed that the scientific and technological products brought by artificial intelligence in the future will be "containers" of human intelligence. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is not human intelligence, but it can think like a human and may even exceed human intelligence. Bionics is a scientific method of building technical systems by simulating the functions and behaviors of biological systems, breaking down the boundary between organisms and machines and connecting the two kinds of systems.
Existing systems have only a limited ability to recognize bionic actions. Because the amplitude of the action to be imitated is not clearly defined, the robot is prone to errors when executing the action, which affects its overall operation; the range of actions that can be recognized is also limited, so such systems have certain limitations.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a bionic action recognition system based on artificial intelligence, which solves the problems that a robot is prone to errors when executing actions and that the range of recognizable actions is limited.
In order to achieve the above purpose, the invention is realized by the following technical scheme: a bionic action recognition system based on artificial intelligence comprises a motion capture module, wherein the output end of the motion capture module is connected with the input end of a processing module, the output end of the processing module is connected with the input end of a recognition system, the recognition system is bidirectionally connected with a control terminal, and the recognition system is bidirectionally connected with a large database;
the recognition system comprises a central processing system, the input end of the central processing system is connected with the output end of an action classification module, the input end of the action classification module is connected with the output end of a training model acquisition module, the output end of the central processing system is connected with the input end of a robot operation module, the central processing system is bidirectionally connected with a classified action gesture establishment module, and the output end of the robot operation module is connected with the input end of an operation state monitoring module.
Preferably, the output end of the operation state monitoring module is connected with the input end of an erroneous action recognition module, the output end of the erroneous action recognition module is connected with the input end of an improvement processing module, and the output end of the improvement processing module is connected with the input end of the central processing system.
Preferably, the central processing system is bidirectionally connected with a data update module, and the output end of the data update module is connected with the input end of a data backup module.
Preferably, the processing module comprises an action image acquisition module, and the output end of the action image acquisition module is connected with the input end of an image noise reduction module.
Preferably, the output end of the image noise reduction module is connected with the input end of an action contour extraction module, the output end of the action contour extraction module is connected with the input end of a feature extraction module, and the output end of the feature extraction module is connected with the input end of a training model establishment module.
Preferably, the robot operation module comprises a command gesture acquisition module, and the output end of the command gesture acquisition module is connected with the input end of a command receiving module.
Preferably, the output end of the command receiving module is connected with the input end of a gesture recognition module, the output end of the gesture recognition module is connected with the input end of a corresponding action acquisition module, and the output end of the corresponding action acquisition module is connected with the input end of an action execution module.
Advantageous effects
The invention provides a bionic action recognition system based on artificial intelligence. Compared with the prior art, the invention has the following beneficial effects:
(1) In this bionic action recognition system based on artificial intelligence, the input end of the central processing system is connected with the output end of the action classification module, the input end of the action classification module is connected with the output end of the training model acquisition module, the output end of the central processing system is connected with the input end of the robot operation module, the central processing system is bidirectionally connected with the classified action gesture establishment module, the output end of the robot operation module is connected with the input end of the operation state monitoring module, the output end of the operation state monitoring module is connected with the input end of the erroneous action recognition module, the output end of the erroneous action recognition module is connected with the input end of the improvement processing module, and the output end of the improvement processing module is connected with the input end of the central processing system. The actions to be recognized are acquired and modeled in advance, so that the system gains a clear understanding of the approximate motion trajectory of each action; different gestures correspond to different actions, so the robot can recognize commands simply and carry out the corresponding operations, and the error rate is greatly reduced.
(2) In this bionic action recognition system based on artificial intelligence, the processing module comprises an action image acquisition module, the output end of the action image acquisition module is connected with the input end of the image noise reduction module, the output end of the image noise reduction module is connected with the input end of the action contour extraction module, the output end of the action contour extraction module is connected with the input end of the feature extraction module, and the output end of the feature extraction module is connected with the input end of the training model establishment module. Through advance training, the robot becomes better acquainted with the actions it is required to imitate, erroneous actions can be corrected promptly, and the number of recognizable actions is increased.
Drawings
FIG. 1 is a schematic block diagram of the architecture of the system of the present invention;
FIG. 2 is a schematic block diagram of the structure of a processing module of the present invention;
FIG. 3 is a schematic block diagram of the structure of the recognition system of the present invention.
In the figures: 1-motion capture module, 2-processing module, 21-action image acquisition module, 22-image noise reduction module, 23-action contour extraction module, 24-feature extraction module, 25-training model establishment module, 3-recognition system, 31-central processing system, 32-action classification module, 33-training model acquisition module, 34-robot operation module, 341-command gesture acquisition module, 342-command receiving module, 343-gesture recognition module, 344-corresponding action acquisition module, 345-action execution module, 35-classified action gesture establishment module, 36-operation state monitoring module, 37-erroneous action recognition module, 38-improvement processing module, 39-data update module, 310-data backup module, 4-control terminal, 5-large database.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-3, the present invention provides a technical solution: a bionic action recognition system based on artificial intelligence comprises a motion capture module 1, wherein the output end of the motion capture module 1 is connected with the input end of the processing module 2, the output end of the processing module 2 is connected with the input end of the recognition system 3, the recognition system 3 is bidirectionally connected with the control terminal 4, and the recognition system 3 is bidirectionally connected with the large database 5;
the recognition system 3 comprises a central processing system 31, the input end of the central processing system 31 is connected with the output end of the action classification module 32, the input end of the action classification module 32 is connected with the output end of the training model acquisition module 33, the output end of the central processing system 31 is connected with the input end of the robot operation module 34, the central processing system 31 is bidirectionally connected with the classified action gesture establishment module 35, and the output end of the robot operation module 34 is connected with the input end of the operation state monitoring module 36. The actions to be recognized are acquired and modeled in advance, so that the system gains a clear understanding of the approximate motion trajectory of each action; different gestures correspond to different actions, so the robot can recognize commands simply and carry out the corresponding operations, and the error rate is greatly reduced.
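By way of a purely illustrative, non-limiting sketch that does not form part of the original disclosure, the action classification module 32 and the classified action gesture establishment module 35 could be organized along the following lines in Python; the class names, the nearest-centroid classifier and the feature-vector representation are assumptions introduced only to make the data flow concrete.

    # Illustrative sketch only: a nearest-centroid action classifier (module 32)
    # plus a table that assigns one command gesture to each classified action
    # (module 35). All names and the choice of classifier are assumptions.
    import numpy as np

    class ActionClassifier:
        """Groups training-model feature vectors into named action classes."""

        def __init__(self):
            self.centroids = {}  # action name -> mean feature vector

        def fit(self, training_models):
            # training_models: {action name: list of feature vectors (module 33)}
            for name, vectors in training_models.items():
                self.centroids[name] = np.mean(np.asarray(vectors, dtype=float), axis=0)

        def classify(self, feature_vector):
            # Return the action whose centroid is closest to the observed features.
            distances = {name: np.linalg.norm(np.asarray(feature_vector, dtype=float) - c)
                         for name, c in self.centroids.items()}
            return min(distances, key=distances.get)

    class GestureTable:
        """Classified action gesture establishment: one simple gesture per action."""

        def __init__(self):
            self.gesture_for_action = {}
            self.action_for_gesture = {}

        def assign(self, action_name, gesture_name):
            self.gesture_for_action[action_name] = gesture_name
            self.action_for_gesture[gesture_name] = action_name

    # Example usage with made-up feature vectors:
    clf = ActionClassifier()
    clf.fit({"wave": [[0.9, 0.1], [0.8, 0.2]], "bow": [[0.1, 0.9], [0.2, 0.8]]})
    table = GestureTable()
    table.assign("wave", "open_palm")
    table.assign("bow", "closed_fist")
    print(clf.classify([0.85, 0.15]), table.gesture_for_action["wave"])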
In the present invention, the output end of the operation state monitoring module 36 is connected to the input end of the erroneous action recognition module 37, the output end of the erroneous action recognition module 37 is connected to the input end of the improvement processing module 38, and the output end of the improvement processing module 38 is connected to the input end of the central processing system 31.
In the present invention, the central processing system 31 is bidirectionally connected to the data update module 39, and the output end of the data update module 39 is connected to the input end of the data backup module 310.
In the present invention, the processing module 2 includes an action image acquisition module 21, and the output end of the action image acquisition module 21 is connected to the input end of the image noise reduction module 22.
In the present invention, the output end of the image noise reduction module 22 is connected with the input end of the action contour extraction module 23, the output end of the action contour extraction module 23 is connected with the input end of the feature extraction module 24, and the output end of the feature extraction module 24 is connected with the input end of the training model establishment module 25, so that through advance training the robot becomes better acquainted with the actions it is required to imitate, erroneous actions can be corrected promptly, and the number of recognizable actions is increased.
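As a purely illustrative, non-limiting sketch that does not form part of the original disclosure, the processing chain of modules 22 to 25 could be implemented roughly as follows; the OpenCV calls, the Hu-moment features and all function names are assumptions chosen only to make the pipeline concrete.

    # Illustrative sketch only: image noise reduction (22) -> action contour
    # extraction (23) -> feature extraction (24) -> training model establishment
    # (25), using OpenCV. The filters and features are assumptions.
    import cv2
    import numpy as np

    def reduce_noise(frame_bgr):
        # Module 22: smooth the raw action image so that contours are cleaner.
        return cv2.GaussianBlur(frame_bgr, (5, 5), 0)

    def extract_action_contour(frame_bgr):
        # Module 23: recover the rough outline of the demonstrated movement.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return max(contours, key=cv2.contourArea)  # keep the dominant contour

    def extract_features(contour):
        # Module 24: scale- and rotation-tolerant shape features (Hu moments).
        hu = cv2.HuMoments(cv2.moments(contour)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)  # common log scaling

    def build_training_model(labelled_frames):
        # Module 25: accumulate per-action feature vectors into a training model
        # that the recognition system 3 can later classify against.
        model = {}
        for action_name, frame in labelled_frames:
            contour = extract_action_contour(reduce_noise(frame))
            if contour is not None:
                model.setdefault(action_name, []).append(extract_features(contour))
        return model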
In the present invention, the robot operation module 34 includes a command gesture acquisition module 341, and the output end of the command gesture acquisition module 341 is connected to the input end of the command receiving module 342.
In the present invention, the output end of the command receiving module 342 is connected to the input end of the gesture recognition module 343, the output end of the gesture recognition module 343 is connected to the input end of the corresponding action acquisition module 344, and the output end of the corresponding action acquisition module 344 is connected to the input end of the action execution module 345.
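Again as a purely illustrative, non-limiting sketch, reusing the helper functions and classes from the two sketches above, the command path of the robot operation module 34 might look as follows; the classifier interface and the execute_action callback are assumptions.

    # Illustrative sketch only: receive a command gesture image (341/342),
    # recognize the gesture (343), look up the corresponding action (344) and
    # execute it (345). Builds on reduce_noise, extract_action_contour,
    # extract_features, ActionClassifier and GestureTable defined above.
    def recognize_gesture(gesture_frame, gesture_classifier):
        # Module 343: classify the operator's command gesture from its image.
        contour = extract_action_contour(reduce_noise(gesture_frame))
        if contour is None:
            return None
        return gesture_classifier.classify(extract_features(contour))

    def run_robot_operation(gesture_frame, gesture_classifier, table, execute_action):
        # Modules 341/342: the command gesture image has been captured and received.
        gesture = recognize_gesture(gesture_frame, gesture_classifier)  # module 343
        if gesture is None or gesture not in table.action_for_gesture:
            return None
        action = table.action_for_gesture[gesture]  # module 344
        execute_action(action)                      # module 345
        return action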
Matters not described in detail in this specification are well within the skill of those in the art.
In use, the actions that the robot is required to imitate are demonstrated in advance. The motion capture module 1 captures the actions and transmits them to the processing module 2. The action image acquisition module 21 in the processing module 2 forms rough images from the demonstrated actions, and the images are then denoised to make them clearer. The action contour extraction module 23 obtains the rough outline of the motion trajectory from the denoised images, the feature extraction module 24 extracts the salient features, and the training model establishment module 25 forms the training model of the system and provides it to the robot for imitation. The actions in the recognition system 3 are classified according to the established training model, and the classified action gesture establishment module 35 then establishes a different simple gesture for each action. The robot operation module 34 obtains the gesture to be followed through the command gesture acquisition module 341, obtains the action corresponding to the recognized gesture, and imitates that action through the action execution module 345. The operation state monitoring module 36 monitors the robot's actions, the erroneous action recognition module 37 identifies where the robot's actions go wrong, and these errors are then handled by the improvement processing module 38. The improved data are updated through the data update module 39 and backed up through the data backup module 310, while all operating data are stored in the large database 5.
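Finally, as a purely illustrative, non-limiting sketch that does not form part of the original disclosure, the supervision loop formed by modules 36, 37, 38, 39 and 310 might be organized as follows; the deviation metric, the threshold and the way corrections are folded back into the training model are assumptions.

    # Illustrative sketch only: the operation state monitoring module (36)
    # compares the executed action with the training model, the erroneous action
    # recognition module (37) flags deviations, the improvement processing module
    # (38) feeds a correction back, and the data update (39) / data backup (310)
    # modules store the revised model.
    import copy
    import numpy as np

    def monitor_and_improve(executed_features, reference_features, model,
                            action_name, threshold=1.0):
        # Modules 36/37: measure how far the executed action drifted from the
        # reference trajectory stored in the training model.
        deviation = float(np.linalg.norm(np.asarray(executed_features, dtype=float) -
                                         np.asarray(reference_features, dtype=float)))
        if deviation <= threshold:
            return model, None  # action accepted as correct, no update needed

        # Module 38: produce an improved model that re-emphasizes the correct
        # reference example for the action that was performed wrongly.
        improved = copy.deepcopy(model)
        improved.setdefault(action_name, []).append(list(reference_features))

        # Modules 39/310: the data update module adopts the improved model and the
        # data backup module keeps a copy (in practice, in the large database 5).
        backup = copy.deepcopy(improved)
        return improved, backup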
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A bionic action recognition system based on artificial intelligence, comprising a motion capture module (1), characterized in that: the output end of the motion capture module (1) is connected with the input end of a processing module (2), the output end of the processing module (2) is connected with the input end of a recognition system (3), the recognition system (3) is bidirectionally connected with a control terminal (4), and the recognition system (3) is bidirectionally connected with a large database (5);
the recognition system (3) comprises a central processing system (31), the input end of the central processing system (31) is connected with the output end of an action classification module (32), the input end of the action classification module (32) is connected with the output end of a training model acquisition module (33), the output end of the central processing system (31) is connected with the input end of a robot operation module (34), the central processing system (31) is bidirectionally connected with a classified action gesture establishment module (35), and the output end of the robot operation module (34) is connected with the input end of an operation state monitoring module (36).
2. The bionic action recognition system based on artificial intelligence according to claim 1, wherein: the output end of the operation state monitoring module (36) is connected with the input end of an erroneous action recognition module (37), the output end of the erroneous action recognition module (37) is connected with the input end of an improvement processing module (38), and the output end of the improvement processing module (38) is connected with the input end of the central processing system (31).
3. The bionic action recognition system based on artificial intelligence according to claim 1, wherein: the central processing system (31) is bidirectionally connected with a data update module (39), and the output end of the data update module (39) is connected with the input end of a data backup module (310).
4. The bionic action recognition system based on artificial intelligence according to claim 1, wherein: the processing module (2) comprises an action image acquisition module (21), and the output end of the action image acquisition module (21) is connected with the input end of an image noise reduction module (22).
5. The bionic action recognition system based on artificial intelligence according to claim 4, wherein: the output end of the image noise reduction module (22) is connected with the input end of an action contour extraction module (23), the output end of the action contour extraction module (23) is connected with the input end of a feature extraction module (24), and the output end of the feature extraction module (24) is connected with the input end of a training model establishment module (25).
6. The bionic action recognition system based on artificial intelligence according to claim 1, wherein: the robot operation module (34) comprises a command gesture acquisition module (341), and the output end of the command gesture acquisition module (341) is connected with the input end of a command receiving module (342).
7. The bionic action recognition system based on artificial intelligence according to claim 6, wherein: the output end of the command receiving module (342) is connected with the input end of a gesture recognition module (343), the output end of the gesture recognition module (343) is connected with the input end of a corresponding action acquisition module (344), and the output end of the corresponding action acquisition module (344) is connected with the input end of an action execution module (345).
CN202210324915.2A 2022-03-29 2022-03-29 Bionic action recognition system based on artificial intelligence Pending CN114724243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210324915.2A CN114724243A (en) 2022-03-29 2022-03-29 Bionic action recognition system based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210324915.2A CN114724243A (en) 2022-03-29 2022-03-29 Bionic action recognition system based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN114724243A true CN114724243A (en) 2022-07-08

Family

ID=82240857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210324915.2A Pending CN114724243A (en) 2022-03-29 2022-03-29 Bionic action recognition system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN114724243A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9785247B1 (en) * 2014-05-14 2017-10-10 Leap Motion, Inc. Systems and methods of tracking moving hands and recognizing gestural interactions
US10768708B1 (en) * 2014-08-21 2020-09-08 Ultrahaptics IP Two Limited Systems and methods of interacting with a robotic tool using free-form gestures
CN105867630A (en) * 2016-04-21 2016-08-17 深圳前海勇艺达机器人有限公司 Robot gesture recognition method and device and robot system
CN107443396A (en) * 2017-08-25 2017-12-08 魔咖智能科技(常州)有限公司 A kind of intelligence for imitating human action in real time accompanies robot
CN108647654A (en) * 2018-05-15 2018-10-12 合肥岚钊岚传媒有限公司 The gesture video image identification system and method for view-based access control model
CN109590986A (en) * 2018-12-03 2019-04-09 深圳市越疆科技有限公司 Robot teaching's method, intelligent robot and storage medium
CN110308669A (en) * 2019-07-27 2019-10-08 南京市晨枭软件技术有限公司 A kind of modular robot selfreparing analogue system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
赵新博: "Construction of a virtual simulation experiment platform for remote visual database maintenance", Journal of Taiyuan Normal University (Natural Science Edition), vol. 20, no. 4, 31 December 2021 (2021-12-31), pages 43-47 *
陈苏明; 高正创; 王若愚; 卢小康; 叶子夜: "Research on a bionic mechanical arm based on image recognition", Information & Computer (Theoretical Edition), no. 05, 10 March 2020 (2020-03-10) *

Similar Documents

Publication Publication Date Title
US10949658B2 (en) Method and system for activity classification
Niekum et al. Learning and generalization of complex tasks from unstructured demonstrations
Malima et al. A fast algorithm for vision-based hand gesture recognition for robot control
US20180211104A1 (en) Method and device for target tracking
CN108983979B (en) Gesture tracking recognition method and device and intelligent equipment
Lee et al. A syntactic approach to robot imitation learning using probabilistic activity grammars
CN109598229B (en) Monitoring system and method based on action recognition
CN109117893A (en) A kind of action identification method and device based on human body attitude
CN111104820A (en) Gesture recognition method based on deep learning
CN103529944A (en) Human body movement identification method based on Kinect
CN111985333B (en) Behavior detection method based on graph structure information interaction enhancement and electronic device
CN110532883A (en) On-line tracking is improved using off-line tracking algorithm
CN111860117A (en) Human behavior recognition method based on deep learning
CN110188669A (en) A kind of aerial hand-written character track restoration methods based on attention mechanism
CN114187561A (en) Abnormal behavior identification method and device, terminal equipment and storage medium
CN116922379A (en) Vision-based mechanical arm obstacle avoidance method, system, electronic equipment and storage medium
CN118340500A (en) Entity interaction system and method for autism spectrum disorder children
CN114863571A (en) Collaborative robot gesture recognition system based on computer vision
CN114819557A (en) Intelligent dance evaluation and teaching method and device based on gesture recognition
CN112598953B (en) Train driving simulation system-based crew member evaluation system and method
CN202584048U (en) Smart mouse based on DSP image location and voice recognition
CN114724243A (en) Bionic action recognition system based on artificial intelligence
CN110108510A (en) Based on embedded system automobile electronics intelligent checking system and its method
CN114768246A (en) Game man-machine interaction method and system
CN109213101A (en) Pretreated method and system under a kind of robot system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination