CN112656402B - Acquisition robot linkage control system applied to 3D posture detection and analysis - Google Patents

Acquisition robot linkage control system applied to 3D posture detection and analysis

Info

Publication number
CN112656402B
Authority
CN
China
Prior art keywords
module
user
acquisition
analysis
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011377769.7A
Other languages
Chinese (zh)
Other versions
CN112656402A (en)
Inventor
张天喜
白定群
宋虹孝
吴基玉
胡荣海
李刚
彭鞘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Younaite Medical Instrument Co ltd
Original Assignee
Chongqing Younaite Medical Instrument Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Younaite Medical Instrument Co ltd filed Critical Chongqing Younaite Medical Instrument Co ltd
Priority to CN202011377769.7A priority Critical patent/CN112656402B/en
Publication of CN112656402A publication Critical patent/CN112656402A/en
Application granted granted Critical
Publication of CN112656402B publication Critical patent/CN112656402B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to the technical field of posture analysis, and in particular to an acquisition robot linkage control system applied to 3D posture detection and analysis. The system comprises a control host, an input device, a display and a plurality of acquisition robots, wherein the control host is in data connection with the acquisition robots and the display. The acquisition robots are used for acquiring the walking posture data of a user. The control host includes: a scene scanning module for detecting the environmental data of the current scene; a scene construction module for constructing a detection scene according to the environmental data; a user walking route acquisition module for acquiring the user's walking route; and a path planning module for planning the walking paths and acquisition directions of the acquisition robots according to the environmental data and the user's walking route. The acquisition robot linkage control system applied to 3D posture detection and analysis can flexibly acquire the user's posture from multiple angles and in all directions, improving the flexibility of equipment deployment and the accuracy of the analysis results while reducing the deployment cost.

Description

Acquisition robot linkage control system applied to 3D posture detection and analysis
Technical Field
The invention relates to the technical field of posture analysis, and in particular to an acquisition robot linkage control system applied to 3D posture detection and analysis.
Background
The human body's posture reflects its state of health. Analyzing posture makes it possible to judge the health or rehabilitation condition of the muscles, joints and other parts of the body, thereby providing a basis for rehabilitation diagnosis and treatment, exercise and fitness, outcome evaluation, assistive device selection and the like.
Gait analysis is the most common mode of human posture analysis. Using modern measurement technology, it performs dynamic quantitative analysis of the motion and loading of the parts of the body, especially the lower limbs, during walking. It can be used to analyze the walking cycle of a normal person, and it is one of the more common and effective means of systematically evaluating walking function in the clinic, forming an important component of rehabilitation evaluation (for example, guiding rehabilitation treatment and rehabilitation evaluation after stroke).
In conventional gait analysis, medical staff visually observe the walking process of a patient and then, relying on rich clinical experience, reach a preliminary conclusion from the impression obtained or from the results of item-by-item evaluation against certain observation items. However, this method is only qualitative, not quantitative. With the development of science and technology, more and more gait analysis is now recorded and analyzed with the help of auxiliary equipment. One existing approach is to arrange a treadmill and a camera: the user walks on the treadmill while the camera collects posture data. This approach requires the camera to be fixedly installed at a preset position, which is difficult to adjust after deployment, and it suffers from inconvenience in use, poor flexibility, and difficulty of expansion and maintenance.
Disclosure of Invention
The invention aims to provide an acquisition robot linkage control system applied to 3D posture detection and analysis, which can solve the problems of poor flexibility, inconvenient maintenance and expansion, and incomplete acquisition angles of existing 3D posture detection equipment.
The application provides the following technical scheme:
the acquisition robot linkage control system applied to 3D posture detection and analysis comprises a control host, an input device, a display and a plurality of acquisition robots, wherein the control host is in data connection with the acquisition robots and the display;
the acquisition robot is used for acquiring walking posture data of a user;
the control host includes:
the scene scanning module is used for detecting the environmental data of the current scene;
the scene construction module is used for constructing a detection scene according to the environment data;
the user walking route acquisition module is used for acquiring a user walking route input by the input device;
the path planning module is used for planning the walking paths and acquisition directions of the acquisition robots according to the environmental data and the user walking route;
the robot control module is used for controlling the position and the direction of the acquisition robot according to the planned walking path and the acquisition direction;
the display is used for displaying the detection scene, the user walking route and the walking paths of the acquisition robots.
Further, the control host further comprises a scene correction module for the user to correct the environmental data of the detection scene.
Further, the path planning module includes:
the grouping module is used for dividing the acquisition robots into different groups according to the number of users to be acquired and the user information, each group of acquisition robots being responsible for acquiring the walking posture data of one user;
the angle dividing module is used for dividing the angle range that each acquisition robot is responsible for acquiring, according to the number of robots in each group;
the position dividing module is used for determining the acquisition position and acquisition direction of each robot according to the user's walking route and the angle range that each acquisition robot is responsible for acquiring;
and the path generation module is used for generating a walking path for each robot according to its current position and its assigned acquisition position.
Further, the scene scanning module includes a ranging module, the environment data includes a scene size, and the ranging module is configured to detect the scene size of the current scene.
Further, the scene scanning module further comprises an obstacle scanning module, which is used for acquiring obstacles in the scene; the path generation module further comprises an obstacle avoidance module used for adjusting the walking path according to the obstacles in the scene.
Further, the acquisition robot includes:
the travelling mechanism comprises travelling wheels, a driving motor and a steering motor, the driving motor is in power connection with the travelling wheels, the driving motor is used for driving the travelling wheels to rotate, the steering motor is in power connection with the travelling wheels, and the steering motor is used for driving the travelling wheels to steer;
the acquisition mechanism comprises a holder, and a camera is arranged on the holder;
the main control module, main control module includes controller, wireless communication module, the controller is connected with wireless communication module, camera, driving motor and turns to the equal electricity of motor, the controller is used for gathering image data through the camera, the camera is used for sending the data of gathering to the main control system through wireless communication module, the controller still is used for receiving control command and controlling camera, driving motor and turn to the motor through wireless communication module and open and close.
Further, the acquisition mechanism comprises a lifting mechanism, the lifting mechanism comprises a lifting motor, the holder is arranged on the lifting mechanism, the lifting motor is electrically connected with the controller, and the controller is also used for controlling the lifting motor to open and close.
Further, the system comprises a real-time adjustment module for controlling the acquisition mechanism of each acquisition robot to adjust the height and angle of its camera according to the pictures acquired by that robot's camera.
Further, at least three acquisition robots are provided.
Further, the camera is a 3D structured-light camera.
The technical scheme of the invention has the beneficial effects that:
1. according to the technical scheme, the collection robot is arranged, the walking path of the robot is generated based on the scene and the user walking route, and the collected angle range is divided and distributed to each robot according to the number of the users and the user walking route, so that the collection robot can collect the user posture in a multi-angle and all-around manner, and the all-around detection and collection of the user are ensured.
2. By adopting acquisition robots, the system can be deployed flexibly according to the actual detection scene and user conditions, which improves the flexibility of equipment deployment, reduces the deployment cost and improves the accuracy of the analysis results.
3. By adopting a 3D structured-light camera, depth information can be acquired, compared with using an ordinary camera alone, providing more accurate detection data.
Drawings
Fig. 1 is a flowchart of a first embodiment of the acquisition robot linkage control system applied to 3D posture detection and analysis according to the present invention.
Detailed Description
The technical scheme of the application is explained in further detail through the following specific embodiments:
example one
As shown in Fig. 1, the acquisition robot linkage control system applied to 3D posture detection and analysis disclosed in this embodiment includes a control host, an input device, a display and a plurality of acquisition robots, where the control host is connected with the acquisition robots and the display; specifically, in this embodiment, the control host is connected with the acquisition robots and the display through a wireless network.
The acquisition robots are used for acquiring the walking posture data of the user; at least three acquisition robots are assigned to each user for data acquisition. The technical scheme can acquire walking posture data for a plurality of users, with the number of acquisition robots determined according to the number of users; in this embodiment, one user is taken as an example and three acquisition robots are provided.
The acquisition robot comprises a travelling mechanism, an acquisition mechanism and a main control module.
The travelling mechanism comprises travelling wheels, a driving motor and a steering motor; the driving motor is in power connection with the travelling wheels and drives them to rotate, and the steering motor is in power connection with the travelling wheels and drives them to steer. In this embodiment, the travelling mechanism is an existing common four-wheel mechanism, similar to that of a remote-control car. Specifically, there are four travelling wheels, divided into two front wheels and two rear wheels; the two rear wheels are connected through a rotating shaft on which a transmission gear is arranged, and the output shaft of the driving motor is in power connection with the two rear wheels through the transmission gear, so that the driving motor drives the two rear wheels to rotate and thereby propels the robot. The steering motor adopts a steering engine: its output rod is connected with a steering rod whose two ends are respectively hinged to the two front wheels, and the steering engine drives the steering rod to rotate, thereby steering the front wheels. In other embodiments of the present application, other walking structures may be adopted, such as two wheels for driving and steering together with two universal wheels for balance, or the travelling mechanism of an existing two-wheel self-balancing vehicle may be used directly.
A mounting frame is fixed on the travelling mechanism by bolts, and the acquisition mechanism is arranged on the mounting frame. The acquisition mechanism comprises a lifting mechanism; the lifting mechanism comprises a lifting motor, the holder is arranged on the lifting mechanism, and the camera is arranged on the holder. In this embodiment, the lifting mechanism comprises a lead-screw pair: the holder is fixedly connected with the slider of the lead-screw pair, the lead screw is in power connection with the lifting motor through gears, and the lifting motor rotates the lead screw to adjust the lifting of the slider and thereby the lifting of the holder. The holder comprises a fixed block, which is fixedly connected to the slider by bolts. A first rotating mechanism is arranged on the fixed block and comprises a first rotating motor and a first rotating table; a first rotating shaft is fixedly connected to the center of the first rotating table, is rotatably connected with the fixed block, and is in power connection with the first rotating motor through gears. A second rotating mechanism is arranged on the first rotating table and comprises a second rotating motor and a second rotating table; a second rotating shaft is arranged at the center of the second rotating table, is rotatably connected with the first rotating table, and is in power connection with the second rotating motor through gears. The first rotating shaft and the second rotating shaft are perpendicular to each other; in this embodiment, the first rotating shaft is vertical and the second rotating shaft is horizontal, and the camera is arranged on the second rotating table. In this embodiment, the camera is a 3D structured-light camera.
A main control box is arranged on the mounting frame, and the main control module is arranged in the main control box. The main control module comprises a controller and a wireless communication module; the camera, the driving motor, the steering motor and the lifting motor are all electrically connected with the controller. The controller acquires image data through the camera and sends the acquired data to the control host through the wireless communication module; the controller also receives control commands through the wireless communication module and controls the switching of the camera, the driving motor, the steering motor and the lifting motor. The main control module further comprises a positioning module electrically connected with the controller, which the controller uses for indoor positioning. The positioning module comprises an RFID signal receiver; a plurality of RFID tags are arranged in the scene, and the controller calculates its own position by reading the signal strength of each RFID tag. After receiving a control signal from the control host, the controller drives each motor, realizing position movement, rotation and lifting of the camera, acquisition of the user's walking posture data, and so on. In this embodiment, the controller is preferably an STM32-series microcontroller, the wireless communication module adopts a WiFi module, and the camera adopts a 3D structured-light camera.
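The positioning computation itself is not spelled out above; purely as an illustration, the following Python sketch shows one common way such a position could be estimated from the RFID tag signal strengths, namely a weighted centroid over tags at known coordinates. The tag coordinates, the RSSI-to-weight mapping and the exponent are assumptions introduced for the example and are not taken from this application.

# Minimal sketch (assumption): weighted-centroid indoor positioning from RFID RSSI.
# Tag coordinates and the weighting rule are illustrative, not taken from the patent.

TAG_POSITIONS = {            # known (x, y) coordinates of RFID tags in the scene, metres
    "tag_1": (0.0, 0.0),
    "tag_2": (6.0, 0.0),
    "tag_3": (6.0, 4.0),
    "tag_4": (0.0, 4.0),
}

def estimate_position(rssi_readings, exponent=2.0):
    """Estimate the robot's (x, y) from {tag_id: RSSI in dBm}.

    Stronger (less negative) RSSI gives a larger weight; the exponent controls
    how sharply the estimate is pulled toward the strongest tags.
    """
    weights, wx, wy = 0.0, 0.0, 0.0
    for tag_id, rssi in rssi_readings.items():
        if tag_id not in TAG_POSITIONS:
            continue
        # Map RSSI (roughly -90..-30 dBm) to a positive weight.
        w = max(0.0, 100.0 + rssi) ** exponent
        x, y = TAG_POSITIONS[tag_id]
        weights += w
        wx += w * x
        wy += w * y
    if weights == 0.0:
        return None  # no usable tags seen
    return wx / weights, wy / weights

if __name__ == "__main__":
    print(estimate_position({"tag_1": -45, "tag_2": -60, "tag_3": -75, "tag_4": -70}))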
The control host includes: the system comprises a scene scanning module, a scene construction module, a scene correction module, a user walking route acquisition module, a real-time adjustment module, a path planning module and a robot control module.
The scene scanning module is used for detecting the environmental data of the current scene. In this embodiment, the scene scanning module comprises a ranging module and an obstacle scanning module, and the environmental data include the scene size and obstacle data; the ranging module is used for detecting the scene size of the current scene, and the obstacle scanning module is used for acquiring obstacle data in the scene.
The scene construction module is used for constructing a detection scene according to the environmental data. The scene correction module is used for allowing the user to correct the environmental data of the detection scene, such as the scene size and the distribution of obstacles in the scene, ensuring that the constructed scene is consistent with the actual detection scene.
The user walking route acquisition module is used for acquiring the user walking route input through the input device. In this embodiment, the input device is a touch screen, and the administrator directly draws the user's walking route by touch on the display screen showing the scene.
The path planning module is used for planning the walking paths and acquisition directions of the acquisition robots according to the environmental data and the user's walking route.
The real-time adjustment module is used for controlling the acquisition mechanism of each acquisition robot to adjust the height and angle of its camera according to the pictures acquired by that robot's camera. Specifically, in this embodiment, the images acquired by the cameras are used to judge whether the frame covers the user; if not, the camera is adjusted according to the deviation angle, for example by lowering the camera angle if the user's feet are not captured.
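The coverage check is only described at this level of detail; a minimal sketch, assuming a person-detection bounding box is available from each frame and that pan/tilt corrections are proportional to the box's offset from the frame, might look like the following. The detector interface and the gain values are hypothetical.

# Minimal sketch (assumptions): adjust camera pan/tilt so the detected person
# stays fully inside the frame. The detection interface and gains are illustrative only.

from dataclasses import dataclass

@dataclass
class BBox:
    x_min: float  # normalised image coordinates, 0..1
    y_min: float
    x_max: float
    y_max: float

def compute_adjustment(person: BBox, pan_gain=30.0, tilt_gain=20.0):
    """Return (pan_deg, tilt_deg) corrections from the person's bounding box.

    Positive pan turns the camera right, positive tilt turns it down.
    """
    cx = (person.x_min + person.x_max) / 2.0
    pan = (cx - 0.5) * pan_gain                    # centre the person horizontally
    tilt = 0.0
    if person.y_max > 0.98:                        # feet cut off at the bottom edge
        tilt = (person.y_max - 0.98) * tilt_gain   # tilt the camera down
    elif person.y_min < 0.02:                      # head cut off at the top edge
        tilt = -(0.02 - person.y_min) * tilt_gain  # tilt the camera up
    return pan, tilt

if __name__ == "__main__":
    # Person drifting to the right with the feet slightly out of frame.
    print(compute_adjustment(BBox(0.55, 0.10, 0.85, 1.00)))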
The robot control module is used for controlling the positions and directions of the acquisition robots according to the planned walking paths and acquisition directions.
The display is used for displaying the detection scene, the user's walking route and the walking paths of the acquisition robots.
In this embodiment, the path planning module includes:
the grouping module is used for dividing the acquisition robots into different groups according to the number of users to be acquired and the user information, each group of acquisition robots being responsible for acquiring the walking posture data of one user;
the angle dividing module is used for dividing the angle range that each acquisition robot is responsible for acquiring, according to the number of robots in each group;
the position dividing module is used for determining the acquisition position and acquisition direction of each robot according to the user's walking route and the angle range that each acquisition robot is responsible for acquiring;
the path generation module is used for generating a walking path for each robot according to its current position and its assigned acquisition position;
and the obstacle avoidance module is used for adjusting the walking path according to the obstacles in the scene (a rough sketch of one possible planning computation is given after this list).
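As a rough illustration of how the angle division and position assignment could be computed (the application itself does not give formulas for this), the following Python sketch assumes that the robots in a group split 360° evenly and that each robot is placed at a fixed radius from the midpoint of the user's walking route, facing back toward the user; the radius, the grouping rule and the use of the route midpoint are assumptions introduced for the example.

# Minimal sketch (assumptions): divide 360 deg among a group of acquisition robots
# and compute an acquisition position/direction for each, relative to the user's
# walking route. The radius and the use of the route midpoint are illustrative.

import math

def plan_group(route, n_robots, radius=2.0):
    """route: list of (x, y) waypoints of the user's walking route.

    Returns one plan per robot: its angular range, acquisition position and the
    direction (degrees) it should face, evenly covering 360 degrees around the user.
    """
    mid = route[len(route) // 2]                  # reference point to surround
    step = 360.0 / n_robots                       # angular range per robot
    plans = []
    for i in range(n_robots):
        centre_angle = i * step + step / 2.0      # centre of this robot's range
        rad = math.radians(centre_angle)
        pos = (mid[0] + radius * math.cos(rad),   # stand 'radius' metres away
               mid[1] + radius * math.sin(rad))
        plans.append({
            "robot": i,
            "angle_range": (i * step, (i + 1) * step),
            "position": pos,
            "facing_deg": (centre_angle + 180.0) % 360.0,  # look back at the user
        })
    return plans

if __name__ == "__main__":
    for p in plan_group([(0, 0), (2, 0), (4, 0)], n_robots=3):
        print(p)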
Embodiment 2
The difference between this embodiment and the first embodiment is that, in this embodiment, the control host further includes:
the posture analysis module is used for analyzing the user's posture according to the walking posture data acquired by the acquisition robots; the posture analysis module comprises a skeleton tracking module and a data analysis module, and is used for identifying human skeleton data from the acquired walking posture data and constructing human 3D posture data from the skeleton data; specifically, in this embodiment, a Kinect-based human skeleton recognition technology is adopted for the skeleton recognition and the construction of the human 3D posture data.
The data analysis subsystem comprises a data analysis module, which comprises a feature extraction module, an analysis and evaluation module and a report generation module. The feature extraction module is used for extracting analysis features from the human 3D posture data; the analysis and evaluation module comprises an analysis and evaluation model and is used for analyzing the extracted features through that model; and the report generation module is used for generating an analysis result report from the output of the analysis and evaluation model.
the characteristic extraction module is used for extracting analysis characteristics according to the human body 3D posture data, the data characteristic extraction module comprises a characteristic management module, an extraction rule management module and an extraction module, the characteristic management module is used for enabling managers to add or delete characteristics, the extraction rule management module is used for enabling users to establish extraction rules corresponding to the associated characteristics, the extraction module is used for extracting the analysis characteristics from the human body 3D posture data according to the extraction rules, the analysis characteristics comprise steps, step frequency, leg lifting height, head and side tilting angles and the like, and the managers can add the characteristics through the characteristic management module to enable analysis to be more comprehensive.
In this embodiment, the analysis and evaluation model adopts a neural network model based on a rehabilitation standard scale, and the neural network model is used for outputting the posture problem of the user according to the analysis characteristics of the user.
The neural network model adopts a BP neural network comprising an input layer, a hidden layer and a plurality of output layers, each output layer corresponding to one posture problem, such as lumbar vertebra protrusion, leg-type classification or pelvic inclination. The analysis features correspond to the indexes of the rehabilitation standard scale and are used as the input of the input layer, and each output layer outputs the probability that the user has the corresponding posture problem. In this embodiment, the user's analysis features are fed to the input layer and the outputs are the predicted probabilities; the number of hidden nodes is determined by the following formula:
l = √(n + m) + a
wherein l is the number of hidden-layer nodes, n is the number of input-layer nodes, m is the number of output-layer nodes, and a is a number between 1 and 10. BP neural networks typically employ differentiable sigmoid functions and linear functions as network excitation functions. In this embodiment, the tansig function is used as the excitation function of the hidden-layer neurons, the prediction model selects the S-shaped tansig function as the excitation function of the output-layer neurons, and the existing data are used as sample pairs for training. In other embodiments of the application, basic information of the user, such as age, height and sex, and symptoms of the user, such as leg pain or lumbar pain, can also be used as inputs of the input layer when training and using the model, further improving the accuracy of the analysis.
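To make the structure concrete, the following is a minimal NumPy sketch of such a BP network under stated assumptions: the hidden-layer size follows the empirical formula above, the hidden layer uses tanh (tansig), and a sigmoid output layer is used here so that the outputs read directly as probabilities; the feature list, problem labels and training data are placeholders, not data from this application.

# Minimal sketch (assumptions): a small BP network mapping analysis features to
# posture-problem probabilities. Features, labels and data are placeholders.

import numpy as np

rng = np.random.default_rng(0)

def hidden_nodes(n_in, n_out, a=4):
    # Empirical rule from the text: l = sqrt(n + m) + a, with a between 1 and 10.
    return int(round((n_in + n_out) ** 0.5 + a))

class BPNet:
    """Tiny BP network: tanh (tansig) hidden layer, sigmoid outputs as probabilities."""

    def __init__(self, n_in, n_out, a=4):
        n_hid = hidden_nodes(n_in, n_out, a)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)                       # hidden layer
        self.p = 1.0 / (1.0 + np.exp(-(self.h @ self.W2 + self.b2)))  # problem probabilities
        return self.p

    def train(self, X, Y, lr=0.5, epochs=3000):
        for _ in range(epochs):
            P = self.forward(X)
            d_out = (P - Y) / len(X)                        # cross-entropy gradient at the output
            d_hid = (d_out @ self.W2.T) * (1.0 - self.h ** 2)  # tanh derivative
            self.W2 -= lr * self.h.T @ d_out
            self.b2 -= lr * d_out.sum(axis=0)
            self.W1 -= lr * X.T @ d_hid
            self.b1 -= lr * d_hid.sum(axis=0)

if __name__ == "__main__":
    # Toy features: [step length, step frequency, leg-lift height, head tilt]
    X = np.array([[0.6, 1.0, 0.12, 2.0],
                  [0.4, 0.7, 0.05, 8.0],
                  [0.7, 1.1, 0.14, 1.0],
                  [0.3, 0.6, 0.04, 9.0]])
    # Toy labels: [lumbar protrusion, pelvic tilt] present (1) or absent (0)
    Y = np.array([[0, 0], [1, 1], [0, 0], [1, 1]], dtype=float)
    net = BPNet(n_in=4, n_out=2)
    net.train(X, Y)
    print(np.round(net.forward(X), 2))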
The data analysis subsystem also comprises an evaluation result correction module, and the evaluation result correction module is used for correcting the evaluation result report.
The analysis and evaluation module further comprises a model adjusting module, which comprises an iteration module and a manual correction module. The iteration module is used for iteratively training the analysis and evaluation model according to historical evaluation result reports, and the manual correction module is used by administrators to modify the indexes of the input layer of the analysis and evaluation model.
The display is also used for displaying human body 3D posture data and an analysis result report.
Embodiment 3
The difference between this embodiment and the second embodiment is that, in this embodiment, the data analysis module further comprises a time analysis module, a reason analysis module and an adjustment suggestion module, and the system further comprises a user terminal connected to the control host via a network. The user terminal is used for uploading historical image data to the control host. The time analysis module comprises a degree analysis module, an image analysis module and a comprehensive analysis module. The image analysis module performs image analysis on the photos and videos in the historical image data uploaded by the user to obtain the user's posture condition, judges whether the user's posture in the photos or videos of each period is problematic, and thereby obtains the time at which the user's posture problem arose. The degree analysis module estimates the time at which the posture problem arose according to the severity of the posture problem analyzed by the data analysis module, and the comprehensive analysis module generates the time at which the user's posture problem arose from the results of the degree analysis module and the image analysis module. The reason analysis module obtains a list of possible causes of the posture problem according to the user's posture problem, and selects from the list the cause closest to the user's situation, combining the time at which the posture problem arose with the corresponding historical image data. The adjustment suggestion module generates improvement suggestions according to the user's posture problem and the corresponding cause.
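The application does not specify how the comprehensive analysis module combines the two estimates; purely as an illustration, the following Python sketch takes the earliest dated photo in which the problem is detected, estimates an onset date from the current severity under an assumed linear progression, and averages the two dates. The detection callback, progression rate and averaging rule are all assumptions.

# Minimal sketch (assumptions): combine an image-based and a severity-based estimate
# of when a posture problem began. Detection function, progression rate and the
# simple averaging are illustrative only.

from datetime import date, timedelta

def image_based_onset(dated_photos, has_problem):
    """dated_photos: list of (date, image); has_problem: image -> bool.
    Returns the earliest date at which the problem is visible, or None."""
    hits = [d for d, img in sorted(dated_photos) if has_problem(img)]
    return hits[0] if hits else None

def severity_based_onset(severity, today, rate_per_year=1.0):
    """Assume severity (0..5 scale) grows roughly linearly at rate_per_year."""
    return today - timedelta(days=int(365 * severity / rate_per_year))

def combined_onset(img_date, sev_date):
    """Average the two estimates; fall back to the severity estimate if no photo hit."""
    if img_date is None:
        return sev_date
    earlier, later = min(img_date, sev_date), max(img_date, sev_date)
    return earlier + (later - earlier) / 2

if __name__ == "__main__":
    photos = [(date(2019, 5, 1), "a"), (date(2020, 3, 1), "b")]
    est = combined_onset(
        image_based_onset(photos, has_problem=lambda img: img == "b"),
        severity_based_onset(severity=1.5, today=date(2020, 11, 30)),
    )
    print("estimated onset:", est)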
The system further comprises a user position acquisition module and a secondary acquisition module. The path planning module plans a path from the acquisition robot to the user's position according to the position acquired by the user position acquisition module. The secondary acquisition module is used for reminding the user to perform a specified action or an improvement suggestion and for controlling the acquisition robot to acquire images of the user; it is also used for verifying the cause and the improvement suggestion according to the acquisition result and adjusting the improvement suggestion according to the verification result. Since the same posture problem may have a variety of causes and people's constitutions differ, generic improvement advice is not suitable for everyone. Through the technical scheme of this embodiment, the time at which the user's posture problem arose can be determined once the problem is found, the cause of the problem can then be identified, targeted improvement suggestions can be proposed according to that cause, and their validity can be verified by controlling the acquisition robot to perform a secondary image acquisition, ensuring that the improvement suggestions are suitable for the current user.
The above are merely examples of the present invention, and the present invention is not limited to the field of this embodiment. Common general knowledge of the specific structures and characteristics known in the schemes is not described here in detail; those skilled in the art know the common technical knowledge in this technical field before the application date or the priority date, can access all of the prior art in this field, and have the ability to apply conventional experimental means of that time, so that, in light of the teaching provided in this application, they can combine their own abilities to perfect and implement the scheme, and some typical known structures or known methods should not become obstacles to their implementation of the present invention. It should be noted that those skilled in the art can make several changes and modifications without departing from the structure of the present invention; these should also be regarded as falling within the protection scope of the present invention and will not affect the effect of the implementation of the invention or the practicability of the patent. The scope of protection claimed in this application shall be determined by the content of the claims, and the description of specific embodiments in the specification may be used to interpret the content of the claims.

Claims (10)

1. An acquisition robot linkage control system applied to 3D posture detection and analysis, characterized in that: the system comprises a control host, an input device, a display and a plurality of acquisition robots, wherein the control host is in data connection with the acquisition robots and the display;
the acquisition robot is used for acquiring walking posture data of a user;
the control host includes:
the scene scanning module is used for detecting the environmental data of the current scene;
the scene construction module is used for constructing a detection scene according to the environment data;
the user walking route acquisition module is used for acquiring a user walking route input by the input device;
the path planning module is used for planning the walking paths and acquisition directions of the acquisition robots according to the environmental data and the user walking route;
the robot control module is used for controlling the position and the direction of the acquisition robot according to the planned walking path and the acquisition direction;
the time analysis module is used for generating the time at which the user's posture problem arose;
the reason analysis module is used for obtaining the reason closest to the user condition according to the user posture problem;
the adjustment suggestion module is used for generating an improvement suggestion according to the posture problem of the user and the corresponding reason;
the system comprises a secondary acquisition module, a path planning module and a control module, wherein the secondary acquisition module is used for planning a path from an acquisition robot to a user position according to the user position, reminding the user to execute a specified action or an improvement suggestion and controlling the acquisition robot to acquire an image of the user, and the secondary acquisition module is also used for verifying a reason and the improvement suggestion according to an acquisition result;
the display is used for displaying the detection scene, the user walking route and the walking paths of the acquisition robots.
2. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 1, characterized in that: the control host further comprises a scene correction module for the user to correct the environmental data of the detection scene.
3. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 2, characterized in that: the path planning module comprises:
the grouping module is used for dividing the acquisition robots into different groups according to the number of users to be acquired and the user information, each group of acquisition robots being responsible for acquiring the walking posture data of one user;
the angle dividing module is used for dividing the angle range that each acquisition robot is responsible for acquiring, according to the number of robots in each group;
the position dividing module is used for determining the acquisition position and acquisition direction of each robot according to the user's walking route and the angle range that each acquisition robot is responsible for acquiring;
and the path generation module is used for generating a walking path for each robot according to its current position and its assigned acquisition position.
4. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 3, characterized in that: the scene scanning module comprises a ranging module, the environmental data include a scene size, and the ranging module is used for detecting the scene size of the current scene.
5. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 4, characterized in that: the scene scanning module further comprises an obstacle scanning module for acquiring obstacles in the scene; the path generation module further comprises an obstacle avoidance module for adjusting the walking path according to the obstacles in the scene.
6. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 5, characterized in that: the acquisition robot comprises:
the travelling mechanism comprises travelling wheels, a driving motor and a steering motor, the driving motor is in power connection with the travelling wheels, the driving motor is used for driving the travelling wheels to rotate, the steering motor is in power connection with the travelling wheels, and the steering motor is used for driving the travelling wheels to steer;
the acquisition mechanism comprises a holder, and a camera is arranged on the holder;
the main control module, wherein the main control module comprises a controller and a wireless communication module; the controller is electrically connected with the wireless communication module, the camera, the driving motor and the steering motor; the controller is used for acquiring image data through the camera, the acquired data is sent to the control host through the wireless communication module, and the controller is further used for receiving control commands through the wireless communication module and controlling the camera, the driving motor and the steering motor to switch on and off.
7. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 6, characterized in that: the acquisition mechanism comprises a lifting mechanism, the lifting mechanism comprises a lifting motor, the holder is arranged on the lifting mechanism, the lifting motor is electrically connected with the controller, and the controller is also used for controlling the lifting motor to switch on and off.
8. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 7, characterized in that: the system further comprises a real-time adjustment module for controlling the acquisition mechanism of each acquisition robot to adjust the height and angle of its camera according to the pictures acquired by that robot's camera.
9. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 8, characterized in that: at least three acquisition robots are provided.
10. The acquisition robot linkage control system applied to 3D posture detection and analysis according to claim 9, characterized in that: the camera is a 3D structured-light camera.
CN202011377769.7A 2020-11-30 2020-11-30 Acquisition robot linkage control system applied to 3D posture detection and analysis Active CN112656402B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011377769.7A CN112656402B (en) 2020-11-30 2020-11-30 Acquisition robot linkage control system applied to 3D posture detection and analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011377769.7A CN112656402B (en) 2020-11-30 2020-11-30 Acquisition robot linkage control system applied to 3D posture detection and analysis

Publications (2)

Publication Number Publication Date
CN112656402A CN112656402A (en) 2021-04-16
CN112656402B (en) 2023-01-13

Family

ID=75403036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011377769.7A Active CN112656402B (en) 2020-11-30 2020-11-30 Acquisition robot linkage control system applied to 3D posture detection and analysis

Country Status (1)

Country Link
CN (1) CN112656402B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008142841A (en) * 2006-12-11 2008-06-26 Toyota Motor Corp Mobile robot
CN102499692A (en) * 2011-11-30 2012-06-20 沈阳工业大学 Ultrasonic gait detection device and method
CN105335696A (en) * 2015-08-26 2016-02-17 湖南信息职业技术学院 3D abnormal gait behavior detection and identification based intelligent elderly assistance robot and realization method
CN106020201A (en) * 2016-07-13 2016-10-12 广东奥讯智能设备技术有限公司 Mobile robot 3D navigation and positioning system and navigation and positioning method
CN106250867A (en) * 2016-08-12 2016-12-21 南京华捷艾米软件科技有限公司 A kind of skeleton based on depth data follows the tracks of the implementation method of system
CN106781165A (en) * 2016-11-30 2017-05-31 华中科技大学 A kind of indoor multi-cam intelligent linkage supervising device based on depth sensing
CN108596306A (en) * 2018-08-03 2018-09-28 四川民工加网络科技有限公司 Worker's information acquisition system
CN109190704A (en) * 2018-09-06 2019-01-11 中国科学院深圳先进技术研究院 The method and robot of detection of obstacles
CN109426248A (en) * 2017-08-25 2019-03-05 科沃斯机器人股份有限公司 The method of self-movement robot and its traveling method, display distribution of obstacles
CN109709947A (en) * 2017-10-26 2019-05-03 株式会社日立大厦系统 Robot management system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007007803A (en) * 2005-07-01 2007-01-18 Toyota Motor Corp Robot and control method thereof
CN102895092A (en) * 2011-12-13 2013-01-30 冷春涛 Multi-sensor integration based three-dimensional environment identifying system for walker aid robot
WO2016178523A1 (en) * 2015-05-07 2016-11-10 Samsung Electronics Co., Ltd. Method of providing information according to gait posture and electronic device for same
CN104986241B (en) * 2015-06-29 2018-04-24 山东大学(威海) The gait planning method of quadruped robot
US10383552B2 (en) * 2016-04-26 2019-08-20 Toyota Jidosha Kabushiki Kaisha Gait analysis medical assistance robot
CN106525049B (en) * 2016-11-08 2019-06-28 山东大学 A kind of quadruped robot ontology posture tracking method based on computer vision
CN106726340B (en) * 2016-12-05 2019-06-04 北京理工大学 A kind of human body lower limbs recovery exercising robot of intelligent and safe protection
CN107358250B (en) * 2017-06-07 2019-11-22 清华大学 Body gait recognition methods and system based on the fusion of two waveband radar micro-doppler
US10722149B2 (en) * 2017-07-26 2020-07-28 Victoria University Real-time biofeedback rehabilitation tool guiding and illustrating foot placement for gait training
CN107423729B (en) * 2017-09-20 2023-12-19 湖南师范大学 Remote brain-like three-dimensional gait recognition system oriented to complex visual scene and implementation method
CN110833418A (en) * 2018-08-15 2020-02-25 上海脉沃医疗科技有限公司 Gait collecting and analyzing device
CN111067543A (en) * 2019-12-31 2020-04-28 中航创世机器人(西安)有限公司 Man-machine interaction system of horizontal stepping type rehabilitation training robot
CN111968713B (en) * 2020-08-05 2023-10-27 七海行(深圳)科技有限公司 Data acquisition method and inspection device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008142841A (en) * 2006-12-11 2008-06-26 Toyota Motor Corp Mobile robot
CN102499692A (en) * 2011-11-30 2012-06-20 沈阳工业大学 Ultrasonic gait detection device and method
CN105335696A (en) * 2015-08-26 2016-02-17 湖南信息职业技术学院 3D abnormal gait behavior detection and identification based intelligent elderly assistance robot and realization method
CN106020201A (en) * 2016-07-13 2016-10-12 广东奥讯智能设备技术有限公司 Mobile robot 3D navigation and positioning system and navigation and positioning method
CN106250867A (en) * 2016-08-12 2016-12-21 南京华捷艾米软件科技有限公司 A kind of skeleton based on depth data follows the tracks of the implementation method of system
CN106781165A (en) * 2016-11-30 2017-05-31 华中科技大学 A kind of indoor multi-cam intelligent linkage supervising device based on depth sensing
CN109426248A (en) * 2017-08-25 2019-03-05 科沃斯机器人股份有限公司 The method of self-movement robot and its traveling method, display distribution of obstacles
CN109709947A (en) * 2017-10-26 2019-05-03 株式会社日立大厦系统 Robot management system
CN108596306A (en) * 2018-08-03 2018-09-28 四川民工加网络科技有限公司 Worker's information acquisition system
CN109190704A (en) * 2018-09-06 2019-01-11 中国科学院深圳先进技术研究院 The method and robot of detection of obstacles

Also Published As

Publication number Publication date
CN112656402A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
US11929173B2 (en) Learning apparatus, rehabilitation support system, method, program, and trained model
US10568502B2 (en) Visual disability detection system using virtual reality
CN109758157B (en) Gait rehabilitation training evaluation method and system based on augmented reality
CN102567638B (en) A kind of interactive upper limb healing system based on microsensor
CN103679203B (en) Robot system and method for detecting human face and recognizing emotion
CN111816309A (en) Rehabilitation training prescription self-adaptive recommendation method and system based on deep reinforcement learning
CN107397658B (en) Multi-scale full-convolution network and visual blind guiding method and device
CN109806113A (en) A kind of ward ICU horizontal lower limb rehabilitation intelligent interaction robot group system based on ad hoc network navigation
CN111652078A (en) Yoga action guidance system and method based on computer vision
CN112494034B (en) Data processing and analyzing system and method based on 3D posture detection and analysis
CN110533763B (en) Intelligent orthopedic external fixation system based on cloud platform
KR20190140920A (en) Articulated Arm for Analyzing Anatomical Objects Using Deep Learning Networks
CN113101134A (en) Children lower limb movement auxiliary rehabilitation system based on power exoskeleton
CN110674792A (en) Construction progress monitoring device and method based on neural network
CN113241150A (en) Rehabilitation training evaluation method and system in mixed reality environment
CN112656402B (en) Acquisition robot linkage control system applied to 3D posture detection and analysis
CN115429516A (en) System and method for controlling scoliosis
CN213703458U (en) Collection robot
CN112137846A (en) Learning system, walking training system, method, program, and learning completion model
CN107967941A (en) A kind of unmanned plane health monitoring method and system based on intelligent vision reconstruct
CN113768471A (en) Parkinson disease auxiliary diagnosis system based on gait analysis
CN106419957A (en) Auxiliary system of ultrasonic scanning device
US20210005106A1 (en) Motion support system, action support method, program, learning apparatus, trained model, and learning method
Scheffer et al. Inertial motion capture in conjunction with an artificial neural network can differentiate the gait patterns of hemiparetic stroke patients compared with able-bodied counterparts
JP2017191350A (en) Driving skill evaluation device, server device, driving skill evaluation system, program and driving skill evaluation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant