CN114770596B - Medical behavior acquisition robot based on active vision and hearing and control method - Google Patents

Medical behavior acquisition robot based on active vision and hearing and control method

Info

Publication number
CN114770596B
CN114770596B (granted publication of application CN202210457976.6A)
Authority
CN
China
Prior art keywords
module
image
active
visual
hearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210457976.6A
Other languages
Chinese (zh)
Other versions
CN114770596A (en)
Inventor
张军
张宇威
吴菁岳
赵凝
肖毅
宋爱国
李欣
李伟峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202210457976.6A priority Critical patent/CN114770596B/en
Publication of CN114770596A publication Critical patent/CN114770596A/en
Application granted granted Critical
Publication of CN114770596B publication Critical patent/CN114770596B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J18/00 Arms
    • B25J18/06 Arms flexible
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1607 Calculation of inertia, jacobian matrixes and inverses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a medical behavior acquisition robot based on active vision and hearing and a control method thereof. The robot comprises a flexible mechanical arm, an active vision module, an active hearing module, a driving module, a control processing module and a power module. The flexible mechanical arm consists of a base, a steering and pitching adjusting mechanism, a bending and stretching mechanism and a vibration-suppressing tensioning device; the bending and stretching mechanism consists of a motor seat, a push rod motor, a steel sheet connecting piece, an elastic steel sheet, a plurality of knuckle blocks, a plurality of rotating shafts and a tail end knuckle block; the vibration-suppressing tensioning device consists of a plurality of electromagnet units; the active vision module consists of a camera base, an industrial camera and an adjustable light-supplementing lamp group; and the active hearing module consists of a microphone array.

Description

Medical behavior acquisition robot based on active vision and hearing and control method
Technical Field
The invention relates to the intersecting fields of intelligent medical treatment, artificial intelligence, robotics, instrument science, control science, computer science, sensor technology and human-machine interaction technology, and in particular to a medical behavior acquisition robot based on active vision and hearing and a control method.
Background
The traditional medical workflow mainly comprises identification, diagnosis, treatment and rehabilitation, and each link depends chiefly on professional doctors. On the one hand, because medical staff work long hours under heavy loads, prolonged repetitive work easily causes fatigue and carries risks of errors and omissions; on the other hand, the relative scarcity of medical resources in some regions aggravates the pressure on healthcare. These problems affect the harmonious development of society and pose challenges to the monitoring and quality control of medical behavior. With the development of intelligent hospitals and intelligent medical treatment, there is an urgent need to monitor and manage medical behavior with intelligent robots and artificial intelligence technology.
In recent years, medical robots have been widely used in fields such as identification and diagnosis of medical behaviors, remote consultation, and whole-course data acquisition in operating rooms. Regarding vision-based medical behavior recognition and diagnosis, Chinese patent 201910243144.2 proposes a medical monitoring system including a method for displaying and navigating clinical decision support procedures; Chinese patent 201710468651.7 proposes a multi-modal intelligent analysis method and system in which a condition is described by multi-modal data (text data, time-series signal data and visual data), and designs condition diagnosis navigation and diagnosis decisions based on that data. These inventions and existing medical behavior data acquisition systems relieve strained medical resources to a certain extent, but several problems remain: the shooting angle is fixed because the camera is fixed, so only a single target can be acquired; image acquisition places high demands on ambient light; and when the target is occluded, the camera cannot be adjusted in time to find a suitable viewing angle. In addition, during medical actions in complex medical scenes, voice interference and volume changes in the interactions among multiple doctors and patients pose challenges to speech information collection.
To address the problems of the prior patent schemes, this patent proposes a medical behavior acquisition system and control method based on active vision and hearing, which realizes information acquisition of medical behaviors in complex medical scenes by actively tracking doctors or patients through vision and hearing, actively adjusting the illumination intensity, and actively identifying and tracking speakers. The system is based on a non-contact visual scheme and is easily accepted by patients; its movable and adjustable design suits patients with limited mobility and improves acquisition comfort; automatic acquisition improves acquisition efficiency; active pose adjustment and ambient light compensation improve acquisition quality; and active target tracking, occlusion avoidance and speech recognition enable blind-spot-free acquisition of the whole medical-behavior workflow.
Disclosure of Invention
The technical problems to be solved are: overcoming the inconvenience of contact-based medical behavior data acquisition; solving the problems that, during medical behavior image acquisition, the camera viewing angle cannot be actively adjusted, the ambient light cannot be actively changed, and targets become occluded; and solving the lack of selectivity when acquiring audio data in noisy, multi-sound-source environments. To this end, a medical behavior acquisition robot and control method based on active vision and hearing are designed, which track doctors or patients through active vision and hearing, actively adjust the illumination intensity, and actively identify and track speakers, thereby realizing information acquisition of medical behaviors in complex medical scenes.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
the medical behavior acquisition robot based on active vision and hearing comprises a flexible mechanical arm, a task input module, an active vision module, an active hearing module, a driving module, a control processing module and a power module, wherein the task input module is used for appointing an acquisition task, the active vision module is used for providing a vision feedback signal of a target and acquiring image data of medical behavior, the active hearing module is used for providing an audio feedback signal of the target, the control processing module receives the vision and audio feedback signal, the driving module receives an output instruction of the control processing module and controls the flexible mechanical arm to adjust the pose so as to realize tracking control of the target, and the power module is used for providing power for the active vision module, the active hearing module, the control processing module and the flexible mechanical arm;
the flexible mechanical arm consists of a base, a steering and pitching adjusting mechanism, a bending and stretching mechanism and a vibration suppression tensioning device;
the base is cylindrical, the steering and pitching adjusting mechanism comprises a steering motor, a connecting piece and a pitching motor, the steering motor is arranged on the upper surface in the cylindrical barrel of the base, a steering motor shaft extends out of the upper surface of the cylindrical barrel, the bottom of the connecting piece is fixed on an output shaft of the steering motor, the pitching motor is fixed on the connecting piece, and the output shaft of the pitching motor is perpendicular to the output shaft of the steering motor;
the bending and stretching mechanism comprises a motor seat, a push rod motor, a steel sheet connecting piece, an elastic steel sheet, knuckle blocks, a rotating shaft and tail end knuckle blocks, wherein the motor seat is fixed on an output shaft of a pitching motor, the push rod motor is arranged on the motor seat, the steel sheet connecting piece is arranged on the output shaft of the push rod motor, the rear end of the elastic steel sheet is fixed on the steel sheet connecting piece, each knuckle block is sequentially connected with the rotating pair of the motor seat through the rotating shaft to form a bending and stretching mechanism body, the knuckle block at the rearmost end is connected with the rotating pair of the motor seat, the knuckle block at the foremost end is connected with the rotating pair of the tail end knuckle block, a groove-shaped through hole is formed in the side surface above the rotating shaft along the direction of the bending and stretching mechanism body, a circular groove is formed in one side surface of each knuckle block, and the front end of the elastic steel sheet penetrates through the groove-shaped through hole on the corresponding knuckle block and is fixed in the groove-shaped through hole of the tail end knuckle block;
the vibration suppressing tensioning device is composed of at least 2 electromagnet units, and the electromagnet units are respectively arranged in the side circular grooves of the corresponding knuckle blocks.
As a further improvement of the medical behavior acquisition robot of the present invention,
the active vision module consists of a camera base, an industrial camera and an adjustable light supplementing lamp set, wherein the camera base is arranged on a tail end section block, is in a boss shape and is divided into an upper layer and a lower layer; the industrial camera is arranged on the upper layer of the camera base, the industrial camera consists of an image sensor assembly and a lens, the light supplementing lamp group consists of at least 2 LED lamps, and the light supplementing lamp group is annularly distributed on the upper layer of the camera base;
as a further improvement of the medical behavior acquisition robot of the present invention,
the active hearing module consists of a microphone array, wherein the microphone array consists of eight microphone units, and the microphone array is annularly distributed on the lower layer of the camera base.
As a further improvement of the medical behavior acquisition robot of the present invention,
the power module is arranged below the base and is responsible for supplying power to the driving module, the motor in the base, the steering motor of the steering and pitching adjusting mechanism, the pitching motor, the push rod motor, the active visual module, the active hearing module and the control processing module.
As a further improvement of the medical behavior acquisition robot of the present invention,
the control processing module comprises a visual/auditory controller and a joint controller, and is fixed below the base (6-1).
The invention provides a control method of a medical behavior acquisition robot based on active vision and hearing, which comprises a cooperative control method of active vision and active hearing, and comprises the following steps:
s0: task input;
s1: selecting an operating mode;
s2-1-1: the industrial camera collects image data;
s2-1-2: image preprocessing, namely median filtering, gray-level threshold segmentation, and edge detection with a Sobel operator; the preprocessed image retains the main feature information with noise removed;
s2-1-3: extracting image features, extracting features by using Haar feature operators to obtain fed-back feature vectors, and then calculating differences between the fed-back feature vectors and expected features;
s2-1-4: acquiring expected image features;
s2-2-1: the microphone array collects audio data;
s2-2-2: performing filtering processing on the audio data by adopting an AEC filtering algorithm based on deep learning;
s2-2-3: performing voiceprint recognition by adopting an I-Vector model based on a deep neural network, and locking an acquisition target;
s2-2-4: obtaining the sound source position of the audio signal by utilizing the arrival time delay difference;
s3: calculating the difference between the desired features and the extracted features;
s4: outputting the sound source position and the feature difference to the visual/auditory controller, which outputs an end velocity vector v_f and the type of data to acquire;
s5: judging whether the acquired data is image data;
s6: if image data, adjusting the brightness of the light-supplementing lamp group;
s7: judging whether the data are compliant;
s8: if not, the joint controller controls the joint angles to adjust the viewing angle of the industrial camera and the pose of the microphone array, and the above steps are repeated;
s9: if compliant, the task ends.
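The preprocessing chain of step S2-1-2 can be sketched with plain numpy. All function names, the 3x3 kernel sizes and the threshold value below are illustrative assumptions rather than details taken from the patent; a production system would more likely use OpenCV equivalents.

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter: replaces each pixel with the median of its neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def threshold(img, t):
    """Gray-level threshold segmentation: binary mask of pixels above t."""
    return (img > t).astype(np.uint8)

def sobel_edges(img):
    """Edge magnitude via the Sobel operator (horizontal and vertical kernels)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)
```

Applied in the order of S2-1-2 (median filter, then segmentation, then Sobel edges), this removes impulsive noise before the gradient operator amplifies it.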
As a further improvement of the control method of the present invention,
the visual/audio controller fuses the image information and the audio information in the step S4, and outputs a control command, wherein the purpose of the visual feedback loop is to minimize an image error, which is defined as:
e(t)=s(m(t),a)-s*
where m(t) is the Haar feature vector and a is a set of potentially unknown system parameters; s maps the extracted feature m(t) into the space of the target feature s* for comparison with s*, and a PD control algorithm drives the error down, the system reaching a stable state when the error falls within the allowable range. The core of the method is establishing the mapping between image changes and mechanical arm motion, i.e. the Jacobian matrix of the system;
the differential mapping from the robot joint space to the image feature space is:
f' = J · θ'
where θ' is the joint velocity, f' is the image feature change velocity, and J is the Jacobian matrix of the system:
J = J_r · J_q
where J_r is the image Jacobian matrix, representing the relationship between image feature changes and end-effector velocity changes, and J_q is the mechanical arm Jacobian matrix, reflecting the relationship between end-effector velocity and joint velocity. For the mechanical arm Jacobian J_q, the rigid part of the arm is modeled by the D-H method to obtain its D-H parameters; the flexible bending-and-stretching section is modeled under the piecewise-constant-curvature assumption, and a spatial mapping transformation yields the homogeneous transformation between the push-rod extension and the position and posture of the flexible arm tip. The homogeneous transformation matrix is:
0T3 = 0T1 · 1T2 · 2T3
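The chain 0T3 = 0T1 · 1T2 · 2T3 can be illustrated numerically. The sketch below is a minimal model built on stated assumptions: the two rigid joints use standard D-H transforms with made-up link offsets (0.10 m and 0.05 m), and the bending section is reduced to a single planar constant-curvature arc, a simplification of the patent's piecewise-constant-curvature modeling.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform from frame i-1 to frame i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(q1, q2, s, kappa):
    """End pose 0T3 = 0T1 @ 1T2 @ 2T3.
    q1: steering angle, q2: pitch angle (rigid D-H links, offsets assumed);
    the flexible section is a planar arc of length s and curvature kappa."""
    T01 = dh_transform(q1, 0.10, 0.0, np.pi / 2)   # steering joint (assumed offset)
    T12 = dh_transform(q2, 0.0, 0.05, 0.0)         # pitch joint (assumed length)
    phi = kappa * s                                 # total bending angle of the arc
    if abs(kappa) < 1e-9:                           # straight-arm limit
        x, z = s, 0.0
    else:                                           # constant-curvature arc tip
        x, z = np.sin(phi) / kappa, (1 - np.cos(phi)) / kappa
    c, sn = np.cos(phi), np.sin(phi)
    T23 = np.array([[c,   0.0,  sn,   x],           # rotation by phi about y, plus arc tip
                    [0.0, 1.0, 0.0, 0.0],
                    [-sn, 0.0,   c,   z],
                    [0.0, 0.0, 0.0, 1.0]])
    return T01 @ T12 @ T23
```

With zero joint angles and zero curvature the tip simply lies straight ahead of the base, which gives a quick sanity check on the composition order.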
the forward and reverse movements of the flexible mechanical arm and the jacobian matrix of the mechanical arm are solved, and the jacobian matrix of the image is calculated:
firstly modeling a camera based on a pinhole imaging principle, then introducing a change matrix of world coordinates and camera coordinates, calculating to obtain an image jacobian matrix, obtaining the image jacobian matrix and a mechanical arm jacobian matrix, calculating to obtain a system jacobian matrix, and then calculating a real-time image jacobian matrix by adopting a jacobian matrix on-line estimation method based on Kalman filtering to obtain a visual velocity vector v c
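As a rough illustration of how v_c falls out of the image Jacobian: the sketch below uses the textbook point-feature interaction matrix and a fixed-gain IBVS law v_c = -λ · L⁺ · e in place of the patent's pinhole-derived, Kalman-estimated Jacobian; the function names, gain and depths are assumptions for the example only.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian (interaction matrix) of one normalized point feature
    (x, y) at depth Z, relating the feature velocity to the camera twist."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def visual_velocity(features, desired, depths, lam=0.5):
    """Classical IBVS law v_c = -lam * L^+ * e with e = s - s*.
    features/desired: (N, 2) arrays of normalized image points."""
    e = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e  # 6-vector camera twist
```

When the extracted features coincide with the desired ones, the commanded camera twist is zero, matching the controller's goal of driving e(t) to zero.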
The audio feedback loop provides the sound source position. The microphone units on the array synchronously collect sound signals; voiceprint recognition is first performed with an I-Vector model based on a deep neural network, matching against the registrants in a registry to lock the acquisition target; the sound source position is then obtained from the time-delay differences of arrival and input to the visual/auditory controller to obtain the auditory velocity vector v_a.
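The arrival-time-delay step can be sketched for a single microphone pair. Plain cross-correlation and a far-field model are assumptions here; the patent does not specify the estimator, and the eight-element array would combine several such pairs (a robust system might use GCC-PHAT instead).

```python
import numpy as np

SOUND_SPEED = 343.0  # speed of sound in air, m/s (assumed)

def tdoa(sig_a, sig_b, fs):
    """Time delay of arrival between two microphone signals via
    cross-correlation; a positive delay means sig_b lags sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

def doa_angle(delay, mic_spacing):
    """Far-field direction of arrival (radians) from one mic pair."""
    s = np.clip(delay * SOUND_SPEED / mic_spacing, -1.0, 1.0)
    return np.arcsin(s)
```

Feeding the estimated angle (or, with more pairs, a triangulated position) to the visual/auditory controller is what lets the arm swing the camera toward the speaker.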
The output result equation of the vision/hearing controller is as follows:
v_f = a · v_c + (1 - a) · v_a
a = f(Δm, l)
where v_f is the end velocity vector output by the visual/auditory controller, a weighted sum of the visual velocity vector v_c and the auditory velocity vector v_a; f(Δm, l) is a weight function, Δm is the image feature difference, and l is a parameter representing the ambient light;
the visual/auditory controller outputs v_f to the joint controller, which controls the joint rotations and thereby adjusts the pose of the industrial camera and microphone array at the end of the mechanical arm.
Compared with the prior art, the invention has the beneficial effects that:
the invention can collect medical behaviors in a non-contact way based on vision, thereby improving the collected comfort level of patients; the flexible mechanical arm is adopted, so that the movement is convenient, and the safety is good; the tracking control method based on active vision and hearing can realize the follow-up tracking when the target is lost, and avoid the blind area during acquisition; the active adjustment of the light supplementing lamp group can make up for the deficiency of ambient light and ensure the quality of the acquired image; and the pose of the microphone array is actively regulated, so that the quality of collected audio is ensured.
Drawings
FIG. 1 is a schematic diagram of a medical behavior acquisition robot system of the present invention;
FIG. 2 is a perspective view of the overall mechanism of the medical behavior acquisition robot of the present invention;
FIG. 3 is a perspective view of a base of the medical behavior acquisition robot of the present invention;
FIG. 4 is a bottom view of the base of the medical action collection robot of the present invention;
FIG. 5 is a perspective view of a section of a knuckle of the medical action collection robot of the present invention;
FIG. 6 is a perspective view of a flexible extension mechanism of the medical behavior acquisition robot of the present invention;
FIG. 7 is a state diagram of the articulation of the medical behavior acquisition robot of the present invention;
FIG. 8 is a perspective view of an active vision and active hearing module of the medical behavior acquisition robot of the present invention;
FIG. 9 is a flowchart of an active tracking control method algorithm of the medical behavior acquisition robot of the present invention;
FIG. 10 is a diagram of a working scenario of the medical behavior acquisition robot of the present invention;
Reference numerals:
1. task input module; 2. active vision module; 3. active hearing module; 4. control processing module; 5. driving module; 6. flexible mechanical arm; 7. power module; 8. target; 9. task end; 2-1. industrial camera; 2-2. light-supplementing lamp group; 2-3. camera base; 3-1. microphone array; 6-1. base; 6-2. steering motor; 6-3. connecting piece; 6-4. push rod motor; 6-5. pitching motor; 6-6. steel sheet connecting piece; 6-7. motor seat; 6-8. elastic steel sheet; 6-9. knuckle block; 6-10. rotating shaft; 6-11. tail end knuckle block; 6-12. groove-shaped through hole; 6-13. circular groove; s_1. flexible arm straightened state; s_2. flexible arm bending state; s_3. flexible arm pitch-angle change state; s_4. flexible arm rotation-angle change state.
Detailed Description
The invention is described in further detail below with reference to the attached drawings and detailed description:
examples:
referring to fig. 1, a medical behavior acquisition robot and tracking control method based on active vision and hearing is composed of a task input module 1, an active vision module 2, an active hearing module 3, a control processing module 4, a driving module 5, a flexible mechanical arm 6 and a power module 7, wherein the task input module 1 is used for designating an acquisition task, the active vision module 2 is used for providing a vision feedback signal of a target 8 and collecting image data of medical behavior, the active hearing module 3 is used for providing an audio feedback signal of the target 8, the control processing module 4 is used for receiving the vision and audio feedback signal, the driving module 5 is used for receiving an output instruction of the control processing module 4 and controlling the flexible mechanical arm 6 to adjust the pose so as to realize tracking control of the target 8, the power module 7 is used for providing power for the active vision module 2, the active hearing module 3, the control processing module 4 and the flexible mechanical arm 6, and a task end 9 represents completion of the task;
referring to fig. 2, 3, 4 and 7, the base 6-1 is cylindrical, the steering and pitching adjusting mechanism includes a steering motor 6-2, a connecting piece 6-3 and a pitching motor 6-5, the steering motor 6-2 is installed in the cylindrical of the base, a steering motor shaft extends out of the upper surface of the cylindrical, the rotation of the mechanical arm along the vertical direction can be controlled, the first degree of freedom is a first degree of freedom, the rotating shaft of the steering motor 6-2 rotates to drive the connecting piece 6-3, the pitching motor 6-5, the motor seat 6-7 and the bending and stretching mechanism to rotate, the pitch angle changing state s_3 of the flexible mechanical arm is changed into the rotation angle changing state s_4 of the flexible mechanical arm, the bottom of the connecting piece 6-3 is fixed on the output shaft of the steering motor, and the pitching motor 6-5 is fixed on the connecting piece.
Referring to figs. 1 to 7, the bending and stretching mechanism comprises a motor seat 6-7, a push rod motor 6-4, a steel sheet connecting piece 6-6, an elastic steel sheet 6-8, a plurality of knuckle blocks 6-9, a plurality of rotating shafts 6-10 and a tail end knuckle block 6-11. The motor seat 6-7 is fixed on the output shaft of the pitching motor 6-5, which controls the pitch angle of the mechanical arm, the second degree of freedom; rotation of the pitching motor's output shaft drives the motor seat 6-7 and the bending and stretching mechanism to rotate, changing the flexible arm from the bending state s_2 to the pitch-angle change state s_3. The push rod motor 6-4 is mounted on the motor seat 6-7, the steel sheet connecting piece is mounted on the output shaft of the push rod motor, and the rear end of the elastic steel sheet 6-8 is fixed on the steel sheet connecting piece and placed in the groove-shaped through holes 6-12. The knuckle blocks 6-9 are sequentially connected by revolute pairs through the rotating shafts to form the body of the bending and stretching mechanism; the rearmost knuckle block is connected to the motor seat, and the foremost knuckle block to the tail end knuckle block, by revolute pairs. Shortening the push rod of the push rod motor 6-4 bends the elastic steel sheet, and extending it restores the sheet to its original state; this is the third degree of freedom. Contracting the push rod bends the elastic steel sheet and hence the mechanism, changing the flexible arm from the straightened state s_1 to the bending state s_2.
Referring to the vibration-suppressing tensioning device shown in figs. 6 and 7, a plurality of electromagnet units are respectively arranged in the circular grooves 6-13 on one side of each knuckle block. When the bending and stretching structure oscillates, the electromagnets are energized to attract the elastic steel sheet 6-8 against the circular grooves 6-13, increasing friction so that kinetic energy is converted into frictional heat and vibration is suppressed.
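The suppression principle (electromagnets pressing the steel sheet against the grooves to raise friction damping) can be illustrated with a damped-oscillator toy model; the damping ratios and the oscillation frequency below are assumptions for illustration, not measured values from the device.

```python
import numpy as np

def tip_oscillation(t, zeta, omega=20.0):
    """Tip deflection after a disturbance, modeled as a damped oscillator
    x(t) = exp(-zeta*omega*t) * cos(omega*t). Energizing the electromagnets
    is modeled as raising the effective damping ratio zeta via friction."""
    return np.exp(-zeta * omega * t) * np.cos(omega * t)
```

Comparing a low damping ratio (magnets off) with a higher one (magnets on) shows the residual tip oscillation decaying far faster once the friction path is engaged.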
Referring to fig. 8, the active vision module is composed of a camera base 2-3, an industrial camera 2-1 and an adjustable light supplementing lamp set 2-2, wherein the camera base is installed on a terminal section 6-11, is in a boss shape and is divided into an upper layer and a lower layer; the industrial camera is arranged on the upper layer of the camera base, the industrial camera consists of an image sensor component and a lens, the light supplementing lamp group 2-2 consists of a plurality of LED lamps, and the LED lamps are annularly distributed on the upper layer of the camera base. The active hearing module consists of a microphone array, the microphone array 3-1 consists of eight microphone submodules, and the active hearing module is annularly distributed on the lower layer of the camera base.
Referring to fig. 9, the tracking control method includes the following steps:
s0: task input;
s1: selecting an operating mode;
s2-1-1: the industrial camera collects image data;
s2-1-2: image preprocessing, namely median filtering, gray-level threshold segmentation, and edge detection with a Sobel operator; the preprocessed image retains the main feature information with noise removed;
s2-1-3: extracting image features, extracting features by using Haar feature operators to obtain fed-back feature vectors, and then calculating differences between the fed-back feature vectors and expected features;
s2-1-4: acquiring expected image features;
s2-2-1: the microphone array collects audio data;
s2-2-2: performing filtering processing on the audio data by adopting an AEC filtering algorithm based on deep learning;
s2-2-3: performing voiceprint recognition by adopting an I-Vector model based on a deep neural network, and locking an acquisition target;
s2-2-4: obtaining the sound source position of the audio signal by utilizing the arrival time delay difference;
s3: calculating the difference between the desired features and the extracted features;
s4: outputting the sound source position and the feature difference to the visual/auditory controller, which outputs an end velocity vector v_f and the type of data to acquire;
s5: judging whether the acquired data is image data;
s6: if image data, adjusting the brightness of the light-supplementing lamp group;
s7: judging whether the data are compliant;
s8: if not, the joint controller controls the joint angles to adjust the viewing angle of the industrial camera and the pose of the microphone array, and the above steps are repeated;
s9: if compliant, the task ends.
Referring to fig. 10, in a cardiopulmonary resuscitation scenario, in order to record the compression motion data of a rescuer, the flexible mechanical arm actively adjusts the pose of the industrial camera to find the patient's chest region and an appropriate view for capturing the chest compression motion; when artificial respiration is performed, the industrial camera locates the patient's face region and records the motion data.
The above description is only of the preferred embodiment of the present invention, and is not intended to limit the present invention in any other way, but is intended to cover any modifications or equivalent variations according to the technical spirit of the present invention, which fall within the scope of the present invention as defined by the appended claims.

Claims (6)

1. Medical behavior acquisition robot based on initiative vision and hearing, including flexible arm (6), task input module (1), initiative vision module (2), initiative hearing module (3), drive module (5), control processing module (4) and power module (7), its characterized in that: the task input module (1) is used for appointing an acquisition task, the active visual module (2) provides visual feedback signals of the target (8) and performs image data acquisition of medical behaviors, the active auditory module (3) is used for providing audio feedback signals of the target (8) and performing audio data acquisition of the medical behaviors, the control processing module (4) receives the visual and audio feedback signals, the driving module (5) receives output instructions of the control processing module (4) and controls the flexible mechanical arm (6) to perform pose adjustment to realize tracking control of the target (8), and the power module (7) provides power for the active visual module (2), the active auditory module (3), the control processing module (4) and the flexible mechanical arm (6);
the flexible mechanical arm (6) consists of a base (6-1), a steering and pitching adjusting mechanism, a bending and stretching mechanism and a vibration suppressing and tensioning device;
the base (6-1) is cylindrical, the steering and pitching adjusting mechanism comprises a steering motor (6-2), a connecting piece (6-3) and a pitching motor (6-5), the steering motor (6-2) is arranged on the upper surface in the cylindrical barrel of the base (6-1), an output shaft of the steering motor (6-2) extends out of the upper surface of the cylindrical barrel, the bottom of the connecting piece (6-3) is fixed on the output shaft of the steering motor (6-2), the pitching motor (6-5) is fixed on the connecting piece (6-3), and the output shaft of the pitching motor (6-5) is perpendicular to the output shaft of the steering motor (6-2);
the bending and stretching mechanism comprises a motor seat (6-7), a push rod motor (6-4), a steel sheet connecting piece (6-6), an elastic steel sheet (6-8), knuckle blocks (6-9), rotating shafts (6-10) and a tail end knuckle block (6-11); the motor seat (6-7) is fixed on the output shaft of the pitching motor (6-5); the push rod motor (6-4) is arranged on the motor seat (6-7); the steel sheet connecting piece (6-6) is arranged on the output shaft of the push rod motor (6-4); the rear end of the elastic steel sheet (6-8) is fixed on the steel sheet connecting piece (6-6); the knuckle blocks (6-9) are sequentially connected by revolute pairs through the rotating shafts (6-10) to form the body of the bending and stretching mechanism, the rearmost knuckle block (6-9) being connected to the motor seat (6-7) by a revolute pair and the foremost knuckle block (6-9) being connected to the tail end knuckle block (6-11) by a revolute pair; a groove-shaped through hole (6-12) is formed in a single side face of each knuckle block (6-9) along the direction of the rotating shaft; the front end of the elastic steel sheet (6-8) passes through the groove-shaped through holes (6-12) of the corresponding knuckle blocks (6-9) and is fixed in the groove-shaped through hole (6-12) of the tail end knuckle block (6-11);
the vibration suppressing tensioning device consists of at least 2 electromagnet units which are respectively arranged in side circular grooves (6-13) of the corresponding knuckle blocks (6-9);
the control processing module (4) comprises a visual/auditory controller and a joint controller, and is fixed below the base (6-1).
2. The active vision and hearing based medical behavior acquisition robot of claim 1, wherein:
the active vision module (2) consists of a camera base (2-3), an industrial camera (2-1) and an adjustable light supplementing lamp set (2-2); the camera base (2-3) is arranged on the tail end knuckle block (6-11), is boss-shaped and is divided into an upper layer and a lower layer; the industrial camera (2-1), which consists of an image sensor assembly and a lens, is arranged on the upper layer of the camera base (2-3); the light supplementing lamp set (2-2) consists of at least 2 LED lamps distributed annularly on the upper layer of the camera base (2-3).
3. The active vision and hearing based medical behavior acquisition robot of claim 2, wherein:
the active hearing module (3) consists of a microphone array, wherein the microphone array consists of eight microphone units, and the microphone array is annularly distributed on the lower layer of the camera base (2-3).
4. The active vision and hearing based medical behavior acquisition robot of claim 1, wherein:
the power module (7) is arranged below the base (6-1) and is responsible for supplying power to the driving module (5), a steering motor (6-2) of the steering and pitching adjusting mechanism, the pitching motor (6-5), the push rod motor (6-4), the active visual module (2), the active hearing module (3) and the control processing module (4).
5. A control method of the active vision and hearing based medical behavior acquisition robot according to any one of claims 1-4, comprising an active vision and active hearing cooperative control method with the following steps:
s0: task input;
s1: selecting an operating mode;
s2-1-1: the industrial camera collects image data;
s2-1-2: image preprocessing: median filtering, gray threshold segmentation and edge detection with the Sobel operator, so that the preprocessed image retains the main feature information with noise removed;
s2-1-3: image feature extraction: features are extracted with Haar feature operators to obtain the fed-back feature vectors;
s2-1-4: acquiring expected image features;
s2-2-1: the microphone array collects audio data;
s2-2-2: filtering the audio data with a deep-learning-based AEC filtering algorithm;
s2-2-3: performing voiceprint recognition by adopting an I-Vector model based on a deep neural network, and locking an acquisition target;
s2-2-4: obtaining the sound source position of the audio signal by utilizing the arrival time delay difference;
s3: calculating the difference between the expected feature and the extracted feature;
s4: outputting the sound source position and the difference between the expected and extracted features to the visual/auditory controller, which outputs a velocity vector v_f and the type of data to be acquired;
s5: judging whether the acquired data is image data;
s6: if it is image data, adjusting the brightness of the light supplementing lamp set;
s7: judging whether the image data is compliant;
s8: if not compliant, the joint controller adjusts the joint angles to change the viewing angle of the industrial camera and the pose of the microphone array, and the above steps are repeated;
s9: if compliant, the task ends.
6. The control method of the active vision and hearing based medical behavior acquisition robot according to claim 5, wherein:
the visual/auditory controller fuses the image information and the audio information of step S4 and outputs a control command; visual feedback is applied in the output process, the purpose of the visual feedback loop being to minimize the image error, defined as:
e(t) = s(m(t), a) - s*
wherein m(t) is the Haar feature, a is a potentially unknown parameter of the system, and s denotes the PD control mapping, i.e. the extracted feature m(t) is mapped into the space of the target feature s* and compared with s*; when the system error falls within the allowable error range, the system reaches a stable state; the core is to establish the mapping between image changes and mechanical arm motion, i.e. the jacobian matrix of the system;
wherein the differential mapping from the robot joint space to the image feature space is:
f’=J·θ’
where θ' is the joint velocity, f' is the image feature change velocity, and J is the jacobian matrix of the system:
J = J_r · J_q
wherein J_r is the image jacobian matrix, which represents the relationship between image feature changes and end velocity changes, and J_q is the mechanical arm jacobian matrix, which reflects the relationship between the end velocity and the joint velocities; for the jacobian matrix J_q, the rigid part of the mechanical arm is modeled by the D-H method to obtain the D-H parameters, the flexible extension structure is modeled under the piecewise-constant-curvature assumption, and a spatial mapping transformation method yields the homogeneous transformation between the extension of the push rod and the pose of the tail end of the flexible mechanical arm, the homogeneous transformation matrix being:
⁰T₃ = ⁰T₁ · ¹T₂ · ²T₃
the forward and inverse kinematics of the flexible mechanical arm and the jacobian matrix of the mechanical arm are solved, and the image jacobian matrix is calculated:
the camera is first modeled based on the pinhole imaging principle, the transformation matrix between world coordinates and camera coordinates is then introduced, and the image jacobian matrix is calculated; after the image jacobian matrix and the mechanical arm jacobian matrix are obtained, the jacobian matrix of the system is calculated; a jacobian on-line estimation method based on Kalman filtering is then used to compute the real-time image jacobian matrix and obtain the visual velocity vector v_c;
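The Kalman-filtering-based on-line jacobian estimation named above can be sketched as a linear Kalman filter over the stacked entries of J, with the measurement Δf = J·Δθ; the random-walk process model and the noise levels q and r are illustrative assumptions:

```python
import numpy as np

def kalman_jacobian_update(J, P, dtheta, df, q=1e-4, r=1e-2):
    """One Kalman-filter step of on-line image-jacobian estimation.

    The state is the column-stacked jacobian J (m x n), assumed to follow a
    random walk; the measurement model df = J @ dtheta becomes df = H @ x
    with H = kron(dtheta^T, I_m). q and r are illustrative noise levels.
    """
    m, n = J.shape
    x = J.reshape(-1, 1, order="F")                  # vec(J), column-major
    H = np.kron(dtheta.reshape(1, -1), np.eye(m))    # (m, m*n) observation matrix
    P = P + q * np.eye(m * n)                        # predict: random-walk model
    S = H @ P @ H.T + r * np.eye(m)                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (df.reshape(-1, 1) - H @ x)          # correct with measurement
    P = (np.eye(m * n) - K @ H) @ P
    return x.reshape(m, n, order="F"), P
```

Fed with successive joint increments Δθ and observed feature increments Δf, the estimate tracks the locally linear mapping without an explicit camera calibration.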
The purpose of the audio feedback loop is to provide the sound source position. The microphone units of the microphone array synchronously acquire sound signals; first, voiceprint recognition is performed with an I-Vector model based on a deep neural network and a registrant in the registry is searched for, thereby locking the acquisition target; the sound source position is then obtained by the arrival time delay difference method and input to the visual/auditory controller to obtain the auditory velocity vector v_a;
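The arrival time delay difference step can be illustrated for a single microphone pair: the delay is estimated from the peak of the cross-correlation and converted to a direction of arrival under a far-field assumption (the function names and the two-microphone geometry are illustrative, not the patent's eight-unit array):

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the arrival delay (seconds) of sig_b relative to sig_a
    from the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

def arrival_angle(delay, mic_spacing, c=343.0):
    """Far-field direction of arrival (radians) for one microphone pair,
    given the delay difference, microphone spacing (m) and speed of sound."""
    return float(np.arcsin(np.clip(c * delay / mic_spacing, -1.0, 1.0)))
```

With eight units, delays from several pairs would be combined (e.g. by least squares) to localize the source in the plane rather than along a single axis.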
The output equation of the visual/auditory controller is as follows:
v_f = a*v_c + (1-a)*v_a
a = f(Δm, l)
wherein v_f, the end velocity vector output by the visual/auditory controller, is the weighted combination of the visual velocity vector v_c and the auditory velocity vector v_a; f(Δm, l) is the weight function, Δm is the image feature difference, and l is a parameter representing the ambient light;
the visual/auditory controller outputs v_f to the joint controller, which controls the joints to rotate, thereby adjusting the pose of the industrial camera and the microphone array at the tail end of the mechanical arm.
CN202210457976.6A 2022-04-28 2022-04-28 Medical behavior acquisition robot based on active vision and hearing and control method Active CN114770596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210457976.6A CN114770596B (en) 2022-04-28 2022-04-28 Medical behavior acquisition robot based on active vision and hearing and control method


Publications (2)

Publication Number Publication Date
CN114770596A CN114770596A (en) 2022-07-22
CN114770596B true CN114770596B (en) 2023-08-11

Family

ID=82433451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210457976.6A Active CN114770596B (en) 2022-04-28 2022-04-28 Medical behavior acquisition robot based on active vision and hearing and control method

Country Status (1)

Country Link
CN (1) CN114770596B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103029139A (en) * 2013-01-15 2013-04-10 北京航空航天大学 Flexible mechanical arm vibration reduction device and method based on magneto-rheological technology
CN105596005A (en) * 2009-03-26 2016-05-25 直观外科手术操作公司 System for providing visual guidance for steering a tip of an endoscopic device towards one or more landmarks and assisting an operator in endoscopic navigation
CN106695879A (en) * 2016-08-09 2017-05-24 北京动力京工科技有限公司 Ultra-elastic alloy slice mechanical arm
EP3320873A1 (en) * 2015-07-09 2018-05-16 Kawasaki Jukogyo Kabushiki Kaisha Surgical robot
CN108044599A (en) * 2018-01-23 2018-05-18 连雪芳 A kind of examining and repairing mechanical arm device applied to high-intensity magnetic field intense radiation operating mode
CN109866214A (en) * 2017-12-01 2019-06-11 深圳光启合众科技有限公司 Bionic flexible structure and robot with it

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016063348A1 (en) * 2014-10-21 2016-04-28 オリンパス株式会社 Curving mechanism and flexible medical equipment
KR101818400B1 (en) * 2017-08-11 2018-01-15 한양대학교 산학협력단 Magnetic Robot System



Similar Documents

Publication Publication Date Title
CN102791214B (en) Adopt the visual servo without calibration that real-time speed is optimized
Mahmud et al. Interface for human machine interaction for assistant devices: A review
JP2022036255A (en) Systems, methods and computer-readable storage media for controlling aspects of robotic surgical device and viewer adaptive stereoscopic display
Duan et al. Design of a multimodal EEG-based hybrid BCI system with visual servo module
JP2018198750A (en) Medical system, control device for medical support arm, and control method for medical support arm
US20120101508A1 (en) Method and device for controlling/compensating movement of surgical robot
CN106214163B (en) Recovered artifical psychological counseling device of low limbs deformity correction postoperative
JP2023501480A (en) How to control a surgical robot
CN114770596B (en) Medical behavior acquisition robot based on active vision and hearing and control method
CN116236328A (en) Visual-based intelligent artificial limb system capable of realizing natural grabbing
Takanishi et al. Development of an anthropomorphic head-eye robot with two eyes-coordinated head-eye motion and pursuing motion in the depth direction
CN111973406B (en) Follow-up flexible servo traction gait rehabilitation robot system
Jin et al. Human-robot interaction for assisted object grasping by a wearable robotic object manipulation aid for the blind
CN111358659B (en) Robot power-assisted control method and system and lower limb rehabilitation robot
Perez et al. Robotic wheelchair controlled through a vision-based interface
WO2019145907A1 (en) Method aimed at patients with motor disabilities for selecting a command by means of a graphic interface, and corresponding system and computer program product
CN111524592B (en) Intelligent diagnosis robot for skin diseases
Maheswari et al. Voice control and eyeball movement operated wheelchair
CN115120250A (en) Intelligent brain-controlled wheelchair system based on electroencephalogram signals and SLAM control
Yang et al. Head-free, human gaze-driven assistive robotic system for reaching and grasping
CN111228084B (en) Upper limb rehabilitation training device
KR20140039518A (en) Apparatus for robot driving control using emg and acceleration sensor and method thereof
Takanishi An anthropomorphic robot head having autonomous facial expression function for natural communication with humans
Foresi et al. Human-robot cooperation via brain computer interface in assistive scenario
KR100374346B1 (en) Head pose tracking method using image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant