CN113133787A - Human-machine cooperative interaction control method and system for a nasopharyngeal swab sampling robot - Google Patents

Info

Publication number: CN113133787A (application CN202110287765.8A; granted as CN113133787B)
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: Wang Junchen (王君臣), Wang Jianan (王嘉楠), Sun Zhen (孙振), Xu Ying (徐颖)
Original and current assignee: Beihang University
Application filed by Beihang University; priority date 2021-03-17
Legal status: Granted; Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 10/00: Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements

Abstract

A human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot comprises four steps: (1) motion information acquisition, (2) motion state identification, (3) robot teleoperation control, and (4) visual feedback and force feedback. The invention has the following technical effects: 1) the operator is isolated from the person being sampled, avoiding infection through contact during sampling; 2) operation feedback is intuitive and operation is simple, reducing the operator's burden; 3) the comfort of the person being sampled during sampling is improved.

Description

Human-machine cooperative interaction control method and system for a nasopharyngeal swab sampling robot
Technical Field
The invention relates to the field of human-robot cooperative interaction control, and in particular to a human-machine cooperative interaction control method and system for a nasopharyngeal swab sampling robot.
Background
The outbreak of the COVID-19 pandemic has slowed global economic growth and severely disrupted travel, work, study, and daily life. Because the novel coronavirus is highly infectious, nucleic acid testing has become a prerequisite for travel, work, and study. Common nucleic acid sampling methods include deep-cough sputum, nasal swab sampling, and pharyngeal swab sampling, of which pharyngeal swab sampling is the most widely used. During sampling, contact between the operator and the person being sampled carries a certain risk of infection; against this background, pharyngeal swab sampling robots have emerged to avoid infection during sampling.
Existing nasopharyngeal swab sampling robots are varied, but research on human-machine cooperative interaction control methods for such robots remains scarce. The nasopharyngeal swab sampling robot of Chinese patent application No. 202021804991.6 comprises a mobile work platform and, mounted on it, an automatic swab-stripping device, an automatic isolation-sleeve fitting device, a test-tube transfer and cap-switching device, a sampling arm, a sampling window module, and an isolation-sleeve removal device. The nucleic acid detection sampling robot of Chinese patent application No. 202021426802.6 comprises a traveling mechanism, a lifting mechanism, a detection mechanism, a pharyngeal swab transport mechanism, a sterilization mechanism, a storage mechanism, and a master control terminal. Most such work focuses on the robot workstation, low-level control, and safety systems, and pursues full automation, in which system reliability and safety are difficult to guarantee and the comfort of the person being sampled is likewise difficult to improve.
To increase the safety and sampling comfort of the person being sampled and to improve the operator's experience, a better human-machine cooperative interaction control method is needed. How to achieve remote operator control of the robot while guaranteeing safety and sampling comfort has therefore become a key problem.
Disclosure of Invention
The invention provides a human-machine cooperative interaction control method and system for pharyngeal swab sampling. The method allows an operator to remotely control the robot in real time with the arm to complete all actions required for sampling, such as pharyngeal swab sampling and swab pick-and-place. The method further comprises a visual feedback system and a force feedback system. The visual feedback system lets the operator observe the real-time state of the person being sampled while avoiding contact between them, providing image guidance during sampling and ensuring the safety and effectiveness of the sampling process. The force feedback system reflects contact force information in real time, helping the operator modulate the applied force and ensuring sampling comfort.
To solve the above problems, the invention provides a human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot, comprising four steps: (1) motion information acquisition, (2) motion state identification, (3) robot teleoperation control, and (4) visual feedback and force feedback.
According to an embodiment of the present invention, the motion information acquisition of step (1) comprises:
the motion information acquisition is realized by wearing inertial sensors and a data glove with a tracker: four inertial sensors are fixed by straps to the operator's upper arm, forearm, wrist, and waist, with the three sensors on the arm and wrist kept on one straight line, to acquire the velocity, acceleration, and attitude (Euler angles: roll, pitch, and yaw) of the upper arm, forearm, and wrist;
the Euler angles are the rotation angles of an object about the three axes of a coordinate system: yaw is the rotation about the y axis, pitch the rotation about the x axis, and roll the rotation about the z axis; differencing the Euler angles of two sensors gives the rotation of one sensor relative to the other; subtracting the Euler angles of the waist sensor from those of the inertial sensors at the upper arm, forearm, and wrist yields the attitude of the upper arm, forearm, and wrist relative to the body, computed as in the formulas below; with this computation, the wearer's position and locomotion do not affect arm control of the robot, i.e., the robot's motion depends only on the operator's arm actions:
Roll_BodyToUpperArm = Roll_UpperArm - Roll_Body
Pitch_BodyToUpperArm = Pitch_UpperArm - Pitch_Body
Yaw_BodyToUpperArm = Yaw_UpperArm - Yaw_Body
Roll_BodyToForeArm = Roll_ForeArm - Roll_Body
Pitch_BodyToForeArm = Pitch_ForeArm - Pitch_Body
Yaw_BodyToForeArm = Yaw_ForeArm - Yaw_Body
Roll_BodyToPalm = Roll_Palm - Roll_Body
Pitch_BodyToPalm = Pitch_Palm - Pitch_Body
Yaw_BodyToPalm = Yaw_Palm - Yaw_Body
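For illustration only, the following minimal Python sketch (ours, not part of the patented method; the angle-wrapping step is an added assumption) computes the body-relative attitude defined by the formulas above:

```python
# Minimal sketch of the body-relative attitude computation above (assumes
# Euler angles in degrees). The wrap() step, which keeps differences in
# (-180, 180], is our addition and is not stated in the patent.

def relative_euler(segment, body):
    """Subtract the waist (body) sensor's Euler angles from a segment's."""
    def wrap(angle):
        return (angle + 180.0) % 360.0 - 180.0
    return {axis: wrap(segment[axis] - body[axis])
            for axis in ('roll', 'pitch', 'yaw')}

# Example: attitude of the upper arm relative to the body
upper_arm = {'roll': 12.0, 'pitch': -35.0, 'yaw': 100.0}
body = {'roll': 2.0, 'pitch': -1.0, 'yaw': 95.0}
print(relative_euler(upper_arm, body))  # {'roll': 10.0, 'pitch': -34.0, 'yaw': 5.0}
```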
preferably, the inertial sensors are Xsens DOT sensors (Netherlands), measuring 36 × 30 × 11 mm and comprising a three-axis accelerometer and a magnetic measurement unit; each sensor is strapped to a limb or the waist without affecting comfort and communicates with the computer in real time over BLE at 60 Hz; its data include three-axis velocity, angular velocity, acceleration, free acceleration, and magnetic field, and an on-board sensor fusion algorithm computes the attitude (Euler angles) of the sensor frame relative to the Earth frame;
preferably, the data glove is worn on the hand and collects finger-bend information wirelessly at 60 Hz; the positioning system fixed at the wrist comprises a wrist tracker and an infrared positioning base station and acquires the position and attitude data of the hand.
According to an embodiment of the present invention, the motion state identification of step (2) comprises:
the motion state identification recognizes the operator's motion state in real time from the acceleration and attitude data of the sensors; from an analysis of the sampling process, 4 motion states are defined, namely a rest state, a swab-holding state, a sampling state, and an other-actions state, together with 2 transition states, from holding the swab to starting sampling and from ending sampling back to holding the swab; during sampling, the operator picks up the pharyngeal swab from the rest state via the other-actions state, keeps holding the swab while locating the mouth of the person being sampled, enters the sampling state through a transition state, returns from the sampling state to the holding state through a transition state, places the swab in the kit, and finally returns to the rest state via the other-actions state; one sampling cycle can be summarized as: rest -> other -> hold -> transition -> sample -> transition -> hold -> other -> rest; the swab-holding state must guarantee a stable grip with no loosening of the swab, the sampling state requires finer hand actions to guarantee the comfort of the person being sampled and the effectiveness of sampling, and the other-actions states must give the operator intuitive operation feedback and a good operating experience;
the motion state recognition operates as follows: the input data are the three-axis acceleration and Euler-angle data of the upper arm, forearm, and wrist and the bend-angle data of the middle finger, 19 channels in total; a window of 500 ms is used as input, so at a sensor sampling rate of 60 Hz the input format is 1 × 30 × 19; the recognition algorithm is a CNN, an LSTM, or a Transformer, which offer better generalization and higher accuracy than traditional machine-learning classifiers;
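By way of illustration, a minimal sketch of assembling that sliding window is given below (our code, not the patent's; the channel ordering within a frame is an assumption):

```python
import numpy as np
from collections import deque

WINDOW = 30    # 500 ms at 60 Hz
CHANNELS = 19  # (3-axis accel + 3 Euler angles) x 3 segments + middle-finger bend

buffer = deque(maxlen=WINDOW)  # keeps only the newest 30 frames

def push_sample(accels, eulers, finger_bend):
    """accels/eulers: 3 triples each, for upper arm, forearm, wrist (in that order)."""
    frame = []
    for acc, eul in zip(accels, eulers):
        frame.extend(acc)  # ax, ay, az
        frame.extend(eul)  # roll, pitch, yaw
    frame.append(finger_bend)
    assert len(frame) == CHANNELS
    buffer.append(frame)

def current_window():
    """Return the 1 x 30 x 19 network input once the buffer is full, else None."""
    if len(buffer) < WINDOW:
        return None
    return np.asarray(buffer, dtype=np.float32)[None, :, :]  # shape (1, 30, 19)
```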
the training data are provided by more than 10 professional sampling operators; the motion data of the whole process, from rest through swab holding and sampling to swab return, are recorded and the states labeled; the training data are collected as follows:
(a) the operator holds a rest posture for 2 s (rest state);
(b) within 2 s, the arm reaches the swab's location and prepares to pick it up (other-actions state);
(c) the operator grips the swab and reaches the mouth of the head model within 2 s (swab-holding state);
(d) the swab is inserted into the mouth within 1 s, ready for sampling (transition state from holding to starting sampling);
(e) the sampling action is held for 5 s (sampling state);
(f) sampling stops and the swab is withdrawn within 1 s (transition state from ending sampling to holding);
(g) the swab is carried to the kit within 2 s, ready to be set down (swab-holding state);
(h) the swab is placed in the kit and the arm returns to the rest position within 2 s (other-actions state);
(j) the operator rests for 2 s (rest state);
each operator is prompted strictly on this schedule and, after practice, repeats the sequence 5 times while the data are recorded; after sorting, the valid data are extracted as training and test sets;
when processing short time sequences, the convolutional neural network (CNN) achieves high speed while maintaining accuracy; the network consists of an input layer, convolutional layer 1, convolutional layer 2, a max-pooling layer, a flatten layer, a fully-connected layer, and a SoftMax layer; the input layer takes the three-axis acceleration data and three-axis Euler-angle data of the inertial sensors at the 3 positions of the upper arm, forearm, and wrist plus the bend angle of the middle finger, 19 channels in total, over a window of 500 ms, so the input format is 1 × 30 × 19; the convolutional layers perform feature extraction, and the proposed network uses several convolutional layers with different kernel sizes that learn relevant features from the input data; the max-pooling layer reduces the model's input size and helps prevent overfitting; the fully-connected layer consists of neurons that compute the weighted sum of their inputs and output activations; finally the SoftMax layer outputs the probability of each prediction class, the highest of which gives the final prediction;
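A hedged Keras sketch of such a network follows; the layer order matches the text, while the filter counts, kernel sizes, and dense width are our assumptions, since the patent does not specify them:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # 4 motion states + 2 transition states

model = models.Sequential([
    layers.Input(shape=(30, 19)),             # 500 ms window x 19 channels
    layers.Conv1D(32, 5, activation='relu'),  # convolutional layer 1
    layers.Conv1D(64, 3, activation='relu'),  # convolutional layer 2
    layers.MaxPooling1D(pool_size=2),         # max-pooling layer
    layers.Flatten(),                         # flatten layer
    layers.Dense(64, activation='relu'),      # fully-connected layer
    layers.Dense(NUM_CLASSES, activation='softmax'),  # SoftMax layer
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```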
the algorithm model training and real-time prediction process is implemented as follows: a model in h5 format is obtained after training with TensorFlow and the Keras framework in Python; the h5 model is converted into a readable txt format, and the forward-propagation pass of the algorithm is implemented in a C++ program, predicting the operator's motion state in real time.
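One way the h5-to-txt export could look is sketched below (our illustration; the file names and the "name / shape / values" record layout are assumptions, as the patent does not describe the actual format):

```python
from tensorflow.keras.models import load_model

model = load_model('motion_state_cnn.h5')  # hypothetical file name
with open('motion_state_cnn.txt', 'w') as f:
    for w in model.weights:
        values = w.numpy().ravel()
        f.write(w.name + '\n')                                # tensor name
        f.write(' '.join(str(d) for d in w.shape) + '\n')     # tensor shape
        f.write(' '.join('%.8g' % v for v in values) + '\n')  # flattened values
```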
According to an embodiment of the invention, the control strategy differs among the 4 motion states and the 2 transition states; in the swab-holding state, the jaws still keep clamping when the hand's bend angle is less than 3 degrees, so they are unaffected by tremor of the operator's hand; stable clamping ensures the swab neither falls nor loosens, and the jaws supply sufficient torque to hold it firmly;
in the sampling state, the swab must be held stably while guaranteeing the comfort and safety of the person being sampled; the robot is driven by the position and attitude data of the data glove and positioning system, a control mode that finely reproduces the operator's hand position and motion and thus ensures sampling comfort; meanwhile an LSTM model, consisting of an LSTM layer, a Flatten layer, and a fully-connected layer, learns the operator's sampling motion pattern and predicts the motion trajectory; the trained model predicts the trajectory in real time, and when the prediction deviates from the actual motion a safety problem may be occurring and the robot is stopped immediately;
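The following sketch shows one way such a safety monitor could be structured (our assumptions: a 30-sample history, a 6-dimensional pose, and a 10 mm position threshold; none of these values are given in the patent):

```python
import numpy as np
from tensorflow.keras import layers, models

HIST, POSE = 30, 6  # 0.5 s of history; pose = x, y, z, roll, pitch, yaw

# LSTM layer -> Flatten layer -> fully-connected layer, as described above
predictor = models.Sequential([
    layers.Input(shape=(HIST, POSE)),
    layers.LSTM(64, return_sequences=True),
    layers.Flatten(),
    layers.Dense(POSE),  # predicted next pose
])

def motion_is_safe(history, measured, threshold_mm=10.0):
    """Compare the predicted next pose with the measured one; a large
    deviation suggests a safety problem and should stop the robot."""
    predicted = predictor(history[None, :, :], training=False).numpy()[0]
    deviation_mm = np.linalg.norm(predicted[:3] - measured[:3]) * 1000.0  # m -> mm
    return deviation_mm <= threshold_mm  # False => command an immediate stop
```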
in the other motion states, the robot is driven by the attitude (Euler-angle) data of the inertial sensors; in this mode the robot and the operator's arm move in almost the same attitude, giving the operator more intuitive operation feedback; operation is simple and the operator's burden is effectively reduced.
According to an embodiment of the present invention, the robot teleoperation control of step (3) comprises:
the robot has two teleoperation control modes, and the mode in use is selected according to the predicted motion state; the 2 transition states are the moments at which the control mode changes: when the transition from holding the swab to starting sampling is predicted, the robot is switched to being driven by the data glove and positioning system, and when the transition from ending sampling back to holding the swab is predicted, the robot is switched back to being driven by the inertial sensor data;
mode 1 drives the robot with the Euler-angle data of the inertial sensors and is used for the other motion states; when the sensors are donned, the three inertial sensors at the upper arm, forearm, and wrist are kept on one straight line as far as possible; after donning, the operator relaxes the arm and a set of initial attitude data is recorded to correct donning errors; the change of each sensor relative to its initial attitude is computed by the formulas below and used as the input from the arm to the robot's joint space; the mapping from arm to joint space yields the robot's 7 joint angles (Table 1; an illustrative code sketch follows the table), and joint-angle commands are sent in real time from the host-computer program to drive the robot in real time; this control method keeps the robot's attitude motion highly similar to that of the operator's arm, bringing a more intuitive operating experience and reducing the operator's burden;
ΔRoll_UpperArm = Roll_BodyToUpperArm - Roll_0,BodyToUpperArm
ΔPitch_UpperArm = Pitch_BodyToUpperArm - Pitch_0,BodyToUpperArm
ΔYaw_UpperArm = Yaw_BodyToUpperArm - Yaw_0,BodyToUpperArm
ΔRoll_ForeArm = Roll_BodyToForeArm - Roll_0,BodyToForeArm
ΔPitch_ForeArm = Pitch_BodyToForeArm - Pitch_0,BodyToForeArm
ΔYaw_ForeArm = Yaw_BodyToForeArm - Yaw_0,BodyToForeArm
ΔRoll_Wrist = Roll_BodyToPalm - Roll_0,BodyToPalm
ΔPitch_Wrist = Pitch_BodyToPalm - Pitch_0,BodyToPalm
ΔYaw_Wrist = Yaw_BodyToPalm - Yaw_0,BodyToPalm
TABLE 1 Mapping from arm attitude to robot joint space
[Table 1 is provided as an image in the original publication; it assigns the nine attitude increments above to the robot's seven joint angles.]
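Because Table 1 survives only as an image, the sketch below uses a plausible stand-in assignment (hypothetical, not the patented table) to illustrate the shape of such an arm-to-joint mapping:

```python
import math

def arm_to_joints(upper, fore, wrist):
    """Map the nine attitude increments (degrees) to 7 joint angles (radians).
    The one-to-one assignment below is a hypothetical stand-in for Table 1."""
    rad = math.radians
    return [
        rad(upper['yaw']),    # joint 1 (hypothetical assignment)
        rad(upper['pitch']),  # joint 2
        rad(upper['roll']),   # joint 3
        rad(fore['pitch']),   # joint 4
        rad(wrist['roll']),   # joint 5
        rad(wrist['pitch']),  # joint 6
        rad(wrist['yaw']),    # joint 7
    ]

# Example: a relaxed arm (all increments zero) commands the initial joint pose
print(arm_to_joints({'roll': 0, 'pitch': 0, 'yaw': 0},
                    {'roll': 0, 'pitch': 0, 'yaw': 0},
                    {'roll': 0, 'pitch': 0, 'yaw': 0}))
```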
mode 2 drives the robot with the position and attitude data (three-dimensional Cartesian coordinates and three-dimensional attitude angles) of the data glove and positioning system and is used for the sampling state; these data are obtained with existing, mature computation methods, and the robot control commands are likewise sent in real time by the host-computer program; this control method closely reproduces the detailed motions and positions of the hand, so a practiced operator can ensure the comfort of the person being sampled during sampling;
the closing of the robot's end-effector jaws is controlled by the finger-bend data of the data glove; in the swab-holding and sampling states the jaws are held tightly closed, unaffected by tremor of the operator's hand, ensuring the swab is clamped stably and does not loosen.
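A minimal sketch of this gripper rule follows (state labels and the interpretation of the glove signal are our assumptions; the 3-degree deadband comes from the embodiment above):

```python
CLOSED_STATES = {'hold_swab', 'sampling'}  # hypothetical state labels
TREMOR_DEADBAND_DEG = 3.0                  # per the 3-degree rule above

def jaws_closed(state, finger_opening_deg):
    """Return True to keep the end-effector jaws closed.

    finger_opening_deg: how far the glove reports the fingers have
    straightened from the fully bent (gripping) posture.
    """
    if state in CLOSED_STATES:
        # Small openings are treated as hand tremor and ignored.
        return finger_opening_deg < TREMOR_DEADBAND_DEG
    return False  # outside holding/sampling, the jaws follow no clamp command
```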
According to an embodiment of the present invention, the visual feedback and force feedback of step (4) comprise:
the visual feedback and force feedback provide the operator with real-time dynamic information from the sampled person's side, ensuring the safety and effectiveness of the sampling process and the sampling comfort of the person being sampled;
the visual feedback uses two cameras and a display: one camera is mounted at the end of the manipulator to observe the condition of the sampling point, and the other is placed to the side of the robot and the person being sampled to feed back their relative position in real time; the pictures of the two cameras are fed simultaneously to the display at the operator's end;
the force feedback uses the force feedback system of the KUKA iiwa 7-degree-of-freedom medical robot: the torque at the contact between the swab and the sampling point is acquired in real time and displayed as a bar chart in the host-computer interface; when the force is too large, the bar shows a high value in red; this display is likewise transmitted to the operator through the display, and the operator adjusts the operation according to the force feedback information, safeguarding the comfort of the person being sampled.
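A sketch of such a display is given below (ours, not the patent's interface; the 2.0 N·m comfort limit is a placeholder, since the patent states no numeric threshold):

```python
import matplotlib.pyplot as plt

COMFORT_LIMIT = 2.0  # N*m, hypothetical placeholder threshold

def draw_torque_bars(torques_nm):
    """Draw one bar per torque component; a bar turns red when too large.
    In a live system this would be called in a loop, clearing between frames."""
    labels = ['Tx', 'Ty', 'Tz']
    colors = ['red' if abs(t) > COMFORT_LIMIT else 'steelblue' for t in torques_nm]
    plt.bar(labels, [abs(t) for t in torques_nm], color=colors)
    plt.axhline(COMFORT_LIMIT, linestyle='--', color='gray')
    plt.ylabel('contact torque (N*m)')
    plt.show()

draw_torque_bars([0.4, 2.6, 0.9])  # the middle bar renders red
```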
The invention has the following beneficial technical effects:
1) The operator is isolated from the person being sampled, avoiding infection through contact during sampling.
2) Operation feedback is intuitive and operation is simple, reducing the operator's burden.
3) The comfort of the person being sampled during sampling is improved.
Drawings
Fig. 1 is a flowchart of the four main steps of the human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot according to an embodiment of the invention.
Fig. 2 is a schematic diagram of the CNN network structure used for motion state identification in an embodiment of the invention.
Fig. 3 is a simplified scene diagram of the human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot according to an embodiment of the invention.
Detailed Description
To make the objects, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings. Those skilled in the art will appreciate that the invention is not limited to the drawings and the following examples.
In the description of the invention, it should be noted that orientation or positional terms such as "length", "width", "upper", "lower", "far", and "near" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should not be construed as limiting the scope of the invention. Furthermore, the terms "first" and "second" are used only to distinguish technical features for descriptive purposes and are not to be construed as indicating or implying relative importance or the number of technical features.
The invention provides a human-machine cooperative interaction control system for a nasopharyngeal swab sampling robot, comprising: a 7-degree-of-freedom manipulator, an end-effector jaw, inertial sensors, a data glove, a display, and 2 cameras. The jaw is connected to the manipulator by a flange; one camera is fixed above the jaw to observe the mouth of the person being sampled, and the other camera is fixed in the ward to observe the relative position of the robot and the person being sampled.
The human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot provided by the invention comprises four main steps: (1) motion information acquisition, (2) motion state identification, (3) robot teleoperation control, and (4) visual feedback and force feedback.
The detailed flow of the four main steps is shown in Fig. 1.
(1) Motion information acquisition is achieved by wearing inertial sensors and a data glove with a tracker. In a specific embodiment, the inertial sensors are Xsens DOT sensors (Netherlands), measuring 36 × 30 × 11 mm and comprising a three-axis accelerometer and a magnetic measurement unit; each sensor can be strapped to a limb or the waist without affecting comfort and communicates with the computer in real time over BLE at 60 Hz; the data include three-axis velocity, angular velocity, acceleration, free acceleration, and magnetic field, and an on-board sensor fusion algorithm computes the attitude (Euler angles) of the sensor frame relative to the Earth frame. The data glove is worn on the hand and collects finger-bend information wirelessly at 60 Hz; a positioning system, comprising a wrist tracker and an infrared positioning base station, can be fixed at the wrist to obtain the position and attitude data of the hand.
Four inertial sensors are fixed by straps to the operator's upper arm, forearm, wrist, and waist, keeping the three sensors on the arm and wrist on one straight line as far as possible, while the data glove with the positioning system is worn at the same time; this yields the velocity, acceleration, and attitude (Euler angles: roll, pitch, yaw) of the operator's upper arm, forearm, and wrist, together with the finger bend angles and the spatial position of the hand (Cartesian coordinates and attitude angles). These data are used for motion state recognition and for robot teleoperation control.
The Euler angles are the rotation angles of an object about the three axes of a coordinate system: yaw is the rotation about the y axis, pitch the rotation about the x axis, and roll the rotation about the z axis. Differencing the Euler angles of two sensors gives the rotation of one sensor relative to the other. Subtracting the Euler angles of the waist sensor from those of the inertial sensors at the upper arm, forearm, and wrist yields the attitude of the upper arm, forearm, and wrist relative to the body, computed as in the formulas below. With this computation, the wearer's position and locomotion do not affect arm control of the robot, i.e., the robot's motion depends only on the operator's arm actions.
Roll_BodyToUpperArm = Roll_UpperArm - Roll_Body
Pitch_BodyToUpperArm = Pitch_UpperArm - Pitch_Body
Yaw_BodyToUpperArm = Yaw_UpperArm - Yaw_Body
Roll_BodyToForeArm = Roll_ForeArm - Roll_Body
Pitch_BodyToForeArm = Pitch_ForeArm - Pitch_Body
Yaw_BodyToForeArm = Yaw_ForeArm - Yaw_Body
Roll_BodyToPalm = Roll_Palm - Roll_Body
Pitch_BodyToPalm = Pitch_Palm - Pitch_Body
Yaw_BodyToPalm = Yaw_Palm - Yaw_Body
(2) Motion state identification recognizes the operator's motion state in real time from the acceleration and attitude data of the sensors. From an analysis of the sampling process, 4 motion states are defined, namely a rest state, a swab-holding state, a sampling state, and an other-actions state, together with 2 transition states, from holding the swab to starting sampling and from ending sampling back to holding the swab. During sampling, the operator picks up the pharyngeal swab from the rest state, keeps holding the swab while locating the mouth of the person being sampled, enters the sampling state through a transition state, returns from the sampling state to the holding state through a transition state, places the swab in the kit, and finally returns to the rest state via the other-actions state. One sampling cycle can be summarized as: rest -> other -> hold -> transition -> sample -> transition -> hold -> other -> rest. Different motion states correspond to different control strategies: the swab-holding state must guarantee a stable grip with no loosening of the swab, the sampling state requires finer actions to guarantee the comfort of the person being sampled and the effectiveness of sampling, and the other-actions states must give the operator intuitive operation feedback and a good operating experience.
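As an illustration, the cycle above can be encoded as an allowed-transition map so that an out-of-order classifier output is rejected rather than forwarded to the robot (our sketch; the state names are shorthand for the patent's states):

```python
# Legal successors of each state in the cycle
# rest -> other -> hold -> transition -> sample -> transition -> hold -> other -> rest
ALLOWED = {
    'rest':       {'rest', 'other'},
    'other':      {'other', 'hold', 'rest'},
    'hold':       {'hold', 'transition', 'other'},
    'transition': {'transition', 'sample', 'hold'},
    'sample':     {'sample', 'transition'},
}

def next_state(current, predicted):
    """Accept the classifier's prediction only if the transition is legal."""
    return predicted if predicted in ALLOWED[current] else current

assert next_state('rest', 'sample') == 'rest'   # illegal jump is rejected
assert next_state('hold', 'transition') == 'transition'
```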
1) The motion state recognition operates as follows: the input data are the three-axis acceleration and Euler-angle data of the upper arm, forearm, and wrist and the bend-angle data of the middle finger, 19 channels in total, and a window of 500 ms is used as input (at a sensor sampling rate of 60 Hz the input format is 1 × 30 × 19). The recognition algorithm is a CNN, an LSTM, or a Transformer (the LSTM handles time series well; the CNN is fast on shorter time series; the recently proposed Transformer processes time series with an attention mechanism and can be used for classification tasks); compared with traditional machine-learning classifiers, these algorithms offer better generalization and higher accuracy.
The training data are provided by more than 10 professional sampling operators; the motion data of the whole process, from rest through swab holding and sampling to swab return, are recorded and the states labeled. The training data are collected as follows:
the operator remains in rest 2S (rest state)
In 2S, the arm reaches the position of placing the swab and performs an action to prepare for picking (other action and motion state)
The operator holds the swab and reaches the human mouth model position within 2S (holding the swab state)
Put the swab into the mouth in 1S and prepare for sampling (transition state from holding the swab to start sampling)
Sample operation and hold 5S (sampling state)
Stop sampling and take out the swab in 1S (transition state from end of sampling to holding swab)
Carry the swab to the kit in 2S, ready to set down (clamped state)
Place swab in kit, place arm in rest position in 2S (other action exercise state)
Operator rest 2S (rest state)
Each operator is prompted strictly on this schedule and, after practice, repeats the sequence 5 times while the data are recorded; after sorting, the valid data are extracted as training and test sets.
When processing short time sequences, the convolutional neural network (CNN) achieves high speed while maintaining accuracy. In this embodiment, the network consists of an input layer, convolutional layer 1, convolutional layer 2, a max-pooling layer, a flatten layer, a fully-connected layer, and a SoftMax layer; the input layer takes the three-axis acceleration data and three-axis Euler-angle data of the inertial sensors at the 3 positions of the upper arm, forearm, and wrist plus the bend angle of the middle finger, 19 channels in total, over a window of 500 ms, so the input format is 1 × 30 × 19. The convolutional layers perform feature extraction; the proposed network uses several convolutional layers with different kernel sizes that learn relevant features from the input data. The pooling layer reduces the model's input size and helps prevent overfitting; a max-pooling layer is used here. The fully-connected layer consists of neurons that compute the weighted sum of their inputs and output activations, and finally the SoftMax layer outputs the probability of each prediction class, the highest of which gives the final prediction. Fig. 2 shows the CNN network structure for motion state recognition.
The algorithm model training and real-time prediction process is implemented as follows: a model in h5 format is obtained after training with TensorFlow and the Keras framework in Python; the h5 model is converted into a readable txt format, and the forward-propagation pass of the algorithm is implemented in a C++ program, predicting the operator's motion state in real time.
2) The control strategy differs among the motion states. In the swab-holding state, the jaws are unaffected by tremor of the operator's hand; stable clamping ensures the swab neither falls nor loosens.
In the sampling state, the swab must be held stably while guaranteeing the comfort and safety of the person being sampled. The robot is driven by the position and attitude data of the data glove and positioning system, a control mode that finely reproduces the operator's hand position and motion and thus ensures sampling comfort. Meanwhile an LSTM model, consisting of an LSTM layer, a Flatten layer, and a fully-connected layer, learns the operator's sampling motion pattern and predicts the motion trajectory; the trained model predicts the trajectory in real time, and when the prediction deviates from the actual motion a safety problem may be occurring and the robot stops moving immediately.
In the other motion states, the control system needs more intuitive operation feedback to reduce the operator's burden. The robot is therefore driven by the attitude (Euler-angle) data of the inertial sensors; in this mode the robot and the operator's arm move in almost the same attitude, bringing more intuitive operation feedback, simple operation, and an effectively reduced burden for the operator.
(3) The robot has two teleoperation control modes, and the mode in use is selected according to the predicted motion state. As mentioned above, there are 2 transition states, from holding the swab to starting sampling and from ending sampling back to holding the swab, and these 2 transition states are the moments at which the control mode changes: when the transition from holding the swab to starting sampling is predicted, the robot is switched to being driven by the data glove and positioning system, and when the transition from ending sampling back to holding the swab is predicted, the robot is switched back to being driven by the inertial sensor data.
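The switching rule can be sketched as follows (our illustration; the state and mode names are shorthand for the patent's states and its two control modes):

```python
class TeleopModeSelector:
    """Switch control source at the two transition states, per the rule above."""

    def __init__(self):
        self.mode = 'imu'         # mode 1: inertial-sensor Euler angles
        self.prev_state = 'rest'

    def update(self, state):
        if state == 'transition':
            if self.prev_state == 'hold':
                self.mode = 'glove'  # entering sampling: data glove + locator
            elif self.prev_state == 'sample':
                self.mode = 'imu'    # leaving sampling: back to inertial sensors
        self.prev_state = state
        return self.mode
```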
Mode 1 drives the robot with the Euler-angle data of the inertial sensors. When the sensors are donned, the three inertial sensors at the upper arm, forearm, and wrist are kept on one straight line as far as possible; after donning, the operator relaxes the arm and a set of initial attitude data is recorded to correct donning errors. The change of each sensor relative to its initial attitude is computed by the formulas below and used as the input from the arm to the robot's joint space; the mapping from arm to joint space yields the robot's 7 joint angles (Table 2), and joint-angle commands are sent in real time from the host-computer program to drive the robot in real time. This control method keeps the robot's attitude motion highly similar to that of the operator's arm, bringing a more intuitive operating experience and reducing the operator's burden.
ΔRoll_UpperArm = Roll_BodyToUpperArm - Roll_0,BodyToUpperArm
ΔPitch_UpperArm = Pitch_BodyToUpperArm - Pitch_0,BodyToUpperArm
ΔYaw_UpperArm = Yaw_BodyToUpperArm - Yaw_0,BodyToUpperArm
ΔRoll_ForeArm = Roll_BodyToForeArm - Roll_0,BodyToForeArm
ΔPitch_ForeArm = Pitch_BodyToForeArm - Pitch_0,BodyToForeArm
ΔYaw_ForeArm = Yaw_BodyToForeArm - Yaw_0,BodyToForeArm
ΔRoll_Wrist = Roll_BodyToPalm - Roll_0,BodyToPalm
ΔPitch_Wrist = Pitch_BodyToPalm - Pitch_0,BodyToPalm
ΔYaw_Wrist = Yaw_BodyToPalm - Yaw_0,BodyToPalm
TABLE 2 Mapping from arm attitude to robot joint space
[Table 2 is provided as an image in the original publication; it assigns the nine attitude increments above to the robot's seven joint angles.]
Mode 2 drives the robot with the position and attitude data (three-dimensional Cartesian coordinates and three-dimensional attitude angles) of the data glove and positioning system; mature computation methods exist for these data, and the robot control commands are likewise sent in real time by the host-computer program. This control method closely reproduces the detailed motions and positions of the hand, and a practiced operator can thereby ensure the comfort of the person being sampled during sampling.
The closing of the robot's end-effector jaws is controlled by the finger-bend data of the data glove; in the swab-holding and sampling states the jaws are held tightly closed, unaffected by tremor of the operator's hand, ensuring the swab is clamped stably and does not loosen.
(4) The visual feedback and force feedback provide the operator with real-time dynamic information from the sampled person's side, ensuring the safety and effectiveness of the sampling process and the sampling comfort of the person being sampled.
The visual feedback uses two cameras and a display: one camera is mounted at the end of the manipulator to observe the condition of the sampling point, and the other is placed to the side of the robot and the person being sampled to feed back their relative position in real time; the pictures of the two cameras are fed simultaneously to the display at the operator's end.
The force feedback uses the force feedback system of the KUKA iiwa 7-degree-of-freedom medical robot: the torque at the contact between the swab and the sampling point is acquired in real time and displayed as a bar chart in the host-computer interface; when the force is too large, the bar shows a high value in red; this display is likewise transmitted to the operator through the display, and the operator can adjust the operation according to the force feedback information, safeguarding the comfort of the person being sampled.
A simplified scene of the human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot according to an embodiment of the invention is shown in Fig. 3, in which 100 denotes the operator, 101 the person being sampled, 102 the inertial sensors, 103 the safety guarantee system, 104 the human-machine cooperative control system, 105 the robot and its actuator, 106 the camera, and 107 the visual feedback system of the human-machine interaction control system.

Claims (6)

1. A human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot, characterized by comprising four steps: (1) motion information acquisition, (2) motion state identification, (3) robot teleoperation control, and (4) visual feedback and force feedback.
2. The human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot according to claim 1, wherein the motion information acquisition of step (1) comprises:
the motion information acquisition is realized by wearing inertial sensors and a data glove with a tracker: four inertial sensors are fixed by straps to the operator's upper arm, forearm, wrist, and waist, with the three sensors on the arm and wrist kept on one straight line, to acquire the velocity, acceleration, and attitude (Euler angles: roll, pitch, and yaw) of the upper arm, forearm, and wrist;
the Euler angles are the rotation angles of an object about the three axes of a coordinate system: yaw is the rotation about the y axis, pitch the rotation about the x axis, and roll the rotation about the z axis; differencing the Euler angles of two sensors gives the rotation of one sensor relative to the other; subtracting the Euler angles of the waist sensor from those of the inertial sensors at the upper arm, forearm, and wrist yields the attitude of the upper arm, forearm, and wrist relative to the body, computed as in the formulas below; with this computation, the wearer's position and locomotion do not affect arm control of the robot, i.e., the robot's motion depends only on the operator's arm actions:
Roll_BodyToUpperArm = Roll_UpperArm - Roll_Body
Pitch_BodyToUpperArm = Pitch_UpperArm - Pitch_Body
Yaw_BodyToUpperArm = Yaw_UpperArm - Yaw_Body
Roll_BodyToForeArm = Roll_ForeArm - Roll_Body
Pitch_BodyToForeArm = Pitch_ForeArm - Pitch_Body
Yaw_BodyToForeArm = Yaw_ForeArm - Yaw_Body
Roll_BodyToPalm = Roll_Palm - Roll_Body
Pitch_BodyToPalm = Pitch_Palm - Pitch_Body
Yaw_BodyToPalm = Yaw_Palm - Yaw_Body
preferably, the inertial sensors are Xsens DOT sensors (Netherlands), measuring 36 × 30 × 11 mm and comprising a three-axis accelerometer and a magnetic measurement unit; each sensor is strapped to a limb or the waist without affecting comfort and communicates with the computer in real time over BLE at 60 Hz; its data include three-axis velocity, angular velocity, acceleration, free acceleration, and magnetic field, and an on-board sensor fusion algorithm computes the attitude (Euler angles) of the sensor frame relative to the Earth frame;
preferably, the data glove is worn on the hand and collects finger-bend information wirelessly at 60 Hz; the positioning system fixed at the wrist comprises a wrist tracker and an infrared positioning base station and acquires the position and attitude data of the hand.
3. The human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot according to claim 1 or 2, wherein the motion state identification of step (2) comprises:
the motion state identification recognizes the operator's motion state in real time from the acceleration and attitude data of the sensors; from an analysis of the sampling process, 4 motion states are defined, namely a rest state, a swab-holding state, a sampling state, and an other-actions state, together with 2 transition states, from holding the swab to starting sampling and from ending sampling back to holding the swab; during sampling, the operator picks up the pharyngeal swab from the rest state via the other-actions state, keeps holding the swab while locating the mouth of the person being sampled, enters the sampling state through a transition state, returns from the sampling state to the holding state through a transition state, places the swab in the kit, and finally returns to the rest state via the other-actions state; one sampling cycle can be summarized as: rest -> other -> hold -> transition -> sample -> transition -> hold -> other -> rest; the swab-holding state must guarantee a stable grip with no loosening of the swab, the sampling state requires finer hand actions to guarantee the comfort of the person being sampled and the effectiveness of sampling, and the other-actions states must give the operator intuitive operation feedback and a good operating experience;
the motion state recognition operates as follows: the input data are the three-axis acceleration and Euler-angle data of the upper arm, forearm, and wrist and the bend-angle data of the middle finger, 19 channels in total; a window of 500 ms is used as input, so at a sensor sampling rate of 60 Hz the input format is 1 × 30 × 19; the recognition algorithm is a CNN, an LSTM, or a Transformer, which offer better generalization and higher accuracy than traditional machine-learning classifiers;
the training data are provided by more than 10 professional sampling operators; the motion data of the whole process, from rest through swab holding and sampling to swab return, are recorded and the states labeled; the training data are collected as follows:
(a) the operator holds a rest posture for 2 s (rest state);
(b) within 2 s, the arm reaches the swab's location and prepares to pick it up (other-actions state);
(c) the operator grips the swab and reaches the mouth of the head model within 2 s (swab-holding state);
(d) the swab is inserted into the mouth within 1 s, ready for sampling (transition state from holding to starting sampling);
(e) the sampling action is held for 5 s (sampling state);
(f) sampling stops and the swab is withdrawn within 1 s (transition state from ending sampling to holding);
(g) the swab is carried to the kit within 2 s, ready to be set down (swab-holding state);
(h) the swab is placed in the kit and the arm returns to the rest position within 2 s (other-actions state);
(j) the operator rests for 2 s (rest state);
each operator is prompted strictly on this schedule and, after practice, repeats the sequence 5 times while the data are recorded; after sorting, the valid data are extracted as training and test sets;
when processing short time sequences, the convolutional neural network (CNN) achieves high speed while maintaining accuracy; the network consists of an input layer, convolutional layer 1, convolutional layer 2, a max-pooling layer, a flatten layer, a fully-connected layer, and a SoftMax layer; the input layer takes the three-axis acceleration data and three-axis Euler-angle data of the inertial sensors at the 3 positions of the upper arm, forearm, and wrist plus the bend angle of the middle finger, 19 channels in total, over a window of 500 ms, so the input format is 1 × 30 × 19; the convolutional layers perform feature extraction, and the proposed network uses several convolutional layers with different kernel sizes that learn relevant features from the input data; the max-pooling layer reduces the model's input size and helps prevent overfitting; the fully-connected layer consists of neurons that compute the weighted sum of their inputs and output activations; finally the SoftMax layer outputs the probability of each prediction class, the highest of which gives the final prediction;
the algorithm model training and real-time prediction process is implemented as follows: a model in h5 format is obtained after training with TensorFlow and the Keras framework in Python; the h5 model is converted into a readable txt format, and the forward-propagation pass of the algorithm is implemented in a C++ program, predicting the operator's motion state in real time.
4. The human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot according to claim 3, wherein the control strategy differs among the 4 motion states and the 2 transition states; in the swab-holding state, the jaws still keep clamping when the hand's bend angle is less than 3 degrees, so they are unaffected by tremor of the operator's hand; stable clamping ensures the swab neither falls nor loosens, and the jaws supply sufficient torque to hold it firmly;
in the sampling state, the swab must be held stably while guaranteeing the comfort and safety of the person being sampled; the robot is driven by the position and attitude data of the data glove and positioning system, a control mode that finely reproduces the operator's hand position and motion and thus ensures sampling comfort; meanwhile an LSTM model, consisting of an LSTM layer, a Flatten layer, and a fully-connected layer, learns the operator's sampling motion pattern and predicts the motion trajectory; the trained model predicts the trajectory in real time, and when the prediction deviates from the actual motion a safety problem may be occurring and the robot is stopped immediately;
in the other motion states, the robot is driven by the attitude (Euler-angle) data of the inertial sensors; in this mode the robot and the operator's arm move in almost the same attitude, giving the operator more intuitive operation feedback; operation is simple and the operator's burden is effectively reduced.
5. The human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot according to any one of claims 1 to 4, wherein the robot teleoperation control of step (3) comprises:
the robot has two teleoperation control modes, and the mode in use is selected according to the predicted motion state; the 2 transition states are the moments at which the control mode changes: when the transition from holding the swab to starting sampling is predicted, the robot is switched to being driven by the data glove and positioning system, and when the transition from ending sampling back to holding the swab is predicted, the robot is switched back to being driven by the inertial sensor data;
mode 1 drives the robot with the Euler-angle data of the inertial sensors and is used for the other motion states; when the sensors are donned, the three inertial sensors at the upper arm, forearm, and wrist are kept on one straight line as far as possible; after donning, the operator relaxes the arm and a set of initial attitude data is recorded to correct donning errors; the change of each sensor relative to its initial attitude is computed by the formulas below and used as the input from the arm to the robot's joint space; the mapping from arm to joint space yields the robot's 7 joint angles (Table 3 below), and joint-angle commands are sent in real time from the host-computer program to drive the robot in real time; this control method keeps the robot's attitude motion highly similar to that of the operator's arm, bringing a more intuitive operating experience and reducing the operator's burden;
ΔRoll_UpperArm = Roll_BodyToUpperArm - Roll_0,BodyToUpperArm
ΔPitch_UpperArm = Pitch_BodyToUpperArm - Pitch_0,BodyToUpperArm
ΔYaw_UpperArm = Yaw_BodyToUpperArm - Yaw_0,BodyToUpperArm
ΔRoll_ForeArm = Roll_BodyToForeArm - Roll_0,BodyToForeArm
ΔPitch_ForeArm = Pitch_BodyToForeArm - Pitch_0,BodyToForeArm
ΔYaw_ForeArm = Yaw_BodyToForeArm - Yaw_0,BodyToForeArm
ΔRoll_Wrist = Roll_BodyToPalm - Roll_0,BodyToPalm
ΔPitch_Wrist = Pitch_BodyToPalm - Pitch_0,BodyToPalm
ΔYaw_Wrist = Yaw_BodyToPalm - Yaw_0,BodyToPalm
TABLE 3 Mapping from arm attitude to robot joint space
[Table 3 is provided as an image in the original publication; it assigns the nine attitude increments above to the robot's seven joint angles.]
mode 2 drives the robot with the position and attitude data (three-dimensional Cartesian coordinates and three-dimensional attitude angles) of the data glove and positioning system and is used for the sampling state; these data are obtained with existing, mature computation methods, and the robot control commands are likewise sent in real time by the host-computer program; this control method closely reproduces the detailed motions and positions of the hand, so a practiced operator can ensure the comfort of the person being sampled during sampling;
the closing of the robot's end-effector jaws is controlled by the finger-bend data of the data glove; in the swab-holding and sampling states the jaws are held tightly closed, unaffected by tremor of the operator's hand, ensuring the swab is clamped stably and does not loosen.
6. The human-machine cooperative interaction control method for a nasopharyngeal swab sampling robot according to any one of claims 1 to 5, wherein the visual feedback and force feedback of step (4) comprise:
the visual feedback and force feedback provide the operator with real-time dynamic information from the sampled person's side, ensuring the safety and effectiveness of the sampling process and the sampling comfort of the person being sampled;
the visual feedback uses two cameras and a display: one camera is mounted at the end of the manipulator to observe the condition of the sampling point, and the other is placed to the side of the robot and the person being sampled to feed back their relative position in real time; the pictures of the two cameras are fed simultaneously to the display at the operator's end;
the force feedback uses the force feedback system of the KUKA iiwa 7-degree-of-freedom medical robot: the torque at the contact between the swab and the sampling point is acquired in real time and displayed as a bar chart in the host-computer interface; when the force is too large, the bar shows a high value in red; this display is likewise transmitted to the operator through the display, and the operator adjusts the operation according to the force feedback information, safeguarding the comfort of the person being sampled.
CN202110287765.8A 2021-03-17 2021-03-17 Human-machine cooperative interaction control method and system for a nasopharyngeal swab sampling robot Active CN113133787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110287765.8A CN113133787B (en) 2021-03-17 2021-03-17 Human-machine cooperative interaction control method and system for a nasopharyngeal swab sampling robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110287765.8A CN113133787B (en) 2021-03-17 2021-03-17 Human-machine cooperative interaction control method and system for a nasopharyngeal swab sampling robot

Publications (2)

Publication Number Publication Date
CN113133787A true CN113133787A (en) 2021-07-20
CN113133787B CN113133787B (en) 2022-03-22

Family

ID=76811316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110287765.8A Active CN113133787B (en) 2021-03-17 2021-03-17 Robot-machine cooperative interaction control method and system for nasopharynx swab sampling robot

Country Status (1)

Country Link
CN (1) CN113133787B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113842171A (en) * 2021-09-29 2021-12-28 北京清智图灵科技有限公司 Effectiveness judgment device for throat swab machine sampling
CN114505839A (en) * 2022-02-25 2022-05-17 南京航空航天大学 Master-slave robot system for nucleic acid sampling
CN114533137A (en) * 2022-04-27 2022-05-27 建德市疾病预防控制中心(建德市健康教育所) Medical multipurpose sampling swab, sampler, sampling system and control method
CN114926772A (en) * 2022-07-14 2022-08-19 河南科技学院 Method for tracking and predicting trajectory of throat swab head
CN116038726A (en) * 2022-12-28 2023-05-02 深圳市人工智能与机器人研究院 Nucleic acid sampling human-computer interaction device, method and robot based on visual and auditory sense

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170312746A1 (en) * 2011-09-25 2017-11-02 Theranos, Inc. Systems and methods for fluid handling
CN109171720A (en) * 2018-09-20 2019-01-11 中国科学院合肥物质科学研究院 A kind of myoelectricity inertial signal and video information synchronous acquisition device and method
US10451613B1 (en) * 2018-11-09 2019-10-22 Dnt Scientific Research, Llc Rapid diagnostic test device and sampling method using driven flow technology
CN111568558A (en) * 2020-04-13 2020-08-25 上海市胸科医院 Electronic device, surgical robot system, and control method thereof
CN111820955A (en) * 2020-07-27 2020-10-27 南方科技大学 Portable pharyngeal swab collection intelligent device
CN111839599A (en) * 2020-07-17 2020-10-30 清华大学 High-freedom-degree flexible throat swab clamping and sampling robot
CN111839600A (en) * 2020-07-24 2020-10-30 孙喜琢 Full-automatic nasopharynx swab collecting method and device
CN111906784A (en) * 2020-07-23 2020-11-10 湖南爱米家智能科技有限公司 Pharyngeal swab double-arm sampling robot based on machine vision guidance and sampling method
CN111975799A (en) * 2020-08-26 2020-11-24 中国科学院沈阳自动化研究所 Nasal oropharynx swab sampling robot
CN112057114A (en) * 2020-08-16 2020-12-11 南京理工大学 Throat swab specimen sampling robot
CN112206008A (en) * 2020-10-10 2021-01-12 唐绍辉 Non-contact nasopharynx inspection robot

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170312746A1 (en) * 2011-09-25 2017-11-02 Theranos, Inc. Systems and methods for fluid handling
US20200230596A1 (en) * 2011-09-25 2020-07-23 Theranos Ip Company, Llc Systems and methods for fluid handling
CN109171720A (en) * 2018-09-20 2019-01-11 中国科学院合肥物质科学研究院 A kind of myoelectricity inertial signal and video information synchronous acquisition device and method
US10451613B1 (en) * 2018-11-09 2019-10-22 Dnt Scientific Research, Llc Rapid diagnostic test device and sampling method using driven flow technology
CN111568558A (en) * 2020-04-13 2020-08-25 上海市胸科医院 Electronic device, surgical robot system, and control method thereof
CN111839599A (en) * 2020-07-17 2020-10-30 清华大学 High-freedom-degree flexible throat swab clamping and sampling robot
CN111906784A (en) * 2020-07-23 2020-11-10 湖南爱米家智能科技有限公司 Pharyngeal swab double-arm sampling robot based on machine vision guidance and sampling method
CN111839600A (en) * 2020-07-24 2020-10-30 孙喜琢 Full-automatic nasopharynx swab collecting method and device
CN111820955A (en) * 2020-07-27 2020-10-27 南方科技大学 Portable pharyngeal swab collection intelligent device
CN112057114A (en) * 2020-08-16 2020-12-11 南京理工大学 Throat swab specimen sampling robot
CN111975799A (en) * 2020-08-26 2020-11-24 中国科学院沈阳自动化研究所 Nasal oropharynx swab sampling robot
CN112206008A (en) * 2020-10-10 2021-01-12 唐绍辉 Non-contact nasopharynx inspection robot

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113842171A (en) * 2021-09-29 2021-12-28 北京清智图灵科技有限公司 Effectiveness judgment device for throat swab machine sampling
CN113842171B (en) * 2021-09-29 2024-03-01 北京清智图灵科技有限公司 Validity judging device for throat swab machine sampling
CN114505839A (en) * 2022-02-25 2022-05-17 南京航空航天大学 Master-slave robot system for nucleic acid sampling
CN114505839B (en) * 2022-02-25 2023-09-26 南京航空航天大学 Master-slave robot system for nucleic acid sampling
CN114533137A (en) * 2022-04-27 2022-05-27 建德市疾病预防控制中心(建德市健康教育所) Medical multipurpose sampling swab, sampler, sampling system and control method
CN114926772A (en) * 2022-07-14 2022-08-19 河南科技学院 Method for tracking and predicting trajectory of throat swab head
CN114926772B (en) * 2022-07-14 2022-10-21 河南科技学院 Method for tracking and predicting trajectory of throat swab head
CN116038726A (en) * 2022-12-28 2023-05-02 深圳市人工智能与机器人研究院 Nucleic acid sampling human-computer interaction device, method and robot based on visual and auditory sense
CN116038726B (en) * 2022-12-28 2024-02-20 深圳市人工智能与机器人研究院 Nucleic acid sampling human-computer interaction device, method and robot based on visual and auditory sense

Also Published As

Publication number Publication date
CN113133787B (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN113133787B (en) Robot-machine cooperative interaction control method and system for nasopharynx swab sampling robot
CN107378944B (en) Multidimensional surface electromyographic signal artificial hand control method based on principal component analysis method
WO2020221311A1 (en) Wearable device-based mobile robot control system and control method
WO2023056670A1 (en) Mechanical arm autonomous mobile grabbing method under complex illumination conditions based on visual-tactile fusion
Kang et al. Toward automatic robot instruction from perception-temporal segmentation of tasks from human hand motion
CN104881118B (en) Wearable system for capturing human upper limb motion information
CN109955254A (en) Mobile robot control system and teleoperation control method for robot end pose
WO2015153739A1 (en) Systems and methods for planning a robot grasp based upon a demonstrated grasp
JP2011110620A (en) Method of controlling action of robot, and robot system
CN103112007A (en) Human-machine interaction method based on mixing sensor
CN106909216A (en) Humanoid manipulator control method based on a Kinect sensor
CN107943283A (en) Mechanical arm pose control system based on gesture recognition
CN105446485B (en) Human hand motion function capture system and method based on a data glove and a position tracker
CN113126763A (en) Myasthenia finger function rehabilitation training system based on multi-sensor data gloves
CN115761787A (en) Hand gesture measuring method with fusion constraints
CN113829357B (en) Remote operation method, device, system and medium for robot arm
CN109801709A (en) A kind of system of hand gestures capture and health status perception for virtual environment
CN115723152B (en) Intelligent nursing robot
CN115635482B (en) Vision-based robot-to-person body transfer method, device, medium and terminal
CN116749168A (en) Rehabilitation track acquisition method based on gesture teaching
CN106527720A (en) Immersive interaction control method and system
CN212352006U (en) Teaching glove and teaching system of two-finger grabbing robot
Chu et al. Hands-free assistive manipulator using augmented reality and tongue drive system
CN108682450A (en) Online finger motion function evaluating system
CN115446835A (en) Rigid-soft humanoid-hand autonomous grabbing method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant